School of Philosophy

Toni Erskine (ANU): Flesh-and-Blood, Corporate, Robotic? Moral Agents of Restraint and the Problem of Misplaced Responsibility in War

Who – or what – are ‘moral agents of restraint’ in war?

This is a critical moment for such an enquiry. Two different movements – one in the academic sphere, another in the realm of practice – contribute to its timeliness. First, within the just war tradition, a recent rift between ‘traditionalists’ and ‘revisionists’ might be viewed, from one angle, as a debate over what form relevant moral agents, or bearers of duties, can take with respect to particular prescriptions for restraint. Second, as sophisticated weapons systems slouch towards autonomy, the question of where exactly moral responsibility lies for specific acts and omissions involving sophisticated forms of artificial intelligence is posed with increasing frequency and urgency. Identifying the different types of moral agent involved in the practice of war, and what unites and distinguishes them, is necessary for understanding – and perhaps correcting – the responsibility judgements that are made in relation to them.

In this paper, I explore three potential categories of moral agent of restraint in war: (1) individual human beings, generally considered to be ‘paradigmatic’ moral agents; (2) corporate entities, or what I call ‘institutional moral agents’; and (3) intelligent artefacts in the form of sophisticated computers, robots, and other machines, which I will tentatively label ‘simulated moral agents’. Motivating this analysis is the simple principle that any compelling attribution of moral responsibility must be informed by the specific capacities and limitations of the entity towards which it is directed. I argue that understanding the general features that define different manifestations of moral agency is a crucial step in this endeavour. As a corollary to this point, I suggest that we risk misplacing responsibility – often to calamitous effect – when these defining features are ignored or misunderstood.

Date & time

  • Mon 04 Mar 2019, 12:30 pm - 2:00 pm

Location

Coombs Ext Rm 1.04

Speakers

  • Toni Erskine (ANU)

Event Series

MSPT seminars

Contact

  • School of Philosophy