Who – or what – are ‘moral agents of restraint’ in war?
This is a critical moment for such an enquiry. Two different movements – one in the academic sphere, another in the realm of practice – contribute to its timeliness. First, within the just war tradition, a recent rift between ‘traditionalists’ and ‘revisionists’ might be viewed, from one angle, as a debate over what form relevant moral agents, or bearers of duties, can take with respect to particular prescriptions for restraint. Second, as sophisticated weapons systems slouch towards autonomy, the question of where exactly moral responsibility lies for specific acts and omissions involving such systems is posed with increasing frequency and urgency. Identifying the different types of moral agent involved in the practice of war, and what unites and distinguishes them, is necessary for understanding – and perhaps correcting – the responsibility judgements that are made in relation to them.
In this paper, I explore three potential categories of moral agent of restraint in war: (1) individual human beings, generally considered to be ‘paradigmatic’ moral agents; (2) corporate entities, or what I call ‘institutional moral agents’; and (3) intelligent artefacts in the form of sophisticated computers, robots, and other machines, which I will tentatively label ‘simulated moral agents’. Motivating this analysis is the simple principle that any compelling attribution of moral responsibility must be informed by the specific capacities and limitations of the entity towards which it is directed. I argue that understanding the general features that define different manifestations of moral agency is a crucial step in this endeavour. As a corollary to this point, I suggest that we risk misplacing responsibility – often to calamitous effect – when these defining features are ignored or misunderstood.
Speakers
- Toni Erskine (ANU)
Contact
- School of Philosophy