DPhil Seminar (Friday - Week 4, MT23)


Chair: Katherine Hong

We should not deploy autonomous weapons systems. We should not try to program ethics into self-driving cars. We should not replace judges with computer systems. Arguments of this sort—that is, arguments against the use of AI systems in particular decision contexts—often point to the same reason: AI systems should not be deployed in such situations because AI systems are not moral agents. But it’s not always clear what the term “moral agent” means—and whether it means the same thing in different contexts.

This paper proposes a framework for moral agency organized around the two primary roles of moral agency ascriptions: (1) identifying appropriate subjects of deontic evaluations and (2) identifying appropriate subjects of moral responsibility. Corresponding to these roles, I distinguish two types of moral agents. Briefly, deontic moral agents are genuine sources of moral actions, while responsible moral agents are capable of bearing moral responsibility for their actions. After defending this distinction and applying it to existing marginal cases of moral agency, I consider the relevant criteria for each type of moral agent. I then revisit the relationship between wrongness and responsibility. Finally, I consider implications for artificial moral agency.

See the DPhil Seminar website for details.


DPhil Seminar Convenors: Lewis Williams and Kyle van Oosterum