Ethics in AI Research Seminar (Wednesday - Week 4, TT22)

Tom Simpson

There is a compelling argument, broadly realist in flavour, for developing automated weapons systems (AWS): if these systems are as powerful as they promise to be, a nation that chose not to develop them would incur an unacceptable strategic vulnerability against adversaries that did. Further, specific features of AI technology, in particular its low ‘footprint’ in contrast with nuclear weapons, make it difficult, if not impossible, to verify publicly any commitment by adversaries to forgo developing AWS. Nevertheless, resistance to AWS remains widespread, on a variety of grounds. Can these positions be reconciled through a ‘no-first-use’ policy? This talk explores that question.

Registration details available here.


Ethics in AI Research Seminar Convenors: John Tasioulas and Ted Lechterman | Any queries should be directed to aiethics@philosophy.ox.ac.uk