Over the last decade, concerns about the power and danger of Artificial Intelligence have moved from the fantasy of ‘Terminator’ to reality, and anxieties about killer robots have been joined by many others that are more immediate. Robotic systems threaten a massive disruption of employment and transport, while algorithms fuelled by machine learning on (potentially biased) ‘big data’ increasingly play a role in life-changing decisions, whether financial, legal, or medical. More subtly, AI combines with social media to give huge potential for the manipulation of opinion and behaviour, whether to sell a product, influence financial markets, provoke divisive factionalism, or fix an election. All of this has raised huge ethical questions, some fairly familiar (e.g. concerning privacy, information security, appropriate rules of automated behaviour) but many quite new (e.g. concerning algorithmic bias, transparency, and wider impacts).
Oxford has a wealth of researchers in relevant fields, scattered through numerous University departments – including Philosophy, Computer Science, Engineering, Social Science, and Medicine – and also a wide range of specialist ‘centres’ and ‘institutes’. But hitherto, this rich variety of researchers has tended to lack any integrating focus, with those in one part of the University sometimes unaware of those elsewhere, even while working in closely cognate areas. It is against this background that Oxford created the Institute for AI Ethics, to promote broad conversation between relevant researchers and students across the entire University, and thus to generate a coherent powerhouse of AI Ethics that will be more than the sum of its (already impressive) parts. Seminars are the first formal activities of this new initiative, but we envisage them as an ongoing part of it, inspiring and nurturing interdisciplinary discussion and collaboration into the future.