Does AI Threaten Human Autonomy?
This event is also part of the Humanities Cultural Programme, one of the founding stones for the
future Stephen A. Schwarzman Centre for the Humanities.
Live Event: Thursday 26th November 5pm-6.30pm
- How can AI systems influence our decision-making in ways that undermine autonomy? Do they do so in new or more problematic ways?
- To what extent can we outsource tasks to AI systems without losing our autonomy?
- Do we need a new conception of autonomy that incorporates considerations of the digital self?
Autonomy is a core value in contemporary Western societies – it is a value that is invoked across a range of debates in practical ethics, and it lies at the heart of liberal democratic theory. It is therefore no surprise that AI policy documents frequently champion the importance of ensuring the protection of human autonomy. At first glance, this sort of protection may appear unnecessary – after all, in some ways, it seems that AI systems can serve to significantly enhance our autonomy. They can give us more information upon which to base our choices, and they may allow us to achieve many of our goals more effectively and efficiently. However, it is becoming increasingly clear that AI systems do pose a number of threats to our autonomy. One (but not the only) example is the fact that they enable the pervasive and covert use of manipulative and deceptive techniques that aim to target and exploit well-documented vulnerabilities in our decision-making.

This raises the question of whether it is possible to harness the considerable power of AI to improve our lives in a manner that is compatible with respect for autonomy, and whether we need to reconceptualize both the nature and value of autonomy in the digital age. In this session, Carina Prunkl, Jessica Morley and Jonathan Pugh engage with these general questions, using the example of mHealth tools as an illuminating case study for a debate about the various ways in which an AI system can both enhance and hinder our autonomy.
Dr Carina Prunkl is a Research Fellow at the Institute for Ethics in AI, University of Oxford, where she is a member of the inaugural team, and a Research Affiliate at the Centre for the Governance of AI, Future of Humanity Institute. Carina works on the ethics and governance of AI, with a particular focus on autonomy, and has both publicly advocated for and published on the importance of accountability mechanisms for AI.
Jessica Morley is Policy Lead at Oxford’s DataLab, where she leads its engagement work to encourage the use of modern computational analytics in the NHS and to ensure public trust in health data records, notably those developed in response to the COVID-19 pandemic. Jess is also pursuing a related doctorate at the Oxford Internet Institute’s Digital Ethics Lab. As Technical Advisor to the Department of Health and Social Care, she co-authored the NHS Code of Conduct for data-driven technologies.
Dr Jonathan Pugh is a Senior Research Fellow at the Oxford Uehiro Centre for Practical Ethics, University of Oxford, researching how far AI ethics should incorporate traditional conceptions of autonomy and “moral status”. He recently led a three-year project on the ethics of experimental Deep Brain Stimulation and “neuro-hacking”, and in 2020 published Autonomy, Rationality and Contemporary Bioethics (OUP). He has written on a wide range of ethical topics, with a particular interest in issues concerning personal autonomy and informed consent.
Professor Peter Millican is Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford. He has researched and published across a wide range of areas, including Early Modern Philosophy, Epistemology, Ethics, and the Philosophy of Language and of Religion, with a particular focus on interdisciplinary connections with Computing and AI. He founded and oversees the Oxford undergraduate degree in Computer Science and Philosophy, which has been running since 2012, and last year he instituted this ongoing series of Ethics in AI Seminars.