Ethics in AI Lunchtime Research Seminar (Wednesday - Week 4, MT22)

2nd November: talk by Jeff Howard

This paper sets out a philosophical framework for governing harmful speech on social media. It argues that platforms have an enforceable moral duty to combat various forms of harmful speech through their content moderation systems. It pinpoints several underlying duties that together determine the content and stringency of this responsibility. It then confronts the objection that it is morally impermissible to use automated systems to moderate harmful content, given the propensity of AI to generate false positives and false negatives. After explaining why this objection is not decisive, the paper concludes by sketching some implications for legal regulation.

Hosted by Dr Charlotte Unruh

Note: The format for the research seminars is hybrid, and registration is required whether attending in person or online. Joining instructions will follow once you have registered via the form below.


2nd November registration:

https://forms.office.com/Pages/ResponsePage.aspx?id=G96VzPWXk0-0uv5ouFLPkUbXexlJuMhCiksodiLwh4ZUOExBUlRNNFoxUVAwTkZZSlkxNjE3MTVMOC4u

The Institute for Ethics in AI brings together world-leading philosophers and other experts in the humanities with the technical developers and users of AI in academia, business and government. The ethics and governance of AI is an exceptionally vibrant area of research at Oxford and the Institute is an opportunity to take a bold leap forward from this platform.


Convenor: Dr Linda Eggert