2025 - present | Associate Professor and Tutorial Fellow in Philosophy, Jesus College, University of Oxford
2023 - 2025 | Lecturer (Assistant Professor) in Philosophy of AI, Macquarie University
2020 - 2023 | Presidential Scholar, Columbia University
2015 - 2020 | DPhil in Philosophy, University of Oxford
Millière, R. (forthcoming). Language Models as Models of Language. In R. Nefdt, G. Dupre, & K. Stanton (Eds.), The Oxford Handbook of the Philosophy of Linguistics. Oxford University Press.

Musker, S., Duchnowski, A., Millière, R., & Pavlick, E. (forthcoming). LLMs as Models for Analogical Reasoning. Journal of Memory and Language.

Millière, R. (forthcoming). Constitutive Self-Consciousness. Australasian Journal of Philosophy.

Millière, R., & Buckner, C. (forthcoming). Interventionist Methods for Interpreting Deep Neural Networks. In G. Piccinini (Ed.), Neurocognitive Foundations of Mind. Oxford University Press.

Millière, R. (2025). Normative Conflicts and Shallow AI Alignment. Philosophical Studies, 1–44.

Wu, Y., Geiger, A., & Millière, R. (2025). How Do Transformers Learn Variable Binding in Symbolic Programs? Forty-Second International Conference on Machine Learning.

Millière, R. (2024). Philosophy of Cognitive Science in the Age of Deep Learning. WIREs Cognitive Science.
My research focuses primarily on the philosophy of artificial intelligence, cognitive science, and mind. Much of my work addresses fundamental questions about the capacities, interpretability, and safety of contemporary AI systems. I am particularly interested both in first-order questions about whether these systems exhibit specific cognitive capacities, such as syntactic competence or analogical reasoning, and in second-order methodological questions about how such capacities can be evaluated in artificial systems. As an AI2050 Fellow funded by Schmidt Sciences, I also work on the foundations of mechanistic interpretability, the project of reverse-engineering the representations and computations induced by neural networks. Finally, my work on AI safety investigates why current alignment methods remain vulnerable to adversarial attacks, and what can be done to mitigate this vulnerability.

My other principal area of research concerns consciousness and self-consciousness. In this area, I have developed a pluralist account of self-representation and, drawing on evidence from altered states of consciousness, argued against the view that self-consciousness is a necessary feature of all conscious experience.