Robot Existentialism: Artificial Intelligence and the Limits of Rationality
FAIN: DOC-293714-23
University of Puget Sound (Tacoma, WA 98416-5000)
Ariela Tubert (Project Director: February 2023 to present)
Justin Tiehen (Co-Project Director: June 2023 to present)
Research and writing a co-authored book on existential philosophy and artificial intelligence.
Our proposed project is to complete a monograph on philosophical issues connected to existentialism and artificial intelligence titled Robot Existentialism: Artificial Intelligence and the Limits of Rationality. We argue that a full understanding of human agency requires a recognition of the limits of rationality, together with an emphasis on the value of creation, especially self-creation. The book engages with philosophical work on personal identity, the philosophy of mind, practical reason, and ethics, as well as work in artificial intelligence and allied empirical fields, to develop a unified view of a distinctive aspect of agency that is currently lacking in artificial beings. The expected final outcome of this collaborative team project will be the complete, publication-ready manuscript of the book, in addition to two pieces of public philosophy and presentations drawing on ideas in the book.
Associated Products
Value alignment, human enhancement, and moral revolutions (Article)
Title: Value alignment, human enhancement, and moral revolutions
Author: Ariela Tubert
Author: Justin Tiehen
Abstract: Human beings are internally inconsistent in various ways. One way to develop this thought involves using the language of value alignment: the values we hold are not always aligned with our behavior and are not always aligned with each other. Because of this self-misalignment, there is room for potential projects of human enhancement that involve achieving a greater degree of value alignment than we presently have. Relatedly, discussions of AI ethics sometimes focus on what is known as the value alignment problem, the challenge of how to build AI that acts in accordance with our human values. We argue that there is an especially close connection between solving the value alignment problem in AI ethics and using AI to pursue certain forms of human enhancement. But in addition, we also argue that there are important limits to what kinds of human enhancement can be pursued in this way, because some forms of human enhancement—namely moral revolutions—involve a kind of value misalignment rather than alignment.
Year: 2023
Primary URL:
https://www.tandfonline.com/doi/pdf/10.1080/0020174X.2023.2261506
Primary URL Description: Journal website
Access Model: subscription
Format: Journal
Periodical Title: Inquiry: An Interdisciplinary Journal of Philosophy
Publisher: Taylor & Francis
Existentialist risk and value misalignment (Article)
Title: Existentialist risk and value misalignment
Author: Ariela Tubert
Author: Justin Tiehen
Abstract: We argue that two long-term goals of AI research stand in tension with one another. The first involves creating AI that is safe, where this is understood as solving the problem of value alignment. The second involves creating artificial general intelligence, meaning AI that operates at or beyond human capacity across all or many intellectual domains. Our argument focuses on the human capacity to make what we call “existential choices”, choices that transform who we are as persons, including transforming what we most deeply value or desire. It is a capacity for a kind of value misalignment, in that the values held prior to making such choices can be significantly different from (misaligned with) the values held after making them. Because of the connection to existentialist philosophers who highlight these choices, we call the resulting form of risk “existentialist risk.” It is, roughly, the risk that results from AI taking an active role in authoring its own values rather than passively going along with the values given to it. On our view, human-like intelligence requires a human-like capacity for value misalignment, which is in tension with the possibility of guaranteeing value alignment between AI and humans.
Year: 2024
Primary URL:
https://link.springer.com/article/10.1007/s11098-024-02142-6
Primary URL Description: Journal website
Access Model: subscription
Format: Journal
Periodical Title: Philosophical Studies
Publisher: Springer
"Existentialist Risk" episode of the Ethical Machines Podcast (Radio/Audio Broadcast or Recording)Title: "Existentialist Risk" episode of the Ethical Machines Podcast
Abstract: Technologists are racing to create AGI, artificial general intelligence. They also say we must align the AGI's moral values with our own. But Professors Ariela Tubert and Justin Tiehen argue that's impossible. Once you create an AGI, they say, you also give it the intellectual capacity needed for freedom, including the freedom to reject the values you have given it.
Date: 06/27/2024
Primary URL:
https://www.ethicalmachinespodcast.com/episodes/existentialist-risk
Primary URL Description: Episode page on the podcast website
Secondary URL:
https://podcasts.apple.com/us/podcast/existentialist-risk/id1751550186?i=1000660417296
Secondary URL Description: Link to the episode on Apple Podcasts
Access Model: subscription
Format: Web
Format: Other