Artificial Intelligence and Existential Risk: Deontological and Consequentialist Approach

Budić, Marina and Galjak, Marko (2024) Artificial Intelligence and Existential Risk: Deontological and Consequentialist Approach. In: International Conference: Existential Threats and Other Disasters: How Should We Address Them? Center for the Study of Bioethics, The Hastings Center, and The Oxford Uehiro Centre for Practical Ethics, Belgrade, p. 18.

Disasters CBS M34 Budic Galjak.pdf - Published Version
Available under License Creative Commons Attribution Non-commercial No Derivatives.


Abstract

As we edge closer to the development of artificial general intelligence (AGI), the prospect of a superintelligent entity that surpasses human intelligence becomes real. While AGI promises innovation and progress, it also introduces existential risk. The argument that AGI constitutes an existential risk to the human species rests on two premises: (1) the Singularity claim, which holds that AI may reach superintelligent levels, at which point humans lose control; and (2) the Orthogonality thesis, which holds that any level of intelligence is compatible with any goal, i.e., intelligence does not necessarily correlate with benevolent goals. This presents a quandary: how should humanity proceed if an AGI’s goals are not aligned with ours? AGI’s potential necessitates ethical consideration. We explore how deontological and consequentialist ethical theories can serve as prisms through which to view the existential risk of AI, and we examine how each approach answers the question of whether we should take that risk. This presentation delves into the ethical challenges AGI poses, exploring value alignment, control problems, and societal impacts. Drawing on insights from AI ethics and normative ethics, we evaluate specific responses to the risk at hand: the alignment problem; the morality of “boxing” solutions, questioning the constraints placed on an AI and its entitlement to rights; and governance structures, including possible regulatory interventions, stringent oversight mechanisms to curb unchecked AGI evolution, and total surveillance to prevent further AGI development. We answer these questions from the perspectives of deontological and consequentialist approaches to AGI and superintelligence, emphasizing the depth and complexity of the ethical considerations in this field.

Item Type: Book Section
Institutional centre: Centre for demographic research
Depositing User: D. Arsenijević
Date Deposited: 24 Sep 2025 11:26
Last Modified: 29 Sep 2025 07:49
URI: http://iriss.idn.org.rs/id/eprint/2789
