
Pusan National University Study Reveals Shared Responsibility of Humans and AI in AI-Caused Harm
Researchers highlight the need for a distributed model of AI responsibility, in which duties are shared among humans and AI systems
BUSAN, South Korea, Nov. 25, 2025 /PRNewswire/ -- Artificial intelligence (AI) is becoming an integral part of our everyday lives, and with that comes a pressing question: who should be held responsible when AI goes wrong? AI lacks consciousness and free will, which makes it difficult to blame the system itself for its mistakes. AI systems operate semi-autonomously through complex, opaque processes, so even though they are developed and used by human stakeholders, those stakeholders cannot predict the harm the systems may cause. Traditional ethical frameworks thus fail to explain who is responsible for these harms, leading to the so-called responsibility gap in AI ethics.
A recent study by Dr. Hyungrae Noh, an Assistant Professor of Philosophy at Pusan National University, Republic of Korea, elucidates the philosophical and empirical issues surrounding moral responsibility in the context of AI systems. The study critiques traditional moral frameworks centered on human psychological capacities, such as intention and free will, arguing that they make it practically impossible to ascribe responsibility to either AI systems or human stakeholders. The findings were published in the journal Topoi on November 6, 2025.
"With AI-technologies becoming deeply integrated in our lives, the instances of AI-mediated harm are bound to increase. So, it is crucial to understand who is morally responsible for the unforeseeable harms caused by AI," says Dr. Noh.
Under traditional ethical frameworks, AI systems cannot be blamed for harm. These frameworks typically require an agent to possess certain mental capacities to be held morally responsible. AI systems lack conscious understanding, that is, the capacity to grasp the moral significance of their actions. They have no subjective experiences and therefore lack phenomenal consciousness. Nor do they have full control over their behavior and decisions, and they lack intention, the capacity for deliberate decision-making that underlies action. Finally, they are often unable to answer for or explain their actions. Because of these gaps, it is not appropriate to hold the systems themselves responsible.
The study also examines Luciano Floridi's non-anthropocentric theory of agency and responsibility in the domain of AI, which other researchers in the field have also endorsed. This theory replaces traditional ethical frameworks with the idea of censorship, according to which human stakeholders have a duty to prevent AI from causing harm by monitoring and modifying the systems, and by disconnecting or deleting them when necessary. The same duty extends to AI systems themselves if they possess a sufficient level of autonomy.
"Instead of insisting traditional ethical frameworks in contexts of AI, it is important to acknowledge the idea of distributed responsibility. This implies a shared duty of both human stakeholders—including programmers, users, and developers—and AI agents themselves to address AI-mediated harms, even when the harm was not anticipated or intended. This will help to promptly rectify errors and prevent their recurrence, thereby reinforcing ethical practices in both the design and use of AI systems," concludes Dr. Noh.
Reference
Noh, H. (2025). Beyond the Responsibility Gap: Distributed Non-anthropocentric Responsibility in the AI Era. Topoi. https://doi.org/10.1007/s11245-025-10302-4
Lab: https://sites.google.com/view/hyungraenoh
ORCID id: 0000-0001-9503-6222
Media Contact:
Goon-Soo Kim
82 51 510 7928
[email protected]
SOURCE Pusan National University