Article

Title

Artificial Intelligence as a Moral Agent: Regulatory Implications and a Relational–Contextual Extension of Moor’s Classification

Authors

[1] Doctoral School, Uniwersytet Komisji Edukacji Narodowej, Poland

Year of publication

2025

Published in

Security and Defence Quarterly

Journal year: 2025 | Journal number: Online first

Article type

scientific article

Publication language

English

Abstract

This paper reassesses the regulatory value of James Moor’s four-level typology of machine morality, considering the European Artificial Intelligence Act (AI Act) and the forthcoming European Union (EU) liability directives. It asks whether Moor’s categories—ethical impact, implicit, explicit, and full moral agents—still capture the morally relevant properties of today’s generative, adaptive AI, and, if not, whether adding a relational–contextual dimension can better anticipate responsibility gaps. To address this gap, we introduce a novel relational–contextual dimension and a three-factor Responsibility Index (RI₃) that refines Moor’s typology by cross-classifying AI systems according to complexity, autonomy, and behavioural predictability for regulatory use. Adopting a strictly conceptual design, the study combines analytic philosophy with illustrative comparisons drawn from recent EU policy debates and high-profile incidents. It refines key terms, tests their coherence against statutory risk tiers, and distils the analysis into a three-factor matrix—complexity, autonomy, and predictability—that can be operationalised by lawmakers. The evaluation confirms that Moor’s typology remains a valuable baseline for distinguishing between passive and decision-making artefacts. Nevertheless, it also highlights how moral accountability is distributed within socio-technical networks. The proposed relational–contextual dimension, in conjunction with the regulatory matrix, aligns more closely with the AI Act’s risk logic and highlights scenarios in which moral agency is effectively delegated to the system. Moor’s framework should be retained but augmented: only by integrating relational criteria can legislators close emerging accountability gaps surrounding large-scale, autonomous AI. The matrix offers a pragmatic tool for aligning philosophical insight with concrete legal duties.
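The abstract names three factors (complexity, autonomy, and behavioural predictability) that the Responsibility Index cross-classifies, but it does not publish an aggregation formula. Purely as an illustration of that cross-classification idea, the Python sketch below scores each factor on a [0, 1] scale and maps the result onto coarse tiers: the scale, the averaging rule, the cut-off values, and all names are assumptions for illustration, not the authors' definitions.

```python
from dataclasses import dataclass


@dataclass
class AISystemProfile:
    """Hypothetical profile along the abstract's three factors.

    All scores are assumed to lie in [0, 1]; the paper does not
    prescribe a measurement scale.
    """
    complexity: float      # structural/behavioural complexity
    autonomy: float        # degree of independent decision-making
    predictability: float  # how reliably behaviour can be anticipated


def responsibility_index(p: AISystemProfile) -> float:
    """Illustrative RI3-style score.

    Higher complexity and autonomy, and lower predictability, suggest
    a wider responsibility gap. The simple average used here is an
    assumed aggregation rule, not the paper's definition.
    """
    return (p.complexity + p.autonomy + (1.0 - p.predictability)) / 3.0


def risk_tier(score: float) -> str:
    """Map the score onto coarse tiers loosely echoing the AI Act's
    risk logic; the cut-offs are invented for illustration."""
    if score < 0.33:
        return "minimal concern"
    if score < 0.66:
        return "heightened oversight"
    return "effective delegation of moral agency"


# Example: a generative, adaptive system scoring high on complexity
# and autonomy but low on predictability.
chatbot = AISystemProfile(complexity=0.9, autonomy=0.8, predictability=0.3)
score = responsibility_index(chatbot)
print(f"RI3 = {score:.2f} -> tier: {risk_tier(score)}")
```

Run as written, the example yields RI₃ = 0.80 and lands in the highest tier, matching the abstract's point that highly autonomous, weakly predictable systems are the scenarios in which moral agency is effectively delegated to the system.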

Date of online publication

31.12.2025

DOI

10.35467/sdq/213917

URL

https://securityanddefence.pl/Artificial-Intelligence-as-a-Moral-Agent-Regulatory-Implications-and-a-Relational,213917,0,2.html

License type

CC BY (attribution alone)

Open Access Mode

open journal

Open Access Text Version

final published version

Release date

31.12.2025

Date of Open Access to the publication

at the time of publication

Access level to full text

public

Ministry points / journal

70