A Quadruple Ethics in a Society Including Robots
TRANSCEND MEMBERS, 17 Mar 2025
Prof. Antonino Drago – TRANSCEND Media Service
15 Mar 2025 – Some decades ago Elio Sgreccia identified four models of ethics as those actually followed by people in the world (Sgreccia 1988). These models may be traced back to two dichotomies: one on the notion of Infinity (either a Potentially achievable aim, PI, or an Actually infinite, mythical aim, AI), and the other on the notion of Organization (either deductively drawing all results from a few ethical Axioms, OA, or aimed at solving a basic Problem, PO). The four possible pairs of choices (OA&PI, OA&AI, PO&PI, PO&AI) fashion four models of ethics which are mutually incompatible, owing to the radical difference between the two choices of each dichotomy. This means an irreducible pluralism of ethics (Drago 2000).
Concerning human interaction with robots, one first has to choose which kind of theory one wants to build. No axiomatic theory is suitable for this purpose. Rather, the desired theory has to be based on a basic problem, namely the one we recognize at first glance: does the autonomous behavior of a robot lead to conduct that human beings may accept? A theory based on a problem is governed not by deductive but by inductive reasoning; this means intuitionist logic, where the law of double negation fails, so that its characteristic propositions are doubly negated propositions (DNPs). In fact, the celebrated methodological principle of Jonas’ new ethics, “Avoid the suicide of mankind”, is a DNP. Moreover, Asimov’s celebrated Laws governing human–robot interaction are all DNPs:
- First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
- Zeroth Law: A robot may not harm humanity or, by inaction, allow humanity to come to harm. (Asimov 1950)
Hence, a problem-based theory of this interaction has to identify a large set of DNPs and to reason through the kind of argument typical of an inductive theory, i.e. the ad absurdum argument. This task is too ambitious at the present stage of research. Rather, let us proceed by suitably generalizing to robots some other DNPs which represent a millennia-old ethical wisdom and which for this reason have been adopted by all national legislations: the four essentially social commandments.
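To fix ideas on this logical point, here is a minimal sketch in Lean 4 (an assumed formalization, not the author’s; the theorem names are ours, chosen for illustration): constructively one may always introduce a double negation, and an ad absurdum argument that refutes ¬P delivers only the doubly negated ¬¬P; discharging that double negation to recover P itself goes through only by invoking a classical axiom, which is precisely the step intuitionist logic withholds.

```lean
-- A minimal sketch (Lean 4) of the asymmetry of double negation
-- in intuitionist (constructive) logic.

-- Introducing a double negation is constructively valid: P → ¬¬P.
theorem dn_intro (P : Prop) : P → ¬¬P :=
  fun hp hnp => hnp hp

-- An ad absurdum argument: refuting ¬P establishes ¬¬P, not P.
-- (¬¬P is by definition ¬P → False, so the refutation is the proof.)
theorem ad_absurdum (P : Prop) (h : ¬P → False) : ¬¬P := h

-- Eliminating a double negation is not constructively derivable;
-- here it goes through only via a classical axiom.
theorem dn_elim_classical (P : Prop) : ¬¬P → P :=
  fun hnn => Classical.byContradiction hnn
```

The last theorem marks the boundary: a theory that reasons through DNPs and ad absurdum arguments never obtains the affirmative proposition P from ¬¬P for free.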
Surely, “Thou shalt not kill” has to be included in this new ethics, in order to avoid the multiplication, through robots, of the catastrophes of War. “Thou shalt not steal” (including its internalized version, the Christian 10th commandment) may be understood as forbidding robots’ social performances from replacing humans in their natural places within society; it aims to prevent humans from falling into Misery. “Thou shalt not bear false witness” may be understood as forbidding robot programs whose full potentialities cannot be inspected without cumbersome or even misleading procedures; it aims to prevent humans from falling into Servitude to unknowable robots. “Thou shalt not commit adultery [= betrayal]” (including its internalized version, the Christian 9th commandment) may be understood as forbidding a robot to betray humans; it aims to prevent any Sedition or Revolution by robots. Notice that these four social scourges are those that, according to Lanza del Vasto (1959, §1), are made by human hand; moreover, they too can be read as expressions of the two basic dichotomies, i.e. they correspond, respectively, to the four pairs of choices introduced above: War, OA&AI; Misery, PO&AI; Servitude, OA&PI; Sedition or Revolution, PO&PI.
Notice that the 1st extended commandment is a task of the human species toward itself; the 2nd extended commandment is a task of national law, which has to limit robots’ performances within society; the 3rd extended commandment is instead a task of international law, which has to constrain how a single robot may be built; formulating the 4th extended commandment requires a comprehension of the entire human–robot relationship. For this last one, fortunately, Asimov has already suggested the four methodological principles (DNPs) listed above.
Bibliography:
Asimov I. (1950), I, Robot, New York: Doubleday. https://archive.org/details/i-robot-isaac-asimov/page/28
Drago A. (2000), “Etica e scienza: loro fondazione comune secondo una visione pluralista”, in L. Chieffi (ed.), Bioetica, Torino: Paravia Scriptorium, pp. 303-331.
Lanza del Vasto (1959), Les Quatre Fléaux, Paris: Denoël, p. 11 (Engl. transl.: Make Straight the Way to Our Lord, New York: Knopf, p. 185).
Sgreccia E. (1988), Manuale di Bioetica, Milano: Vita e Pensiero, pp. 74-90 (Engl. transl.: Sgreccia E. (2012), Personalist Bioethics: Foundations and Applications, Broomall PA: National Catholic Bioethics Center, pp. 43-53).
______________________________________
Prof. Antonino Drago: University “Federico II” of Naples, Italy, and a member of the TRANSCEND Network. An ally of the Ark Community, he teaches at the TRANSCEND Peace University-TPU. He holds a master’s degree in physics (University of Pisa, 1961) and is a follower of the Community of the Ark of Gandhi’s Italian disciple, Lanza del Vasto, a conscientious objector, a participant in the Italian campaigns for conscientious objection (1964-1972) and in the campaign for refusing to pay taxes that finance military expenditure (1983-2000). Owing to his long experience in these activities and his writings on these subjects, he was asked by the University of Pisa to teach Nonviolent Popular Defense in the curriculum of “Science for Peace” (from 2001 to 2012), as well as Peacebuilding and Peacekeeping (2009-2013), and then by the University of Florence to teach History and Techniques of Nonviolence in the curriculum of “Operations of Peace” (2004-2010). Drago was the first president of the Italian Ministerial Committee for Promoting Unarmed and Nonviolent Civil Defense (2004-2005). drago@unina.it
Tags: Artificial Intelligence AI, Communication, Robots
This article originally appeared on Transcend Media Service (TMS) on 17 Mar 2025.
Anticopyright: Editorials and articles originated on TMS may be freely reprinted, disseminated, translated and used as background material, provided an acknowledgement and link to the source, TMS: A Quadruple Ethics in a Society Including Robots, is included. Thank you.
This work is licensed under a CC BY-NC 4.0 License.