
# How Can AI Model Norms to Govern Its Actions?


## Issue 1: Many approaches to norm implementation are currently available, and it is not yet settled which ones are most suitable.

### Background

The prospect of developing A/IS that are sensitive to human norms and factor them into morally or legally significant decisions has intrigued science fiction writers, philosophers, and computer scientists alike. Modest efforts to realize this worthy goal in limited or bounded contexts are already underway. This emerging field of research appears under many names, including machine morality, machine ethics, moral machines, value alignment, computational ethics, artificial morality, safe AI, and friendly AI.

There are a number of routes for implementing ethics in autonomous and intelligent systems. Following Wallach and Allen (2008), we might begin to categorize them as either:

A. Top-down approaches, where the system, e.g., a software agent, has some symbolic representation of its activity and so can identify specific states, plans, or actions as ethical or unethical with respect to particular ethical requirements (Dennis, Fisher, Slavkovik, and Webster 2016; Pereira and Saptawijaya 2016; Rötzer 2016; Scheutz, Malle, and Briggs 2015); or

B. Bottom-up approaches, where the system, e.g., a learning component, builds up an implicit notion of ethical behavior through experience of what is considered ethical and unethical in certain situations (Anderson and Anderson 2014; Riedl and Harrison 2016).

Relevant examples of these two are: (A) symbolic agents that have explicit representations of plans, actions, goals, etc.; and (B) machine learning systems that train subsymbolic mechanisms on acceptable ethical behavior. For more detailed discussion, see Charisi et al. (2017).
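To make the contrast concrete, the following is a minimal sketch of a top-down checker in the sense of (A), assuming a toy symbolic vocabulary: actions carry explicit effect predicates, and hand-written norms mark particular effects as impermissible. All names here (`Action`, `NORMS`, the predicates) are hypothetical illustrations, not drawn from any of the cited systems.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    effects: frozenset  # symbolic facts this action would bring about

# Hand-written norms: each maps a label to effects that must never occur.
NORMS = {
    "no-harm": frozenset({"human_harmed"}),
    "no-deception": frozenset({"user_deceived"}),
}

def violations(action: Action) -> list:
    """Return labels of every explicit norm the action's effects violate."""
    return [label for label, bad in NORMS.items() if bad & action.effects]

swerve = Action("swerve_left", frozenset({"lane_changed"}))
block = Action("block_exit", frozenset({"human_harmed"}))

print(violations(swerve))  # [] -> permissible under the explicit norms
print(violations(block))   # ['no-harm'] -> identifiable as unethical
```

A bottom-up approach in the sense of (B) would instead learn something like the `NORMS` table from labeled experience rather than hand-coding it; the hybrid sketch later in this background combines the two.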
Many of the existing experimental approaches to building moral machines are top-down, in the sense that norms, rules, principles, or procedures are used by the system to evaluate the acceptability of differing courses of action, or serve as moral standards or goals to be realized. Increasingly, however, A/IS will encounter situations that the initially programmed norms do not clearly address, requiring algorithmic procedures to select the better of two or more novel courses of action. Recent breakthroughs in machine learning and perception enable researchers to explore bottom-up approaches, in which the A/IS learn about their context and about human norms, similar to the manner in which a child slowly learns which forms of behavior are safe and acceptable. Of course, unlike current A/IS, children can feel pain and pleasure and empathize with others. Still, A/IS can learn to detect and take into account others' pain and pleasure, thus achieving at least some of the positive effects of empathy. As research on A/IS progresses, engineers will explore new ways to improve these capabilities.

Each of the first two options has obvious limitations, such as option A's inability to learn and adapt and option B's unconstrained learning behavior. A third option tries to address these limitations:

C. Hybrid approaches, combining (A) and (B).

For example, the selection of an action might be carried out by a subsymbolic system, but that action must be checked by a symbolic "gateway" agent before being invoked; a minimal sketch of such a gateway appears at the end of this background. This is a typical approach for "Ethical Governors" (Arkin 2008; Winfield, Blum, and Liu 2014) or "Guardians" (Etzioni 2016) that monitor, restrict, and even adapt certain unacceptable behaviors proposed by the system (see Issue 3). Alternatively, action selection in light of norms could be done in a verifiable logical format, while many of the norms constraining those actions are learned through bottom-up mechanisms (Arnold, Kasenberg, and Scheutz 2017).

These three architectures do not cover all possible techniques for implementing norms in A/IS. For example, some contributors to the multi-agent systems literature have integrated norms into their agent specifications (Andrighetto et al. 2013); even though these agents live in societal simulations and are too underspecified to be translated into individual A/IS such as robots, the emerging work can inform cognitive architectures of A/IS that fully integrate norms. Of course, none of these experimental systems should be deployed outside the laboratory before testing or before certain criteria are met, which we outline in the remainder of this section and in Section
1. "p.175

## Recommendation

In light of the multiple possible approaches to computationally implementing norms, diverse research efforts should be pursued, especially collaborative research between scientists from different schools of thought and different disciplines.

## Further Resources

- M. Anderson and S. L. Anderson, "GenEth: A General Ethical Dilemma Analyzer," Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, Québec City, Québec, Canada, July 27–31, 2014, pp. 253–261. Palo Alto, CA: The AAAI Press, 2014.
- G. Andrighetto, G. Governatori, P. Noriega, and L. W. N. van der Torre, eds., Normative Multi-Agent Systems. Saarbrücken/Wadern, Germany: Dagstuhl Publishing, 2013.
- R. Arkin, "Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture," Proceedings of the 2008 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI), Amsterdam, Netherlands, March 12–15, 2008, pp. 121–128. IEEE, 2008.
- T. Arnold, D. Kasenberg, and M. Scheutz, "Value Alignment or Misalignment—What Will Keep Systems Accountable?" The Workshops of the Thirty-First AAAI Conference on Artificial Intelligence: Technical Reports, WS-17-02: AI, Ethics, and Society, pp. 81–88. Palo Alto, CA: The AAAI Press, 2017.
- V. Charisi, L. Dennis, M. Fisher, et al., "Towards Moral Autonomous Systems," 2017.
- A. Conn, "How Do We Align Artificial Intelligence with Human Values?" Future of Life Institute, Feb. 3, 2017.
- L. Dennis, M. Fisher, M. Slavkovik, and M. Webster, "Formal Verification of Ethical Choices in Autonomous Systems," Robotics and Autonomous Systems, vol. 77, pp. 1–14, 2016.
- A. Etzioni and O. Etzioni, "Designing AI Systems That Obey Our Laws and Values," Communications of the ACM, vol. 59, no. 9, pp. 29–31, Sept. 2016.
- L. M. Pereira and A. Saptawijaya, Programming Machine Ethics. Cham, Switzerland: Springer International, 2016.
- M. O. Riedl and B. Harrison, "Using Stories to Teach Human Values to Artificial Agents," AAAI Workshops, Phoenix, Arizona, February 12–13, 2016.
- F. Rötzer, ed., Programmierte Ethik: Brauchen Roboter Regeln oder Moral? [Programmed Ethics: Do Robots Need Rules or Morals?] Hannover, Germany: Heise Medien, 2016.
- M. Scheutz, B. F. Malle, and G. Briggs, "Towards Morally Sensitive Action Selection for Autonomous Social Robots," Proceedings of the 24th International Symposium on Robot and Human Interactive Communication (RO-MAN 2015), pp. 492–497, 2015.
- U. Sommer, Werte: Warum man sie braucht, obwohl es sie nicht gibt [Values: Why We Need Them Even Though They Don't Exist]. Stuttgart, Germany: J. B. Metzler, 2016.
- I. Sommerville, Software Engineering. Harlow, U.K.: Pearson Studium.
- W. Wallach and C. Allen, Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press, 2008.
- A. F. T. Winfield, C. Blum, and W. Liu, "Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection," in Advances in Autonomous Robotics Systems, Lecture Notes in Computer Science, M. Mistry, A. Leonardis, M. Witkowski, and C. Melhuish, eds., pp. 85–96. Springer, 2014.
