J. Bielby, “Comparative Philosophies in Intercultural Information Ethics,” _Confluence: Online Journal of World Philosophies_, vol. 2, no. 1, pp. 233–253.
## Recommendation

To develop A/IS capable of following social and moral norms, the first step is to identify the norms of the specific community in which the A/IS are to be deployed and, in particular, norms relevant to the kinds of tasks and roles that the A/IS are designed for. This norm identification process must use appropriate scientific methods and continue through the system's life cycle.
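As one hypothetical illustration of such norm identification (the recommendation itself prescribes no particular method), appropriateness ratings elicited from a deployment community — as in the survey methods of Burks and Krupka listed below — might be aggregated along these lines; all names and values here are invented:

```python
from collections import defaultdict

def aggregate_norm_ratings(responses):
    """Average appropriateness ratings per candidate action
    (-1 = very inappropriate, +1 = very appropriate), in the
    spirit of survey-based norm elicitation. `responses` is a
    list of (action, rating) pairs; names are illustrative."""
    by_action = defaultdict(list)
    for action, rating in responses:
        by_action[action].append(rating)
    return {action: sum(r) / len(r) for action, r in by_action.items()}

# Hypothetical ratings gathered from a care-robot deployment community.
community_norms = aggregate_norm_ratings([
    ("interrupt_patient_rest", -0.8),
    ("interrupt_patient_rest", -0.6),
    ("remind_about_medication", 0.9),
    ("remind_about_medication", 0.7),
])
```

Re-running such an aggregation at intervals over the system's life cycle is one way the continuing norm-identification process the recommendation calls for could be operationalized.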
## Further Resources
Mack, Ed., “Changing Social Norms,” _Social Research: An International Quarterly_, vol. 85, no. 1, pp. 1–271.
I. Misra, C. L. Zitnick, M. Mitchell, and R. Girshick, “Seeing Through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels,” in _Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2016, pp. 2930–. doi: [10.1109/CVPR.2016.320](https://doi.org/10.1109/CVPR.2016.320).
I. van de Poel, “[An Ethical Framework for Evaluating Experimental Technology](https://link.springer.com/article/10.1007/s11948-015-9724-3),” _Science and Engineering Ethics_, vol. 22, no. 3, pp. 667–686.
## Recommendation

A/IS developers should identify the ways in which people resolve norm conflicts and the ways in which they expect A/IS to resolve similar norm conflicts. A system’s resolution of norm conflicts must be transparent—that is, documented by the system and ready to be made available to users, the relevant community of deployment, and third-party evaluators.
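A minimal sketch of what such documentation could look like — assuming a logging-based design that the recommendation does not itself specify; all names and values are invented:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class NormConflictRecord:
    """Hypothetical audit entry documenting how a norm conflict was
    resolved, kept for later review by users, the deployment
    community, and third-party evaluators."""
    norms_in_conflict: list   # the norms that could not all be satisfied
    norm_followed: str        # the norm the system acted on
    justification: str        # human-readable reason for the choice
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Example: a care robot weighing a privacy norm against a safety norm.
audit_log = []
audit_log.append(NormConflictRecord(
    norms_in_conflict=["respect_privacy", "report_health_risk"],
    norm_followed="report_health_risk",
    justification="Imminent harm to the patient outweighed the "
                  "patient's stated privacy preference.",
))
```

Structured records of this kind are one way a system could satisfy the transparency requirement: the log can be surfaced to users on request and exported for third-party evaluation.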
## Further Resources
M. Velasquez, C. Andre, T. Shanks, S.J., and M. J. Meyer, “The Common Good,” _Issues in Ethics_, vol. 5, no. 1.
J. van den Hoven, “Engineering and the Problem of Moral Overload,” _Science and Engineering Ethics_, vol. 18, no. 1, pp. 143–155.
D. Abel, J. MacGlashan, and M. L. Littman, “Reinforcement Learning as a Framework for Ethical Decision Making,” in _AAAI Workshop: AI, Ethics, and Society_, vol. WS-16-02 of 13th AAAI Workshops. Palo Alto, CA: AAAI Press.
O. Bendel, _Die Moral in der Maschine: Beiträge zu Roboter- und Maschinenethik_ [Morality in the Machine: Contributions on Robot and Machine Ethics]. Hannover, Germany: Heise Medien.
Accessible popular-science contributions to philosophical issues and technical implementations of machine ethics.
S. V. Burks and E. L. Krupka, “[A Multimethod Approach to Identifying Norms and Normative Expectations within a Corporate Hierarchy: Evidence from the Financial Services Industry](https://pubsonline.informs.org/doi/abs/10.1287/mnsc.1110.1478),” _Management Science_, vol. 58, pp. 203–217.
Illustrates surveys and incentivized coordination games as methods to elicit norms in a large financial services firm.
F. Cushman, V. Kumar, and P. Railton, “Moral Learning,” _Cognition_, vol. 167, pp. 1–282.
M. Flanagan, D. C. Howe, and H. Nissenbaum, “Embodying Values in Technology: Theory and Practice,” in _Information Technology and Moral Philosophy_, J. van den Hoven and J. Weckert, Eds., Cambridge University Press, 2008, pp. 322–. Preprint available at [http://www.nyu.edu/projects/nissenbaum/papers/Nissenbaum-VID.4-pdf](http://www.nyu.edu/projects/nissenbaum/papers/Nissenbaum-VID.4-pdf).
B. Friedman, P. H. Kahn, A. Borning, and A. Huldtgren, “Value Sensitive Design and Information Systems,” in _Early Engagement and New Technologies: Opening up the Laboratory_, N. Doorn, Schuurbiers, I. van de Poel, and M. Gorman, Eds., vol. 16, pp. 55–. Dordrecht: Springer.
A comprehensive introduction to Value Sensitive Design and three sample applications.
G. Mackie, F. Moneti, E. Denny, and H. Shakya, “What Are Social Norms? How Are They Measured?” UNICEF Working Paper. University of California at San Diego: UNICEF, Sept. https://dmeforpeace.org/sites/default/files/4%2009%2030%20Whole%20What%20are%20Social%20Norms.pdf
A broad survey of conceptual and measurement questions regarding social norms.
J. A. Leydens and J. C. Lucena, _Engineering Justice: Transforming Engineering Education and Practice_. Hoboken, NJ: John Wiley & Sons.
Identifies principles of engineering for social justice.
B. F. Malle, “Integrating Robot Ethics and Machine Morality: The Study and Design of Moral Competence in Robots,” _Ethics and Information Technology_, vol. 18, no. 4, pp. 243–256.
Discusses how a robot’s norm capacity fits in the larger vision of a robot with moral competence.
K. W. Miller, M. J. Wolf, and F. Grodzinsky, “This ‘Ethical Trap’ Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making,” _Science and Engineering Ethics_, vol. 23, pp. 389–401.
This article raises doubts about the possibility of imbuing artificial agents with morality, or of claiming to have done so.
Open Roboethics Initiative: www.openroboethics.org. A series of poll results on differences in human moral decision-making and changes in priority order of values for autonomous systems (e.g., on care robots).
A. Rizzo and L. L. Swisher, “Comparing the Stewart–Sprinthall Management Survey and the Defining Issues Test-2 as Measures of Moral Reasoning in Public Administration,” _Journal of Public Administration Research and Theory_, vol. 14, pp. 335–348.
Describes two assessment instruments of moral reasoning (including norm maintenance) based on Kohlberg’s theory of moral development.
S. H. Schwartz, “An Overview of the Schwartz Theory of Basic Values,” _Online Readings in Psychology and Culture_, vol. 2, 2012.
Comprehensive overview of a specific theory of values, understood as motivational orientations toward abstract outcomes (e.g., self-direction, power, security).
S. H. Schwartz and K. Boehnke, “Evaluating the Structure of Human Values with Confirmatory Factor Analysis,” _Journal of Research in Personality_, vol. 38, pp. 230–255.
Describes an older method of subjective judgments of relations among valued outcomes and a newer, formal method of analyzing these relations.
W. Wallach and C. Allen, _Moral Machines: Teaching Robots Right from Wrong_. New York: Oxford University Press.
This book describes some of the challenges of having a one-size-fits-all approach to embedding human values in autonomous systems.