Independent review of research is important both for rigour and to ground the claims made about systems. This is an ethical imperative: claims that are not supported by evidence are at best an opportunity cost, and at worst can cause significant harm. Research institutions such as universities have well-established models of independent review that may be drawn on.
## Recommendation

An independent, internationally coordinated body (akin to ISO) should be formed to oversee whether A/IS products actually meet ethical criteria, during design, development, and deployment, as well as throughout their evolution after deployment and their interaction with other products. It should also include a certification process.
## Further Resources
M. U. Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” Harvard Journal of Law and Technology, vol. 29, no. 2, pp. 354–400.
D. R. Desai and J. A. Kroll, “Trust But Verify: A Guide to Algorithms and the Law,” Harvard Journal of Law and Technology, Forthcoming; Georgia Tech Scheller College of Business Research Paper No. 17-19.
## Recommendation

To maximize effective evaluation by third parties, e.g., regulators and accident investigators, A/IS should be designed, specified, and documented so as to permit the use of strong verification and validation techniques for assessing the system’s safety and norm compliance, in order to achieve accountability to the relevant communities.
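To illustrate what design-for-verification can enable: when an agent’s decision policy and its safety norms are written as explicit, inspectable specifications, a third party can check compliance exhaustively over the state space rather than by sampling observed behavior. The sketch below is purely hypothetical and illustrative — the delivery robot, its zones, speeds, policy, and norm are all invented for this example and stand in for the formal models used by real verification tools.

```python
from itertools import product

# Hypothetical agent model: a delivery robot chooses a speed for each
# (zone, pedestrians_present) situation via an explicit decision table.
ZONES = ["corridor", "crosswalk", "open_floor"]

def policy(zone, pedestrians_present):
    """Explicit, documented decision rule -- written so it can be
    checked exhaustively rather than inferred from observed behavior."""
    if pedestrians_present:
        return "stop" if zone == "crosswalk" else "slow"
    return "slow" if zone == "crosswalk" else "fast"

def norm_compliant(zone, pedestrians_present, speed):
    """Safety norm, stated as a machine-checkable predicate: never move
    fast near pedestrians, and always stop at an occupied crosswalk."""
    if pedestrians_present and speed == "fast":
        return False
    if zone == "crosswalk" and pedestrians_present and speed != "stop":
        return False
    return True

# Exhaustive verification: because the specification is explicit and the
# state space is enumerable, every situation can be checked by a third party.
violations = [
    (zone, ped)
    for zone, ped in product(ZONES, [False, True])
    if not norm_compliant(zone, ped, policy(zone, ped))
]
print("violations:", violations)  # an empty list means the policy satisfies the norm
```

At realistic scale this kind of exhaustive check is what model checkers and theorem provers automate; the point of the recommendation is that such checks are only possible when the system is specified and documented in a form that admits them.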
## Further Resources
M. Fisher, L. A. Dennis, and M. P. Webster, “Verifying Autonomous Systems,” Communications of the ACM, vol. 56, pp. 84–93.
K. Abney, G. A. Bekey, and P. Lin, Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: The MIT Press.
M. Anderson and S. L. Anderson, eds., [Machine Ethics](http://assets.cambridge.org/97805211/12352/copyright/9780521112352copyrightinfo.pdf). New York: Cambridge University Press.
M. Boden, J. Bryson, et al., “Principles of Robotics: Regulating Robots in the Real World,” Connection Science, vol. 29, no. 2, pp. 124–129.
M. Coeckelbergh, “[Can We Trust Robots?](https://link.springer.com/article/10.1007/s10676-011-9279-1)” Ethics and Information Technology, vol. 14, pp. 53–60.
L. A. Dennis, M. Fisher, N. Lincoln, A. Lisitsa, and S. M. Veres, “Practical Verification of Decision-Making in Agent-Based Autonomous Systems,” Automated Software Engineering, vol. 23, no. 3, pp. 305–359.
M. Fisher, C. List, M. Slavkovik, and A. F. T. Winfield, “Engineering Moral Agents—From Human Morality to Artificial Morality” (Dagstuhl Seminar 16222), Dagstuhl Reports, vol. 6, no. 5, pp. 114–137.
K. R. Fleischmann, Information and Human Values. San Rafael, CA: Morgan and Claypool.
G. Governatori and A. Rotolo, “How Do Agents Comply with Norms?” in Normative Multi-Agent Systems, G. Boella, P. Noriega, G. Pigozzi, and H. Verhagen, eds., Dagstuhl Seminar Proceedings. Dagstuhl, Germany: Schloss Dagstuhl—Leibniz-Zentrum für Informatik.
B. Higgins, “New York City Task Force to Consider Algorithmic Harm,” Artificial Intelligence Technology and the Law Blog, Feb. 7, 2018. [Online]. Available: http://aitechnologylaw.com/2018/02/new-york-city-task-force-algorithmic-harm/. [Accessed Nov. 1, 2018].
S. L. Jarvenpaa, N. Tractinsky, and L. Saarinen, “Consumer Trust in an Internet Store: A Cross-Cultural Validation,” Journal of Computer-Mediated Communication, vol. 5, no. 2, pp. 1–37.
IEEE, 2019.