A/IS shall be created and operated to provide an unambiguous rationale for decisions made.
## Background

The programming, output, and purpose of A/IS are often not discernible by the general public. Based on the cultural context, application, and use of A/IS, people and institutions need clarity around the manufacture and deployment of these systems in order to establish responsibility and accountability, and to avoid potential harm. Additionally, manufacturers of these systems must be accountable in order to address legal issues of culpability. It should, if necessary, be possible to apportion culpability among responsible creators (designers and manufacturers) and operators to avoid confusion or fear within the general public.

Accountability and partial accountability are not possible without transparency; this principle is therefore closely linked with Principle 5, Transparency.
## Recommendations

To best address issues of responsibility and accountability:

1. Legislatures/courts should clarify responsibility, culpability, liability, and accountability for A/IS, where possible, prior to development and deployment so that manufacturers and users understand their rights and obligations.
2. Designers and developers of A/IS should remain aware of, and take into account, the diversity of existing cultural norms among the groups of users of these A/IS.
3. Multi-stakeholder ecosystems, including creators and government, civil, and commercial stakeholders, should be developed to help establish norms where they do not yet exist because A/IS-oriented technology and its impacts are too new. These ecosystems would include, but not be limited to, representatives of civil society, law enforcement, insurers, investors, manufacturers, engineers, lawyers, and users. The norms can mature into best practices and laws.
4. Systems for registration and record-keeping should be established so that it is always possible to find out who is legally responsible for a particular A/IS. Creators, including manufacturers, along with operators of A/IS should register key, high-level parameters, including those listed below (see the sketch after this list):
- Intended use,
- Training data and training environment, if applicable,
- Sensors and real-world data sources,
- Algorithms,
- Process graphs,
- Model features, at various levels,
- User interfaces,
- Actuators and outputs, and
- Optimization goals, loss functions, and reward functions.
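As an illustrative aid only, the sketch below shows one way such a registration record could be structured in code, here as a Python dataclass whose fields mirror the parameters listed above. The class name, field types, example values, and JSON serialization are assumptions for the sake of the sketch; this recommendation specifies only which high-level parameters should be recorded, not any particular format or registry.

```python
# Illustrative sketch only: one possible shape for an A/IS registration record.
# Field names follow Recommendation 4; everything else (types, serialization,
# example values) is a hypothetical choice, not part of the recommendation.
import json
from dataclasses import dataclass, field, asdict
from typing import List, Optional


@dataclass
class AISRegistrationRecord:
    # Parties who can be held legally responsible for this A/IS
    creator: str                                  # designer/manufacturer of record
    operator: str                                 # party operating the deployed system
    # Key, high-level parameters from Recommendation 4
    intended_use: str
    training_data: Optional[str] = None           # description/provenance, if applicable
    training_environment: Optional[str] = None    # e.g., simulator or curriculum, if applicable
    sensors_and_data_sources: List[str] = field(default_factory=list)
    algorithms: List[str] = field(default_factory=list)
    process_graphs: List[str] = field(default_factory=list)      # references to documented pipelines
    model_features: List[str] = field(default_factory=list)      # feature descriptions at various levels
    user_interfaces: List[str] = field(default_factory=list)
    actuators_and_outputs: List[str] = field(default_factory=list)
    optimization_goals: List[str] = field(default_factory=list)  # objectives, loss and reward functions

    def to_json(self) -> str:
        """Serialize the record, e.g., for filing with a (hypothetical) registry."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    record = AISRegistrationRecord(
        creator="Example Manufacturer Ltd.",
        operator="Example Transit Operator",
        intended_use="Low-speed autonomous shuttle on a fixed campus route",
        sensors_and_data_sources=["lidar", "camera", "GPS"],
        optimization_goals=["minimize route time subject to safety constraints"],
    )
    print(record.to_json())
```

A machine-readable record of this kind could, for instance, be filed with whatever registry the multi-stakeholder ecosystems described in Recommendation 3 choose to establish; the specific schema and governance of such a registry are beyond the scope of this sketch.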
An example of good practice (in relation to Recommendation #3) can be found in Sciencewise, the U.K. national center for public dialogue in policy-making involving science and technology issues.