Principles

Transparency

The basis of a particular A/IS decision should always be discoverable.

## Background

A key concern over autonomous and intelligent systems (A/IS) is that their operation must be transparent to a wide range of stakeholders, for different reasons, noting that the level of transparency will necessarily differ for each stakeholder. Transparent A/IS are ones in which it is possible to discover how and why a system made a particular decision, or, in the case of a robot, acted the way it did. The term “transparency” in the context of A/IS also addresses the concepts of traceability, explainability, and interpretability.

A/IS will perform tasks that are far more complex, and have more effect on our world, than prior generations of technology. Where a task is undertaken in a non-deterministic manner, it may defy simple explanation. This reality will be particularly acute with systems that interact with the physical world, which raises the potential level of harm that such a system could cause. For example, some A/IS already have real consequences for human safety or well-being, such as medical diagnosis systems or driverless car autopilots. Systems such as these are safety-critical systems.

At the same time, the complexity of A/IS technology and the non-intuitive way in which it may operate will make it difficult for users of those systems to understand the actions of the A/IS that they use, or with which they interact. This opacity, combined with the often distributed manner in which A/IS are developed, will complicate efforts to determine and allocate responsibility when something goes wrong. Thus, lack of transparency increases the risk and magnitude of harm when users do not understand the systems they are using, or when faults are not fixed and systems are not improved following accidents.
Lack of transparency also increases the difficulty of ensuring accountability (see Principle 6, Accountability). Achieving transparency, which may involve a significant portion of the resources required to develop the A/IS, is important to each stakeholder group for the following reasons:

1. For users, to understand what the system is doing and why.
2. For creators, including those undertaking the validation and certification of A/IS, to understand the systems’ processes and input data.
3. For an accident investigator, if accidents occur.
4. For those in the legal process, to inform evidence and decision-making.
5. For the public, to build confidence in the technology.

## Recommendation

Develop new standards that describe measurable, testable levels of transparency, so that systems can be objectively assessed and levels of compliance determined. For designers, such standards will provide a guide for self-assessing transparency during development and suggest mechanisms for improving transparency. The mechanisms by which transparency is provided will vary significantly, including, but not limited to, the following use cases:

1. For users of care or domestic robots, a “why-did-you-do-that button” which, when pressed, causes the robot to explain the action it just took.
2. For validation or certification agencies, the algorithms underlying the A/IS and how they have been verified.
3. For accident investigators, secure storage of sensor and internal state data, comparable to a flight data recorder or black box.

IEEE P7001™, IEEE Standard for Transparency of Autonomous Systems, is one such standard, developed in response to this recommendation.
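Two of the mechanisms above, the “why-did-you-do-that button” and the flight-recorder-style data store, can be illustrated together. The sketch below is a minimal, hypothetical illustration of the idea only; the class and method names (`BlackBoxRecorder`, `record`, `explain_last`, `verify`) are the author's assumptions and are not defined by IEEE P7001 or any other standard. It logs each decision with its sensor inputs and rationale, chains the entries with SHA-256 hashes so an investigator can detect tampering, and can replay the rationale for the most recent action on demand.

```python
import hashlib
import json
import time

GENESIS_HASH = "0" * 64  # placeholder hash anchoring the start of the chain


class BlackBoxRecorder:
    """Illustrative decision log: explainability plus tamper-evident storage."""

    def __init__(self):
        self._log = []  # list of (entry_dict, entry_hash) tuples
        self._prev_hash = GENESIS_HASH

    def record(self, sensors, decision, rationale):
        """Append one decision, linking it to the previous entry's hash."""
        entry = {
            "time": time.time(),
            "sensors": sensors,
            "decision": decision,
            "rationale": rationale,
            "prev_hash": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._log.append((entry, digest))
        self._prev_hash = digest

    def explain_last(self):
        """The 'why-did-you-do-that button': describe the most recent action."""
        entry, _ = self._log[-1]
        return f"I chose '{entry['decision']}' because {entry['rationale']}."

    def verify(self):
        """Investigator check: recompute every hash and confirm the chain."""
        prev = GENESIS_HASH
        for entry, digest in self._log:
            if entry["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != digest:
                return False
            prev = digest
        return True
```

A real safety-critical system would persist such a log to write-once or otherwise protected storage; keeping it in memory here simply keeps the sketch short.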

Overarching Principles: Respect for persons
Sources: IEEE
Title: Transparency