Principles

Explicability

“Explicability is crucial for building and maintaining users’ trust in AI systems. This means that processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions – to the extent possible – explainable to those directly and indirectly affected. Without such information, a decision cannot be duly contested. An explanation as to why a model has generated a particular output or decision (and what combination of input factors contributed to that) is not always possible. These cases are referred to as ‘black box’ algorithms and require special attention. In those circumstances, other explicability measures (e.g. traceability, auditability and transparent communication on system capabilities) may be required, provided that the system as a whole respects fundamental rights. The degree to which explicability is needed is highly dependent on the context and the severity of the consequences if that output is erroneous or otherwise inaccurate.” ([High-Level Expert Group on AI, 2019, p. 13](zotero://select/groups/4907410/items/XPCD8D3T)) ([pdf](zotero://open-pdf/groups/4907410/items/CPFPH28M?page=15&annotation=J8F447TV))

Challenge Instances: Decisions with unclear grounding
Overarching Focus:
Overarching Principles: Merit and Integrity
Title: Explicability