“Compliance with this assessment list is not evidence of legal compliance, nor is it intended as guidance to ensure compliance with applicable law. Given the application-specificity of AI systems, the assessment list will need to be tailored to the specific use case and context in which the system operates. In addition, this chapter offers a general recommendation on how to implement the assessment list for Trustworthy AI through a governance structure embracing both operational and management level.” (High-Level Expert Group on AI, 2019, p. 24)

“TRUSTWORTHY AI ASSESSMENT LIST (PILOT VERSION)

Accountability

Minimising and reporting negative impact:

- Did you carry out a risk or impact assessment of the AI system, which takes into account different stakeholders that are (in)directly affected?
- Did you provide training and education to help developing accountability practices? Which workers or branches of the team are involved? Does it go beyond the development phase? Do these trainings also teach the potential legal framework applicable to the AI system?
- Did you consider establishing an ‘ethical AI review board’ or a similar mechanism to discuss overall accountability and ethics practices, including potentially unclear grey areas?
- Did you foresee any kind of external guidance or put in place auditing processes to oversee ethics and accountability, in addition to internal initiatives?
- Did you establish processes for third parties (e.g. suppliers, consumers, distributors/vendors) or workers to report potential vulnerabilities, risks or biases in the AI system?” (High-Level Expert Group on AI, 2019, p. 31)
## IEEE recommendations

Issue: Oversight for algorithms
## Background

The algorithms behind A/IS are not subject to consistent oversight. This lack of assessment causes concern because end users have no account of how a certain algorithm or system came to its conclusions. These recommendations are similar to those made in the “General Principles” and “Embedding Values into Autonomous and Intelligent Systems” chapters of Ethically Aligned Design, but here they are applied to the narrower scope of this chapter.
## Recommendations

Accountability: As touched on in the General Principles chapter of Ethically Aligned Design, algorithmic transparency is an issue of concern. It is understood that specifics relating to algorithms or systems constitute intellectual property that cannot, or will not, be released to the general public. Nonetheless, standards providing oversight of the manufacturing process of A/IS technologies need to be created to avoid harm and negative consequences. We can look to other technical domains, such as biomedical, civil, and aerospace engineering, where commercial protections for proprietary technology are routinely and effectively balanced with the need for appropriate oversight standards and mechanisms to safeguard the public.

Human rights and algorithmic impact assessments should be explored as a meaningful way to improve the accountability of A/IS. These need to be paired with public consultations, and the final impact assessments must be made public.
## Further Resources
R. Calo, “Artificial Intelligence Policy: A Primer and Roadmap,” UC Davis Law Review, vol. 51, no. 2, pp. 399–435, 2017.

ARTICLE 19 and Privacy International, “Privacy and Freedom of Expression in the Age of Artificial Intelligence,” April 2018. [Online]. Available: https://www.article19.org/wp-content/uploads/2018/04/Privacy-and-Freedom-of-Expression-In-the-Age-of-Artificial-Intelligence-1.pdf. [Accessed October 28, 2018].

(Ethically Aligned Design, pp. 132–133)