
Considerations in assessing trustworthy AI - Fundamental rights

Tags: governance-question, sustainability

“Compliance with this assessment list is not evidence of legal compliance, nor is it intended as guidance to ensure compliance with applicable law. Given the application-specificity of AI systems, the assessment list will need to be tailored to the specific use case and context in which the system operates. In addition, this chapter offers a general recommendation on how to implement the assessment list for Trustworthy AI through a governance structure embracing both operational and management level.” (High-Level Expert Group on AI, 2019, p. 24)

“TRUSTWORTHY AI ASSESSMENT LIST (PILOT VERSION)

1. Human agency and oversight

Fundamental rights:
- Did you carry out a fundamental rights impact assessment where there could be a negative impact on fundamental rights?
- Did you identify and document potential trade-offs made between the different principles and rights?
- Does the AI system interact with decisions by human (end) users (e.g. recommended actions or decisions to take, presenting of options)?
- Could the AI system affect human autonomy by interfering with the (end) user’s decision-making process in an unintended way?
- Did you consider whether the AI system should communicate to (end) users that a decision, content, advice or outcome is the result of an algorithmic decision?
- In case of a chat bot or other conversational system, are the human end users made aware that they are interacting with a non-human agent?” (High-Level Expert Group on AI, 2019, p. 26)

Overarching Principles: Beneficence
Principles: Beneficence
Title: Considerations in assessing trustworthy AI - Fundamental rights