“Compliance with this assessment list is not evidence of legal compliance, nor is it intended as guidance to ensure compliance with applicable law. Given the application-specificity of AI systems, the assessment list will need to be tailored to the specific use case and context in which the system operates. In addition, this chapter offers a general recommendation on how to implement the assessment list for Trustworthy AI through a governance structure embracing both operational and management level.” (High-Level Expert Group on AI, 2019, p. 24)

“TRUSTWORTHY AI ASSESSMENT LIST (PILOT VERSION)

1. Human agency and oversight

Fundamental rights:
- Did you carry out a fundamental rights impact assessment where there could be a negative impact on fundamental rights?
- Did you identify and document potential trade-offs made between the different principles and rights?
- Does the AI system interact with decisions by human (end) users (e.g. recommended actions or decisions to take, presenting of options)?
- Could the AI system affect human autonomy by interfering with the (end) user’s decision-making process in an unintended way?
- Did you consider whether the AI system should communicate to (end) users that a decision, content, advice or outcome is the result of an algorithmic decision?
- In case of a chat bot or other conversational system, are the human end users made aware that they are interacting with a non-human agent?” (High-Level Expert Group on AI, 2019, p. 26)

“- Is there a self-learning or autonomous AI system or use case? If so, did you put in place more specific mechanisms of control and oversight?
- Which detection and response mechanisms did you establish to assess whether something could go wrong?” (High-Level Expert Group on AI, 2019, p. 26)

“- Did you ensure a stop button or procedure to safely abort an operation where needed?
- Does this procedure abort the process entirely, in part, or delegate control to a human?” (High-Level Expert Group on AI, 2019, p. 27)
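
The stop-button questions on p. 27 describe a concrete mechanism, so a small sketch may help make the three outcomes the checklist distinguishes tangible. The code below is a minimal illustration under assumed names (StoppableWorker, AbortMode, and human_review are all invented for this example); it is not a design prescribed by the High-Level Expert Group, only one way a worker loop could abort entirely, abort in part, or delegate control to a human.

```python
import threading
import time
from enum import Enum, auto


class AbortMode(Enum):
    """The three outcomes the checklist asks about (hypothetical names)."""
    FULL = auto()      # abort the process entirely, mid-task if necessary
    PARTIAL = auto()   # finish the task in progress, then skip the rest
    DELEGATE = auto()  # hand the remaining tasks to a human operator


def human_review(task):
    # Stand-in for a real hand-off, e.g. enqueueing the task for an operator.
    print(f"delegated to human review: {task!r}")


class StoppableWorker:
    """Illustrative AI worker loop wired to a 'stop button' (a threading.Event)."""

    def __init__(self):
        self._stop = threading.Event()
        self._mode = AbortMode.FULL

    def stop(self, mode=AbortMode.FULL):
        """The 'stop button': safe to call from any thread or monitoring process."""
        self._mode = mode
        self._stop.set()

    def run(self, tasks):
        tasks = list(tasks)
        for i, task in enumerate(tasks):
            if self._stop.is_set():
                if self._mode is AbortMode.DELEGATE:
                    for leftover in tasks[i:]:
                        human_review(leftover)
                else:
                    # Both FULL and PARTIAL stop taking new tasks here; they
                    # differ in whether the in-flight task may run to completion.
                    print(f"stopped; {len(tasks) - i} remaining task(s) dropped")
                return
            self._process(task)
        print("all tasks completed")

    def _process(self, task):
        for _ in range(5):  # pretend each task has five sub-steps of model work
            if self._stop.is_set() and self._mode is AbortMode.FULL:
                print(f"{task!r} aborted mid-task")
                return
            time.sleep(0.1)
        print(f"{task!r} done")


if __name__ == "__main__":
    worker = StoppableWorker()
    thread = threading.Thread(target=worker.run, args=(["a", "b", "c", "d"],))
    thread.start()
    time.sleep(0.25)                 # ...a detection mechanism flags a problem...
    worker.stop(AbortMode.DELEGATE)  # press the stop button, choosing the mode
    thread.join()
```

The distinction the checklist draws matters operationally: a full abort checks the stop flag inside each sub-step, a partial abort lets the in-flight task finish before stopping, and delegation re-routes the untouched remainder to a human queue instead of discarding it.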