“Compliance with this assessment list is not evidence of legal compliance, nor is it intended as guidance to ensure compliance with applicable law. Given the application-specificity of AI systems, the assessment list will need to be tailored to the specific use case and context in which the system operates. In addition, this chapter offers a general recommendation on how to implement the assessment list for Trustworthy AI through a governance structure embracing both operational and management level.” (High-Level Expert Group on AI, 2019, p. 24)

“TRUSTWORTHY AI ASSESSMENT LIST (PILOT VERSION)

Technical robustness and safety

Fallback plan and general safety:
- Did you ensure that your system has a sufficient fallback plan if it encounters adversarial attacks or other unexpected situations (for example technical switching procedures or asking for a human operator before proceeding)?
- Did you consider the level of risk raised by the AI system in this specific use case?
- Did you put any process in place to measure and assess risks and safety?
- Did you provide the necessary information in case of a risk for human physical integrity?
- Did you consider an insurance policy to deal with potential damage from the AI system?
- Did you identify potential safety risks of (other) foreseeable uses of the technology, including accidental or malicious misuse? Is there a plan to mitigate or manage these risks?
- Did you assess whether there is a probable chance that the AI system may cause damage or harm to users or third parties? Did you assess the likelihood, potential damage, impacted audience and severity?
- Did you consider the liability and consumer protection rules, and take them into account?
- Did you consider the potential impact or safety risk to the environment or to animals?
- Did your risk analysis include whether security or network problems such as cybersecurity hazards could pose safety risks or damage due to unintentional behaviour of the AI system?
- Did you estimate the likely impact of a failure of your AI system when it provides wrong results, becomes unavailable, or provides societally unacceptable results (for example discrimination)?
- Did you define thresholds and did you put governance procedures in place to trigger alternative/fallback plans?
- Did you define and test fallback plans?” (High-Level Expert Group on AI, 2019, p. 27)
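The assessment list's items on thresholds and fallback plans can be made concrete in code. The following is a minimal, hypothetical sketch (not from the guidelines) of one common pattern: a confidence threshold below which the system's output is not used automatically but escalated to a human operator, matching the "asking for a human operator before proceeding" fallback the list mentions. The threshold value, names, and `Decision` structure are all illustrative assumptions.

```python
# Hypothetical sketch of a confidence-threshold fallback plan.
# All names and the threshold value are illustrative, not from the guidelines.
from dataclasses import dataclass

# Assumed threshold; in practice it would be set and governed per use case,
# as the assessment list's question on thresholds and governance suggests.
CONFIDENCE_THRESHOLD = 0.8


@dataclass
class Decision:
    label: str
    confidence: float
    handled_by: str  # "model" if accepted automatically, else "human_operator"


def decide(label: str, confidence: float) -> Decision:
    """Accept the model output only above the threshold; otherwise
    trigger the fallback plan by routing to a human operator."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(label, confidence, handled_by="model")
    return Decision(label, confidence, handled_by="human_operator")
```

Defining such thresholds explicitly, and testing the escalation path itself, is one way to answer the list's final questions ("Did you define and test fallback plans?") with evidence rather than assertion.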