Tags: governance-question, project-dissemination, reflection-questions
The scientific method requires a high degree of transparency and explainability of how research findings were derived. This conflicts to some extent with the complex nature of AI technologies (Ananny & Crawford, 2018; Calo, 2017; Danaher, 2016; Wachter, Mittelstadt & Floridi, 2017; Weller, 2017). Indeed, it can be too complex for a researcher to state precisely how research data was created, given the way a neural network with potentially hidden layers operates. However, the concepts of transparency and explainability do not mean strictly understanding the highly specialized technical code and data of the trained neural networks. For example, a richer notion of transparency asks researchers to explain the ends, means, and thought processes that went into developing the code and the model, and how the resulting research data was shaped. Similarly, explainability does not need to be exact, but can learn from interpretations in philosophy, cognitive psychology or cognitive science, and social psychology (Miller, 2017).

- Can the researcher give an account of how the model operates and how research data was generated?
- Has the researcher explained how the model works to an institutional ethics board, and have they understood the reasons and methods of data processing?
- What roles have the developers and researchers played, what choices have they made in constructing the model and in choosing and cleaning the training data, and how has this affected the results and prediction?
- What kind of negotiations have taken place in the decision-making around model selection, adjustments, and data modelling in the research process that can affect the result and prediction?

(franzke, 2020, p. 45)
## IEEE recommendation

"When systems are built that could impact the safety or well-being of humans, it is not enough to just presume that a system works. Engineers must acknowledge and assess the ethical risks involved with black box software and implement mitigation strategies.

Technologists should be able to characterize what their algorithms or systems are going to do via documentation, audits, and transparent and traceable standards. To the degree possible, these characterizations should be predictive, but given the nature of A/IS, they might need to be more retrospective and mitigation-oriented. As such, it is also important to ensure access to remedy adverse impacts.

Technologists and corporations must do their ethical due diligence before deploying A/IS technology. Standards for what constitutes ethical due diligence would ideally be generated by an international body such as IEEE or ISO, and barring that, each corporation should work to generate a set of ethical standards by which their processes are evaluated and modified. Similar to a flight data recorder in the field of aviation, algorithmic traceability can provide insights on what computations led to questionable or dangerous behaviors. Even where such processes remain somewhat opaque, technologists should seek indirect means of validating results and detecting harms."
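To make the flight-data-recorder analogy concrete, here is a minimal sketch of tamper-evident decision logging. The `log_decision` helper and its record fields are hypothetical illustrations, not part of any IEEE standard; the point is only that chaining records by hash makes after-the-fact edits to the log detectable.

```python
import hashlib
import json
import time

def log_decision(logbook, model_version, inputs, output):
    """Append a tamper-evident record of one model decision.

    Each entry carries the hash of the previous entry, so the logbook
    forms a chain: altering any past record changes its hash and breaks
    the chain, much like tampering with a flight data recorder.
    """
    prev_hash = logbook[-1]["hash"] if logbook else ""
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "prev_hash": prev_hash,
    }
    # Hash the entry body (the "hash" key is added only afterwards).
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    logbook.append(entry)
    return entry
```

An auditor can later walk the chain, recompute each hash, and check that every `prev_hash` matches the preceding entry.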
## Further resources
D. Reisman, J. Schultz, K. Crawford, and M. Whittaker, "Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability," AI Now Institute. [Online]. Available: https://ainowinstitute.org/aiareport (PDF). [Accessed October 28, 2018].
J. A. Kroll, "The Fallacy of Inscrutability," _Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences_, C. Cath, S. Wachter, B. Mittelstadt, and L. Floridi, Eds., October 15, 2018. DOI: 10.1098/rsta.
(IEEE, 2019, p. 139)

And a recommendation regarding documentation:

"Engineers should be required to thoroughly document the end product and related data flows, performance, limitations, and risks of A/IS. Behaviors and practices that have been prominent in the engineering processes should also be explicitly presented, as well as empirical evidence of compliance and the methodology used, such as training data used in predictive systems, algorithms and components used, and results of behavior monitoring. Criteria for such documentation could be: auditability, accessibility, meaningfulness, and readability.

Companies should make their systems auditable and should explore novel methods for external and internal auditing."
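As a sketch only, the documentation items named in the recommendation could be held in a structured record so that an audit can mechanically flag undocumented items. The `SystemDocumentation` class and its field names below are illustrative assumptions, not an IEEE schema.

```python
from dataclasses import asdict, dataclass

@dataclass
class SystemDocumentation:
    """One record per A/IS release, mirroring the items named above."""
    end_product: str = ""
    data_flows: str = ""
    performance: str = ""
    limitations: str = ""
    risks: str = ""
    training_data: str = ""
    algorithms_and_components: str = ""
    behavior_monitoring_results: str = ""

    def missing_items(self):
        # Auditability criterion: an empty field is an undocumented item.
        return [name for name, value in asdict(self).items() if not value.strip()]
```

A reviewer could then require `missing_items()` to be empty before a system is released, which gives the "auditability" criterion a simple operational form.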
## Further reading
S. Wachter, B. Mittelstadt, and L. Floridi, "Transparent, Explainable, and Accountable AI for Robotics," _Science Robotics_, vol. 2, no. 6, May 31. [Online]. DOI: 10.1126/scirobotics.aan [Accessed Nov.]

S. Barocas and A. D. Selbst, "Big Data's Disparate Impact," _California Law Review_ 104, 671–732.

J. A. Kroll, J. Huey, S. Barocas, E. W. Felten, J. R. Reidenberg, D. G. Robinson, and H. Yu, "[Accountable Algorithms](https://www.pennlawreview.com/print/165-U-Pa-L-Rev-pdf)," _University of Pennsylvania Law Review_ 165, no. 1, 633–705.

J. M. Balkin, "Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation," _UC Davis Law Review_.

(IEEE, 2019, p. 135)
## High-Level Expert Group on AI

"Compliance with this assessment list is not evidence of legal compliance, nor is it intended as guidance to ensure compliance with applicable law. Given the application-specificity of AI systems, the assessment list will need to be tailored to the specific use case and context in which the system operates. In addition, this chapter offers a general recommendation on how to implement the assessment list for Trustworthy AI through a governance structure embracing both operational and management level." (High-Level Expert Group on AI, 2019, p. 24)

From the "Trustworthy AI Assessment List (pilot version)", section "Transparency":

"Explainability:
- Did you assess:
  - to what extent the decisions and hence the outcome made by the AI system can be understood?
  - to what degree the system's decision influences the organisation's decision-making processes?
  - why this particular system was deployed in this specific area?
  - what the system's business model is (for example, how does it create value for the organisation)?
- Did you ensure an explanation as to why the system took a certain choice resulting in a certain outcome that all users can understand?
- Did you design the AI system with interpretability in mind from the start?
- Did you research and try to use the simplest and most interpretable model possible for the application in question?
- Did you assess whether you can analyse your training and testing data? Can you change and update this over time?
- Did you assess whether you can examine interpretability after the model's training and development, or whether you have access to the internal workflow of the model?" (High-Level Expert Group on AI, 2019, p. 29)
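Because the assessment list is explicitly meant to be tailored to the specific use case, one lightweight way to work with it is to keep the questions as data and track answers against them. The question texts below are paraphrased from the excerpt above, and the `open_questions` helper is a hypothetical sketch, not part of the High-Level Expert Group's material.

```python
# Paraphrased explainability questions from the pilot assessment list;
# a team would extend or trim this list for its own use case.
EXPLAINABILITY_QUESTIONS = [
    "To what extent can the AI system's decisions and outcome be understood?",
    "To what degree does the system's decision influence the organisation's decision-making?",
    "Why was this particular system deployed in this specific area?",
    "What is the system's business model?",
    "Can every user understand why the system took a certain choice?",
    "Was the system designed with interpretability in mind from the start?",
    "Was the simplest and most interpretable adequate model tried?",
    "Can the training and testing data be analysed, changed, and updated over time?",
    "Can interpretability be examined after training, or is the model's internal workflow accessible?",
]

def open_questions(questions, answers):
    """Return the questions that have no recorded answer yet."""
    return [q for q in questions if not answers.get(q, "").strip()]
```

Running `open_questions` against a dictionary of recorded answers then shows, at any point in the project, which parts of the assessment remain to be addressed.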