
# Questions to consider in key stages of AI and machine learning based research, regarding transparency and explainability


The scientific method requires a high degree of transparency and explainability in how research findings were derived. This conflicts to some extent with the complex nature of AI technologies (Ananny & Crawford, 2018; Calo, 2017; Danaher, 2016; Wachter, Mittelstadt & Floridi, 2017; Weller, 2017). Indeed, it can be too complex for a researcher to state precisely how research data was created, given the way a neural network with hidden layers operates. However, the concepts of transparency and explainability do not mean strictly understanding the highly specialized technical code and data of trained neural networks. For example, a richer notion of transparency asks researchers to explain the ends, means, and thought processes that went into developing the code and the model, and how the resulting research data was shaped. Similarly, explainability does not need to be exact, but can draw on interpretations from philosophy, cognitive psychology or cognitive science, and social psychology (Miller, 2017).

- Can the researcher give an account of how the model operates and how the research data was generated?
- Has the researcher explained how the model works to an institutional ethics board, and has the board understood the reasons for and methods of data processing?
- What roles have the developers and researchers played, and what choices have they made, in constructing the model and in choosing and cleaning the training data, and how has this affected the results and predictions?
- What negotiations have taken place in the decision-making around model selection, adjustments, and data modelling during the research process that can affect the results and predictions?

(franzke, 2020, p. 45)


# IEEE Recommendation

"When systems are built that could impact the safety or well-being of humans, it is not enough to just presume that a system works. Engineers must acknowledge and assess the ethical risks involved with black box software and implement mitigation strategies.

Technologists should be able to characterize what their algorithms or systems are going to do via documentation, audits, and transparent and traceable standards. To the degree possible, these characterizations should be predictive, but given the nature of A/IS, they might need to be more retrospective and mitigation-oriented. As such, it is also important to ensure access to remedy adverse impacts.

Technologists and corporations must do their ethical due diligence before deploying A/IS technology. Standards for what constitutes ethical due diligence would ideally be generated by an international body such as IEEE or ISO, and barring that, each corporation should work to generate a set of ethical standards by which their processes are evaluated and modified. Similar to a flight data recorder in the field of aviation, algorithmic traceability can provide insights on what computations led to questionable or dangerous behaviors. Even where such processes remain somewhat opaque, technologists should seek indirect means of validating results and detecting harms."
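The "flight data recorder" idea above can be illustrated with a minimal sketch of algorithmic traceability: logging each decision a model makes, together with its inputs and model version, so that questionable outputs can be audited retrospectively. The class and field names here (`DecisionLog`, `toy-classifier-v1`) are hypothetical, not part of any IEEE standard.

```python
import datetime


class DecisionLog:
    """A minimal 'flight recorder' for algorithmic decisions.

    Each record captures the inputs, model identifier, and output of a
    single prediction so that behaviour can be traced after the fact.
    """

    def __init__(self):
        self.records = []

    def record(self, model_id, inputs, output, note=""):
        # Store a timestamped, self-describing record of one decision.
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_id": model_id,
            "inputs": inputs,
            "output": output,
            "note": note,
        }
        self.records.append(entry)
        return entry

    def trace(self, predicate):
        # Return all logged decisions matching a predicate, e.g. to
        # audit every case where the model emitted a flagged output.
        return [r for r in self.records if predicate(r)]


# Usage: log two hypothetical classifier decisions and audit the denials.
log = DecisionLog()
log.record("toy-classifier-v1", {"age": 34}, "approve")
log.record("toy-classifier-v1", {"age": 17}, "deny", note="under threshold")
flagged = log.trace(lambda r: r["output"] == "deny")
print(len(flagged))  # → 1
```

In a real deployment the records would be written to durable, append-only storage rather than an in-memory list, but the principle is the same: the log makes the computation behind each output inspectable even when the model itself remains opaque.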

## Further resources

Overarching Principles: Merit and Integrity; Respect for persons