“Perpetuating bias in criminal justice: There are many documented cases of AI gone wrong in the criminal justice system. The use of AI in this context often occurs in two different areas:

- risk scoring—evaluating whether or not a defendant is likely to reoffend in order to recommend sentencing and set bail—or
- so-called “predictive policing,” using insights from various data points to predict where or when crime will occur and direct law enforcement action accordingly.38

In many cases, these efforts are likely well-intentioned. Use of machine learning for risk scoring of defendants is advertised as removing the known human bias of judges in their sentencing and bail decisions.39 And predictive policing efforts seek to best allocate often-limited police resources to prevent crime, though there is always a high risk of mission creep.40 However, the recommendations of these AI systems often further exacerbate the very bias they are trying to mitigate, either directly or by incorporating factors that are proxies for bias.” (Access Now, 2018, p. 15)