Challenges

Respect for diversity, non-discrimination, and fairness

AI education

Respect for persons includes respect for difference, non-discrimination, and fairness in the treatment of persons, with particular protection for vulnerable populations. This concern is particularly salient in the context of AI because models built on historical data may reflect existing societal biases in ways that make them opaque or unaccountable. Systems should therefore provide for stakeholder participation and use regardless of stakeholders' characteristics.

New technologies, including AI, have the potential for bias at multiple phases of a project (problem identification, training data, modeling, implementation) and in multiple ways, including through the use of data that reflects existing societal biases. While the reinforcement of existing biases in historical data is the typical focus, the potential to create new forms of bias should also be recognized: "Because they gain their insights from the existing structures and dynamics of the societies they analyse, data-driven technologies can reproduce, reinforce, and amplify the patterns of marginalisation, inequality, and discrimination that exist in these societies. Likewise, because many of the features, metrics, and analytic structures of the models that enable data mining are chosen by their designers, these technologies can potentially replicate their designers' preconceptions and biases. Finally, the data samples used to train and test algorithmic systems can often be insufficiently representative of the populations from which they are drawing inferences. This creates real possibilities of biased and discriminatory outcomes, because the data being fed into the systems is flawed from the start." (Leslie, 2019, p. 4)


# IEEE report

"Issue 2: A/IS can have biases that disadvantage specific groups

## Background

Even when reflecting the full system of community norms that was identified, A/IS may show operational biases that disadvantage specific groups in the community or instill biases in users by reinforcing group stereotypes. A system's bias can emerge in perception. For example, a passport application AI rejected an Asian man's photo because it insisted his eyes were closed (Griffiths, 2016). Bias can emerge in information processing. For instance, speech recognition systems are notoriously less accurate for female speakers than for male speakers (Tatman, 2016). System bias can affect decisions, such as a criminal risk assessment device that overpredicts recidivism by African Americans (Angwin et al., 2016). The system's bias can present itself even in its own appearance and presentation: the vast majority of humanoid robots have white "skin" color and use female voices (Riek and Howard, 2014).

The norm identification process detailed in Section 1 is intended to minimize individual designers' biases because the community norms are assessed empirically. The identification process also seeks to incorporate norms against prejudice and discrimination. However, biases may still emerge from imperfections in the norm identification process itself, from unrepresentative training sets for machine learning systems, and from programmers' and designers' unconscious assumptions. Therefore, unanticipated or undetected biases should be further reduced by including members of diverse social groups in both the planning and evaluation of A/IS and by integrating community outreach into the evaluation process (e.g., the DO-IT program and the RRI framework). Behavioral scientists and members of the target populations will be particularly valuable when devising criterion tasks for system evaluation and when assessing A/IS performance on those tasks. Such tasks would assess, for example, whether the A/IS apply norms in discriminatory ways to different races, ethnicities, genders, ages, or body shapes, or to people who use wheelchairs or prosthetics, and so on." (IEEE, 2019, p. 184)

Overarching Principles: Respect for persons; Beneficence