<%= partial 'templates/components/callout', locals: { type: "alert", title: "to add here", text: "tbc" } %>

## Minimise the risks of harms (non-maleficence) and conduct work that benefits people (beneficence)

Risks of harm and the likelihood of benefits should be considered with respect to engagement with research. In some systems (e.g. the US IRB model), there may be limitations on what committees can consider with respect to risks to society beyond those that individual participants may experience. At its most basic, a risk of people engaging with one activity arises from the opportunity costs that such engagement entails; asking students to engage in tasks targeted at research is asking them to do that instead of something else, and this must be justified by reasonable assumptions regarding benefit to them and a low risk of harm. Such risks may accrue over time where tools are introduced that have impacts on the allocation of resources.

The Australian National Statement on Ethical Conduct in Human Research summarises beneficence:

> "1.6 The likely benefit of the research must justify any risks of harm or discomfort to participants. The likely benefit may be to the participants, to the wider community, or to both.
>
> 1.7 Researchers are responsible for:
>
> (a) designing the research to minimise the risks of harm or discomfort to participants
>
> (b) clarifying for participants the potential benefits and risks of the research
>
> (c) the welfare of the participants in the research context.
>
> 1.8 Where there are no likely benefits to participants, the risk to participants should be lower than would be ethically acceptable where there are such likely benefits.
>
> 1.9 Where the risks to participants are no longer justified by the potential benefits of the research, the research must be suspended to allow time to consider whether it should be discontinued or at least modified.
> This decision may require consultation between researchers, participants, the relevant ethical review body, and the institution. The review body must be notified promptly of such suspension, and of any decisions following it (see paragraphs 1 to 1.1.10)."

[(NS 6-1.9)](/recmzjcgkv3ynoxbl)

Examples of types of harm that may be of interest in the context of introducing new technologies are:

- impacts on human-human interactions and relationships
- psychological impacts of engagement with technology
- the potential for tools to be misused or targeted at harmful ends, such as deception, manipulation, or surveillance
- the potential for an 'arms race' in the release of technologies, such that new tools are released without proper evaluation or consideration of ethical issues

> "New technologies give rise to greater risk of deliberate or accidental misuse, and this is especially true for A/IS. A/IS increases the impact of risks such as hacking, misuse of personal data, system manipulation, or exploitation of vulnerable users by unscrupulous parties. Cases of A/IS hacking have already been widely reported, with driverless cars, for example. The Microsoft Tay AI chatbot was famously manipulated when it mimicked deliberately offensive users. In an age where these powerful tools are easily available, there is a need for a new kind of education for citizens to be sensitized to risks associated with the misuse of A/IS. The EU’s General Data Protection Regulation (GDPR) provides measures to remedy the misuse of personal data. Responsible innovation requires A/IS creators to anticipate, reflect, and engage with users of A/IS. Thus, citizens, lawyers, governments, etc., all have a role to play through education and awareness in developing accountability structures (see Principle 6), in addition to guiding new technology proactively toward beneficial ends."
[(IEEE, 2019, p.32-33)](/reccfsnzpjvw0pcue)

> "Design and deploy AI systems to foster and to cultivate the welfare of all stakeholders whose interests are affected by their use. Do no harm with these technologies and minimise the risks of their misuse or abuse." (Leslie, 2019, p. 10)
>
> "Prioritise the safety and the mental and physical integrity of people when scanning horizons of technological possibility and when conceiving of and deploying AI applications." (Leslie, 2019, p. 11)

[(Leslie, 2019)](/rec7n2tgrh9rhypqj)

> "AI systems should neither cause nor exacerbate harm or otherwise adversely affect human beings. This entails the protection of human dignity (the human right to be valued and treated with respect because of one's personhood) as well as mental and physical integrity. AI systems and the environments in which they operate must be safe and secure. They must be technically robust and it should be ensured that they are not open to malicious use. Vulnerable persons should receive greater attention and be included in the development, deployment and use of AI systems. Particular attention must also be paid to situations where AI systems can cause or exacerbate adverse impacts due to asymmetries of power or information, such as between employers and employees, businesses and consumers or governments and citizens. Preventing harm also entails consideration of the natural environment and all living beings." (High-Level Expert Group on AI, 2019, p. 12)

[(High-Level Expert Group on AI, 2019)](/reclpiw2vvnostzv5)

#### Specific ethical concepts arising in the context of technology

<%= partial 'templates/components/callout', locals: { type: "alert", title: "here to review", text: "check this, these are now drawn from the same table as the concepts in the introduction and so the thread should be clear.
**check error**: something in the keys_principles template isn't working for beneficence but is working for respect." } %>

<% keys_principles = partial "templates/components/getkeys", locals: { my_filters: "(item[:category] == 'Principles') && (item[:title]&.match?(/beneficence/i))" } %>
<%# NOTE: this hardcoded record ID overrides the getkeys result above (see the alert about the beneficence lookup). %>
<% keys_principles = 'recmzjcGKv3yNOxbl' %>

<% callout_content = partial "templates/components/includeset", locals: { my_filters: "item[:category] == 'ChallengeInstances' && (item[:tags].include? 'AI') && (item[:OverarchingPrinciples].is_a?(Array) ? item[:OverarchingPrinciples]&.compact&.any? { |x| x&.match?(/#{keys_principles}/i) } : item[:OverarchingPrinciples]&.match?(/#{keys_principles}/i))" } %>

<%= partial 'templates/components/callout', locals: { type: "information", title: "How is AI research different?", text: "#{callout_content}" } %>

#### And for education

<%= partial 'templates/components/callout', locals: { type: "alert", title: "here to edit", text: "check this, these are now drawn from the same table as the concepts in the introduction and so the thread should be clear" } %>

<% callout_content = partial "templates/components/includeset", locals: { my_filters: "item[:category] == 'ChallengeInstances' && (item[:tags].include? 'education') && (item[:OverarchingPrinciples].is_a?(Array) ? item[:OverarchingPrinciples]&.compact&.any? { |x| x&.match?(/#{keys_principles}/i) } : item[:OverarchingPrinciples]&.match?(/#{keys_principles}/i))" } %>

<%= partial 'templates/components/callout', locals: { type: "information", title: "How is education research different?", text: "#{callout_content}" } %>
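The `my_filters` strings passed to the includeset partial are ordinary Ruby predicates evaluated against each content item. The sketch below restates that logic as plain methods, which may help when debugging the beneficence lookup noted in the alert above. The item hashes are illustrative assumptions, not the site's real content tables, and the helper names (`matches_principle?`, `challenge_instances`) are hypothetical:

```ruby
# A minimal, self-contained restatement of the includeset filter logic.
# Regex interpolation mirrors the partial's filter string: /#{key}/i.

# True when the item's OverarchingPrinciples (a String, or an Array of
# Strings possibly containing nils) matches the principle key,
# case-insensitively.
def matches_principle?(item, key)
  principles = item[:OverarchingPrinciples]
  if principles.is_a?(Array)
    principles.compact.any? { |p| p.match?(/#{key}/i) }
  else
    !!principles&.match?(/#{key}/i)
  end
end

# Select challenge instances carrying a given tag that cite the principle key.
def challenge_instances(items, tag:, key:)
  items.select do |item|
    item[:category] == 'ChallengeInstances' &&
      item[:tags].include?(tag) &&
      matches_principle?(item, key)
  end
end

# Illustrative stand-ins for the real content records.
items = [
  { category: 'ChallengeInstances', tags: ['AI'],
    OverarchingPrinciples: ['recmzjcGKv3yNOxbl'] },  # passes all three checks
  { category: 'ChallengeInstances', tags: ['education'],
    OverarchingPrinciples: 'recSomethingElse' },     # different principle key
  { category: 'Principles', tags: ['AI'],
    OverarchingPrinciples: ['recmzjcGKv3yNOxbl'] }   # wrong category
]

challenge_instances(items, tag: 'AI', key: 'recmzjcgkv3ynoxbl').length
# => 1 (only the first item survives all three checks)
```

Note that the match is case-insensitive, so a lowercased key still finds `recmzjcGKv3yNOxbl`, and the ternary in the filter string exists only because `OverarchingPrinciples` can arrive as either a single string or an array.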