Merit and Integrity
"Unless proposed research has merit, and the researchers who are to carry out the research have integrity, the involvement of human participants in the research cannot be ethically justifiable." (NS, preamble)

1.1 Research that has merit is:

- (a) justifiable by its potential benefit, which may include its contribution to knowledge and understanding, to improved social welfare and individual wellbeing, and to the skill and expertise of researchers. What constitutes potential benefit and whether it justifies research may sometimes require consultation with the relevant communities
- (b) designed or developed using methods appropriate for achieving the aims of the proposal
- (c) based on a thorough study of the current literature, as well as previous studies. This does not exclude the possibility of novel research for which there is little or no literature available, or research requiring a quick response to an unforeseen situation
- (d) designed to ensure that respect for the participants is not compromised by the aims of the research, by the way it is carried out, or by the results
- (e) conducted or supervised by persons or teams with experience, qualifications and competence that are appropriate for the research
- (f) conducted using facilities and resources appropriate for the research.

1.2 Where prior peer review has judged that a project has research merit, the question of its research merit is no longer subject to the judgement of those ethically reviewing the research.

1.3 Research that is conducted with integrity is carried out by researchers with a commitment to:

- (a) searching for knowledge and understanding
- (b) following recognised principles of research conduct
- (c) conducting research honestly
- (d) disseminating and communicating results, whether favourable or unfavourable, in ways that permit scrutiny and contribute to public knowledge and understanding. (NS, 1.1–1.3)
Specific ethical concepts arising in the context of technology
How is AI research different?
- AI has the potential for impacts that extend beyond the participants involved to society at large, and it may not be clear what these impacts will be
- Concerns with the merit of new systems
- How AI can model norms to govern its actions
- In seeking to protect participants we may assume deidentification; however, this may serve to marginalise participant contributions to research
- Decisions throughout research processes have implications for risks and benefits; assessment of these is impacted by researcher-participant relationships
- There may be gaps in the adequacy of institutional ethics review committees to provide oversight of AI research
- It may be perceived that as long as legal compliance is met, the responsibility for the impacts of AI is addressed
- Work involving commercial actors may be driven by commercial interests over scientific rigour, impacting both the conduct of evaluation and the validity of claims made regarding system efficacy
How do we embed values throughout AI workflows?
- Designing for fairness and ensuring models address challenges that communities agree should be addressed
- The intended uses, assumptions, and limitations of tools must be clear to all users, including via technical documentation
- The potential harms of AI, or opportunities to maximise its benefits, may not be adequately addressed by teams without an understanding of the sites of use and their distinctive ethical dimensions
- Decisions with unclear grounding
- AI nudges may be deployed inappropriately or fail to achieve their target outcome, in ways that are challenging for nudge-receivers to address
- Data fairness, and bias at both the system level and the data or input level
And for education
How is education research different?
- In seeking to protect participants we may assume deidentification; however, this may serve to marginalise participant contributions to research
- Decisions throughout research processes have implications for risks and benefits; assessment of these is impacted by researcher-participant relationships
- Opportunities to engage ethically with AI are entwined with learning opportunities in ways that may lead to inequities in benefits and access
- Work involving commercial actors may be driven by commercial interests over scientific rigour, impacting both the conduct of evaluation and the validity of claims made regarding system efficacy
- The intended uses, assumptions, and limitations of tools must be clear to all users, including via technical documentation
- Decisions with unclear grounding
- Professionals have responsibility for their engagement with AI systems
Think big-picture about the wider impacts of the AI technologies you are conceiving and developing. Think about the ramifications of their effects and externalities for others around the globe, for future generations, and for the biosphere as a whole. (Turing)
Competence: "Creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation." (IEEE)
Public participation
Prioritise diversity, participation, and inclusion at all points in the design, development, and deployment processes of AI innovation. Encourage all voices to be heard and all opinions to be weighed seriously and sincerely throughout the production and use lifecycle.
Trust and workforce diversity
Trust is key to participation in research and to public confidence in the research process. Research should seek to foster trust in the way it is designed and implemented. Inequities in workforce diversity may lead to loss of expertise, shifts in research foci, and poorer trust from stakeholder groups.
- Effectiveness: The effectiveness of any tool must be established through robust approaches that establish the validity of claims made in a way that is open to interrogation. "Creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS."
Effectiveness: "Creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS. Responsible adoption and deployment of A/IS are essential if such systems are to realize their many potential benefits to the well-being of both individuals and societies. A/IS will not be trusted unless they can be shown to be effective in use. Harms caused by A/IS, from harm to an individual through to systemic damage, can undermine the perceived value of A/IS and delay or prevent its adoption. Operators and other users will therefore benefit from measurement of the effectiveness of the A/IS in question. To be adequate, effective measurements need to be both valid and accurate, as well as meaningful and actionable. And such measurements must be accompanied by practical guidance on how to interpret and respond to them." (IEEE, 2019, pp. 26-27)
- Accuracy of systems is a feature of this effectiveness, although accuracy must be considered in terms of human use and interpretation (improved accuracy may not lead to improved outcomes)
- Systems should be predictable and reliable
“Technical robustness and safety including resilience to attack, security and general safety, accuracy, reliability, and reproducibility.” (European Commission, 2022, p. 19)
“Accountability including auditability, minimisation and reporting of negative impact, trade-offs, and redress. The considerations and requirements can help educators, school leaders and technology providers to adequately assess the impact, address the potential risks, and realise the benefits of an AI system deployed and used in education. As such they guide the development, deployment and use of trustworthy AI systems.” (European Commission, 2022, p. 19)
A/IS shall be created and operated to provide an unambiguous rationale for decisions made. Background: The programming, output, and purpose of A/IS are often not discernible by the general public. Based on the cultural context, application, and use of A/IS, people and institutions need clarity around the manufacture and deployment of these systems to establish responsibility and accountability, and to avoid potential harm. Additionally, manufacturers of these systems must be accountable in order to address legal issues of culpability. It should, if necessary, be possible to apportion culpability among responsible creators (designers and manufacturers) and operators to avoid confusion or fear within the general public. Accountability and partial accountability are not possible without transparency, thus this principle is closely linked with Principle 5 – Transparency.
Creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation. Background: A/IS can and often do make decisions that previously required human knowledge, expertise, and reason. Algorithms potentially can make even better decisions, by accessing more information, more quickly, and without the error, inconsistency, and bias that can plague human decision-making. As the use of algorithms becomes common and the decisions they make become more complex, however, the more normal and natural such decisions appear. Operators of A/IS can become less likely to question and potentially less able to question the decisions that algorithms make. Operators will not necessarily know the sources, scale, accuracy, and uncertainty that are implicit in applications of A/IS. As the use of A/IS expands, more systems will rely on machine learning where actions are not preprogrammed and that might not leave a clear record of the steps that led the system to its current state. Even if those records do exist, operators might not have access to them or the expertise necessary to decipher those records. Standards for the operators are essential. Operators should be able to understand how A/IS reach their decisions, the information and logic on which the A/IS rely, and the effects of those decisions. Even more crucially, operators should know when they need to question A/IS and when they need to overrule them. Creators of A/IS should take an active role in ensuring that operators of their technologies have the knowledge, experience, and skill necessary not only to use A/IS, but also to use it safely and appropriately, towards their intended ends. Creators should make provisions for the operators to override A/IS in appropriate circumstances. While standards for operator competence are necessary to ensure the effective, safe, and ethical application of A/IS, these standards are not the same for all forms of A/IS.
The level of competence required for the safe and effective operation of A/IS will range from elementary, such as “intuitive” use guided by design, to advanced, such as fluency in statistics.
The basis of a particular A/IS decision should always be discoverable.
Background: A key concern over autonomous and intelligent systems is that their operation must be transparent to a wide range of stakeholders for different reasons, noting that the level of transparency will necessarily be different for each stakeholder. Transparent A/IS are ones in which it is possible to discover how and why a system made a particular decision, or in the case of a robot, acted the way it did. The term “transparency” in the context of A/IS also addresses the concepts of traceability, explainability, and interpretability. A/IS will perform tasks that are far more complex and have more effect on our world than prior generations of technology. Where the task is undertaken in a non-deterministic manner, it may defy simple explanation. This reality will be particularly acute with systems that interact with the physical world, thus raising the potential level of harm that such a system could cause. For example, some A/IS already have real consequences to human safety or well-being, such as medical diagnosis or driverless car autopilots. Systems such as these are safety-critical systems. At the same time, the complexity of A/IS technology and the non-intuitive way in which it may operate will make it difficult for users of those systems to understand the actions of the A/IS that they use, or with which they interact. This opacity, combined with the often distributed manner in which the A/IS are developed, will complicate efforts to determine and allocate responsibility when something goes wrong. Thus, lack of transparency increases the risk and magnitude of harm when users do not understand the systems they are using, or there is a failure to fix faults and improve systems following accidents.
Lack of transparency also increases the difficulty of ensuring accountability (see Principle 6 – Accountability). Achieving transparency, which may involve a significant portion of the resources required to develop the A/IS, is important to each stakeholder group for the following reasons:

1. For users, what the system is doing and why.
2. For creators, including those undertaking the validation and certification of A/IS, the systems’ processes and input data.
3. For an accident investigator, if accidents occur.
4. For those in the legal process, to inform evidence and decision-making.
5. For the public, to build confidence in the technology.

Develop new standards that describe measurable, testable levels of transparency, so that systems can be objectively assessed and levels of compliance determined. For designers, such standards will provide a guide for self-assessing transparency during development and suggest mechanisms for improving transparency. The mechanisms by which transparency is provided will vary significantly, including but not limited to, the following use cases:

1. For users of care or domestic robots, a “why-did-you-do-that button” which, when pressed, causes the robot to explain the action it just took.
2. For validation or certification agencies, the algorithms underlying the A/IS and how they have been verified.
3. For accident investigators, secure storage of sensor and internal state data comparable to a flight data recorder or black box.

IEEE P7001™, IEEE Standard for Transparency of Autonomous Systems, is one such standard, developed in response to this recommendation. (IEEE, 2019, pp. 28-29)
NS Fjeld Schiff Jobin
And in education
Research worth having
Particular Strategies
These are strategies that relate to merit and integrity, or to research worth.
Overview of considerations in transparency
Principle of Accountability, key considerations
Considerations in assessing trustworthy AI - Access to data
Questions to consider in key stages of AI and machine learning based research, regarding transparency and explainability
governance-question project-dissemination reflection-questions
Questions to consider in key stages of AI and machine learning based research, regarding corporate-academic research interaction
Questions to consider in key stages of AI and machine learning based research, regarding downstream responsibility
internet-research project-dissemination reflection-questions
Considerations in assessing trustworthy AI - Reliability and reproducibility
Systematic analysis of consequences, transparency, user agency, and safeguards are central to implementation of AI nudges
Questions to consider in key stages of AI and machine learning based research, regarding aims and risks
project-design reflection-questions
Considerations in assessing trustworthy AI - Accuracy
Questions to consider in SoTL research professional learning
education-research reflection-questions
Considerations in assessing trustworthy AI - Communication
Key discussion points for planning SoTL research regarding power relationships
education-research reflection-questions
Key discussion points for planning SoTL research regarding secondary analysis of data
education-research reflection-questions
Considerations in assessing trustworthy AI - Minimising and reporting negative impacts
Questions to consider in assessing benefits
Principle of Safety, key considerations
Guiding questions for educators regarding Accountability of AI in Education
Guiding questions for educators regarding Technical Robustness and Safety of AI in Education
Questions to consider in assessing risk of harms
Principle of responsible delivery through human-centred implementation, key considerations
Considerations in assessing trustworthy AI - Traceability
Guiding questions for educators regarding Transparency of AI in Education
Questions to consider in SoTL research inception
education-research reflection-questions
Considerations in assessing trustworthy AI - Quality and integrity of data
Ethics must be embedded in STEM education and professional accreditation
Questions to consider in key stages of AI and machine learning based research, regarding the Research Analytical Process
project-analysis reflection-questions
Questions to consider in key stages of AI and machine learning based research, regarding methodological scope
project-design reflection-questions
Align AI models to a shared vision of education
Questions to consider in key stages of AI and machine learning based research, regarding merit
project-design reflection-questions
Cross-disciplinary work should be incentivised
Questions to consider in key stages of AI and machine learning based research, regarding reproducibility and replicability
project-dissemination reflection-questions
Considerations in assessing trustworthy AI - Auditability
Questions to consider regarding re-identification and disciplinary methodological norms
Particular Cases
Using chatbots to guide learners and parents through administrative tasks
Resource-management-and-administration