Ethical action

How we interact with each other has ethical dimensions. This is true of both direct interactions (e.g., face-to-face communication) and indirect interactions (e.g., the ways our decisions affect people at a distance when we use technology, shop, or conduct particular research projects). Ethical conduct involves responsiveness to these ethical dimensions in acting to respect others. It thus goes beyond rule following; ethical principles and other ethical concepts are intended to provide tools that support shared consideration of human values in fostering respect for persons.

In many contexts, ethical issues can be rendered invisible, static, or simple (in the sense of unambiguous, straightforward, or uninvolved). This may occur because ethical reflection is not embedded in professional practice and learning contexts, is sometimes only considered in stand-alone courses or events, and/or is abstracted from the practical contexts we face in working and living in society.

However, ethical issues are complex, ongoing, and negotiated. That complexity arises from plurality in the values and principles held within and across contexts, from differing norms regarding practices used to navigate ethical issues, and from the inherent difficulty of navigating tensions between values. Ethical action involves ongoing consideration of these issues. Ethical issues are not always clear-cut; they may involve ongoing negotiation, and while there may be 'better' and 'worse' approaches, there are contexts where it is reasonable to say there is no 'correct' answer.

Ethics vocabulary

Reflection on ethical issues has a long history across societies, reflected in scholarly and cultural materials ranging from religious teachings and scholarly and fictional writings to plays, slam poetry, paintings, and more. All of these works can form part of our ethical vocabulary, providing shared language, metaphors, or reference points for discussing issues.1

Consider creative works, such as movies, that reflect ethical issues in educational settings and the use of AI in society.

The history of research ethics is often narrated as grounded in key events that have prompted action. This narrative provides important reflections on fundamental human rights and their violation, including the atrocities prosecuted in the Nuremberg tribunals. It is important to remember, though, that the development of ethical conduct and guidance does not always follow events directly or causally: we can and should be aware of, and responsive to, ethical concerns outside the context of clear and significant harms, and we must also recognise that significant harms have not always led to better ethical standards, guidelines, or regulation. It is also important to recognise that historical narratives can suggest universal progress; while there are international similarities in research ethics, the development of standards, and the harms incurred, have not been universal and are ongoing.2

Key events have certainly increased attention to ethical concerns and shaped our common understanding of ethical standards. The 20th century saw a range of human rights legal instruments emerge internationally to protect the dignity (the human right to be valued and treated with respect because of one's personhood) of humans across contexts. The atrocities prosecuted in the Nuremberg tribunals gave rise to the Nuremberg Code, ethical principles for human experimentation, with the subsequent Declaration of Helsinki developing these principles across a number of revisions. While these principles focus on biomedical research, ethics guidelines have since been developed across professional and research domains.

Research and ethics

What is research?

Definitions of research vary, but in a broad sense it involves the systematic creation of knowledge, or the synthesis of existing knowledge for creative purposes, and may occur across sectors of society, including research institutions such as universities and areas of practice from teaching to art. Typically we do not consider an activity to be research if it is grounded in existing routine practices or is not targeted at knowledge generation (e.g., teaching).

Research holds significant benefit for society, in helping us understand and improve our world. Human research takes humans and their interactions as the focus, by analysing their physical selves (e.g., in medical research), psychological features, and interactions in groups and in society at large.

Across this range of research, risks may arise. These may come about through ethical violations (cases of unethical action and unethical motivation) or ethical insensitivity (cases of unethical action but unclear motivation). They may also arise through less obvious concerns, such as the use of methods that are inadequate for the claims being made, or in cases where it is not clear what the best course of action would be.

Research and Trust

Research plays an important role in society and involves navigating complex ethical issues. For research to be impactful, there is a responsibility to undertake it ethically. Trust in this process must be maintained both with participants, who directly contribute to individual projects often for altruistic reasons, and with wider society.

Although unethical action or ethical violations may be relatively unusual, both hard regulation (e.g., committee approvals) and soft regulation (e.g., guidelines and codes) foster ethical practice that goes beyond mere compliance with prohibitions on clearly unethical behaviour. Ethical reflection and governance provide opportunities for decision-making to incorporate multiple voices, and foster trust in research through the expression of shared values.3

Guidelines are intended to provide support to those involved with research in navigating the ethical dimensions of that research, including:

  • researchers
  • Research Ethics Committees (RECs) and others conducting ethics review of research
  • institutions that set up the processes of ethics review, and whose employees, resources and facilities are involved in research
  • funding organisations
  • agencies that set standards
  • governments

What is human research?

Here we are particularly interested in the further questions:

  • What is human research?
  • When is research involving 'AI' human research?
  • What are the distinctive features of research in education, especially where it involves use of AI?

The National Statement addresses the first question thus: "Human research is conducted with or about people, or their data or tissue. Human participation in research is therefore to be understood broadly, to include the involvement of human beings through:

  • taking part in surveys, interviews or focus groups
  • undergoing psychological, physiological or medical testing or treatment
  • being observed by researchers
  • researchers having access to their personal documents or other materials
  • the collection and use of their body organs, tissues or fluids (for example, skin, blood, urine, saliva, hair, bones, tumour and other biopsy specimens) or their exhaled breath
  • access to their information (in individually identifiable, re-identifiable or non-identifiable form) as part of an existing published or unpublished source or database. (You may find this resource from the Future of Privacy Forum useful on the distinction between identifiable, re-identifiable, and anonymous data (pdf).)

The term participants is, therefore, used very broadly [...] to include those who may not even know they are the subjects of research, for example, where the need for their consent for the use of their tissue or data has been waived by a Human Research Ethics Committee (HREC).

In addition, the conduct of human research often has an impact on the lives of others who are not participants. When this impact is reasonably foreseeable, it may raise ethical questions for researchers and for those ethically reviewing research." (NS, 2019, Preamble).
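The distinction drawn above between identifiable, re-identifiable, and non-identifiable data can be made concrete with a small sketch. This is a minimal illustration under assumed names (the record fields, salt, and helper functions are hypothetical), not a recommended de-identification procedure:

```python
import hashlib

# A hypothetical identifiable record; the field names are illustrative only.
record = {"name": "Jo Smith", "school": "Hillside", "score": 72}

def pseudonymise(rec, salt):
    """Replace the direct identifier with a salted hash.

    The result is re-identifiable by anyone holding the salt (or a linkage
    key), so it still counts as personal information.
    """
    out = dict(rec)
    out["name"] = hashlib.sha256((salt + rec["name"]).encode()).hexdigest()[:12]
    return out

def de_identify(rec):
    """Drop the direct identifier entirely.

    This is closer to non-identifiable, although combinations of the
    remaining fields can still re-identify people in small populations.
    """
    return {k: v for k, v in rec.items() if k != "name"}

pseudonymous = pseudonymise(record, salt="project-secret")
anonymous = de_identify(record)
```

The point of the sketch is that "de-identified" is a spectrum: the pseudonymised record can be linked back with the key, while even the stripped record may remain re-identifiable in combination with other data.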

What do we mean by Artificial Intelligence?

AI in Education

AI in Education: Where and why?

“...learners and educators increasingly use Artificial Intelligence (AI) systems, sometimes without realising it. Search engines, smart assistants, chatbots, language translation, navigation apps, online videogames and many other applications use Artificial Intelligence in our everyday lives. AI systems rely on data, which is collected in different modalities (e.g. sound, images, text, posts, clicks) and all together form our digital traces. AI has great potential to enhance education and training for learners, educators and school leaders." (European Commission, 2022, p. 10)

These systems are being used across a range of functions: in administrative processes, directly by learners through 'tutoring systems' and other personalised learning tools, in classrooms with groups of learners, and in identifying specific learning needs (European Commission, 2022). Evaluating the impact of these tools is important, and to do that effectively a wide range of stakeholders need at least some level of AI literacy (European Commission, 2022). As Cardona et al. (2023) set out, this prompts two questions:

  1. What are the shared values that underpin our understanding of education, and thus the role of AI in promoting that model?
  2. How can this shared vision guide the use of AI as it is introduced over time?

Ethics in education research

“Why do we need these guidelines? The use of AI systems can potentially enhance teaching, learning and assessment, provide better learning outcomes and help schools to operate more efficiently. However, if those same AI applications are not properly designed or used carelessly, this could lead to harmful consequences. Educators need to be aware and ask questions whether AI systems they are using are reliable, fair, safe and trustworthy and that the management of educational data is secure, protects the privacy of individuals and is used for the common good. “Ethical AI” is used to indicate the development, deployment and use of AI that ensures compliance with ethical norms, ethical principles and related core values.” (European Commission, 2022, p. 11). Because of the blurring of roles across teachers, researchers, developers, etc., there are challenges for stakeholders who may hold dual roles. This issue is paralleled in any context in which educators conduct research on their students, including the Scholarship of Teaching and Learning (SoTL) (see Fedoruk, 2017, re: SoTL).

AI in Education: What and How?

“What do we mean by AI and data use in Education? Schools typically process substantial amounts of educational data including personal information about students, parents, staff, management and suppliers. Data collected, used, and processed in education is often referred to as “educational data”. These consist of data recorded in student information systems for example, educational achievements, parent names, assessment grades, as well as micro-level data generated when digital tools are used. When students interact with digital devices, they generate digital traces such as mouse clicks, data on opened pages, the timing of interaction events, or key presses. In the same way when using intelligent tutoring systems (ITS) in classrooms, learning mathematics or modern languages produce learning activity traces. All this data can be combined to capture each student’s online behaviour. This type of trace data (digital usage and learning activity traces) is often used for learning analytics (LA). Data in student information systems can be further used for resource and course planning and to predict dropout and guidance.” (European Commission, 2022, p. 11)
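The "digital traces" described above can be pictured as simple event records that are aggregated into derived features for learning analytics. A minimal sketch, assuming a hypothetical event schema (the field names and event types are illustrative, not a standard):

```python
from collections import Counter
from datetime import datetime

# Hypothetical trace events of the kind the passage describes: mouse clicks,
# opened pages, key presses, each stamped with a student and a time.
events = [
    {"student": "s1", "type": "page_open", "ts": datetime(2024, 5, 1, 9, 0)},
    {"student": "s1", "type": "click",     "ts": datetime(2024, 5, 1, 9, 1)},
    {"student": "s2", "type": "page_open", "ts": datetime(2024, 5, 1, 9, 2)},
    {"student": "s1", "type": "key_press", "ts": datetime(2024, 5, 1, 9, 3)},
]

def activity_counts(log):
    """Aggregate raw digital traces into per-student activity counts,
    the kind of derived feature a learning-analytics pipeline might use."""
    return dict(Counter(e["student"] for e in log))
```

Even this trivial aggregation shows why such data raises ethical questions: individually innocuous clicks combine into a behavioural profile of each student.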

“AI can be defined as 'automation based on associations.' When computers automate reasoning based on associations in data (or associations deduced from expert knowledge), two shifts fundamental to AI occur and shift computing beyond conventional edtech: (1) from capturing data to detecting patterns in data and (2) from providing access to instructional resources to automating decisions about instruction and other educational processes. Detecting patterns and automating decisions are leaps in the level of responsibilities that can be delegated to a computer system. The process of developing an AI system may lead to bias in how patterns are detected and unfairness in how decisions are automated. Thus, educational systems must govern their use of AI systems.” (Cardona et al., 2023, p. 1)

"The definition of an Artificial Intelligence system (AI system) proposed in the draft AI Act is 'software that is developed with one or more of the techniques and approaches [...] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with'” (European Commission, 2022, p. 10)

Types of AI system

In thinking about types of AI systems we can think about technical categorisations as the EC does:

a) "Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; b) Logic and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; c) Statistical approaches, Bayesian estimation, search and optimisation methods.” (European Commission, 2022, p. 10)

However, it may be more useful to look at types of use. As Cardona et al., (2023) set out, educational technology can include "both (a) technologies specifically designed for educational use, as well as (b) general technologies that are widely used in educational settings.” (Cardona et al., 2023, p. 1)

“Below we address three additional perspectives on what constitutes AI. Educators will find these different perspectives arise in the marketing of AI functionality and are important to understand when evaluating edtech systems that incorporate AI. One useful glossary of AI for Education terms is the CIRCLS Glossary of Artificial Intelligence Terms for Educators. AI is not one thing but an umbrella term for a growing set of modeling capabilities, as visualized” (Cardona et al., 2023, p. 11)

Figure: Components, types, and subfields of AI (Cardona et al., 2023, p. 11), based on Regona, M., Yigitcanlar, T., Xia, B., & Li, R.Y.M. (2022). Opportunities and adoption challenges of AI in the construction industry: A PRISMA review. Journal of Open Innovation: Technology, Market, and Complexity, 8(45). https://doi.org/10.3390/joitmc8010045

Human-Like Reasoning: “The idea of 'human-like' is helpful because it can be a shorthand for the idea that computers now have capabilities that are very different from the capabilities of early edtech applications. Educational applications will be able to converse with students and teachers, co-pilot how activities unfold in classrooms, and take actions that impact students and teachers more broadly. There will be both opportunities to do things much better than we do today and risks that must be anticipated and addressed. The “human-like” shorthand is not always useful, however, because AI processes information differently from how people process information. When we gloss over the differences between people and computers, we may frame policies for AI in education that miss the mark.” (Cardona et al., 2023, p. 12)

An Algorithm that Pursues a Goal: “This second definition emphasizes that AI systems and tools identify patterns and choose actions to achieve a given goal. These pattern recognition capabilities and automated recommendations will be used in ways that impact the educational process, including student learning and teacher instructional decision making. For example, today’s personalized learning systems may recognize signs that a student is struggling and may recommend an alternative instructional sequence.” (Cardona et al., 2023, p. 12)

“Although this perspective can be useful, it can be misleading. A human view of agency, pursuing goals, and reasoning includes our human abilities to make sense of multiple contexts. For example, a teacher may see three students each make the same mathematical error but recognize that one student has an Individualized Education Program to address vision issues, another misunderstands a mathematical concept, and a third just experienced a frustrating interaction on the playground; the same instructional decision is therefore not appropriate. However, AI systems often lack data and judgement to appropriately include context as they detect patterns and automate decisions. Further, case studies show that technology has the potential to quickly derail from safe to unsafe or from effective to ineffective when the context shifts even slightly. For this and other reasons, people must be involved in goal setting, pattern analysis, and decision-making.15” (Cardona et al., 2023, p. 13)
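The pattern-detection-to-automated-decision loop described in this perspective, and its dependence on context, can be sketched as a deliberately simple rule. The thresholds, labels, and the context escape hatch are all illustrative assumptions, not how any real system works:

```python
def recommend(errors, hints_used, context_flags=()):
    """Flag a student as 'struggling' from trace patterns and suggest an
    alternative instructional sequence.

    As the quoted passage notes, systems often lack the contextual data to
    make this judgement well, so any known context (e.g. an IEP, a bad day)
    routes the case to a human rather than automating the decision.
    """
    if context_flags:
        return "refer to teacher"   # context the model cannot weigh itself
    if errors >= 3 and hints_used >= 2:
        return "alternative sequence"
    return "continue"
```

The same error count yields different outcomes depending on context flags, which is exactly the limitation the quote identifies: the pattern is easy to automate, the judgement is not.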

Intelligence Augmentation: “'Intelligence Augmentation' (IA) centers 'intelligence' and 'decision making' in humans but recognizes that people sometimes are overburdened and benefit from assistive tools. AI may help teachers make better decisions because computers notice patterns that teachers can miss. For example, when a teacher and student agree that the student needs reminders, an AI system may provide reminders in whatever form a student likes without adding to the teacher’s workload. Intelligence Augmentation (IA) uses the same basic capabilities of AI, employing associations in data to notice patterns, and, through automation, takes actions based on those patterns. However, IA squarely focuses on helping people in human activities of teaching and learning, whereas AI tends to focus attention on what computers can do.” (Cardona et al., 2023, p. 14)

When is research involving 'AI' human research?

As was noted with the emergence of internet research (Markham and Buchanan, 2012), new technologies have had dramatic impact, and this creates challenges in delineating research. To paraphrase their discussion (pp. 3-4) of internet research for the context of AI:

"AI research encompasses inquiry that:

(a) utilizes AI technologies to collect data or information, e.g., through synthetic data creation (e.g., of test data in a survey), use of retrieval augmented generation (e.g., automated literature review), or generation of other materials (e.g., synthetic research 'personas', creation of visual stimuli, etc.);
(b) studies how people use and access AI;
(c) utilizes or engages AI in data processing, analysis, or storage of datasets, databanks, and/or repositories, e.g., through generated analysis code or analyses, obfuscation of data or its aggregation or deidentification through automated means;
(d) studies AI technologies as software, code, and socio-material artefacts;
(e) examines the design or structures of systems, interfaces, algorithms, and classes, and their use in material context;
(f) employs visual and textual analysis, semiotic analysis, content analysis, or other methods of analysis to study AI, its purposes, development, implementations, and evaluation;
(g) studies inception, development, implementation, evaluation, and regulation of AI by governments, industries, corporations, research institutions, and military forces.” (Markham and Buchanan, 2012, paraphrased from pp. 3-4)

As they further note, AI research is not contingent on the study of a particular technology; given that technologies - including those involving AI - mediate many aspects of life, the scope of AI research is broad.

Broadly, as Leslie (2019) flags in the Turing Institute guidance, to ground work involving AI in an ethical stance, for the public benefit:

  • "You will have to ensure that your AI project is ethically permissible by considering the impacts it may have on the wellbeing of affected stakeholders and communities.
  • You will have to ensure that your AI project is fair and non-discriminatory by accounting for its potential to have discriminatory effects on individuals and social groups, by mitigating biases that may influence your model’s outputs, and by being aware of the issues surrounding fairness that come into play at every phase of the design and implementation pipeline.
  • You will have to ensure that your AI project is worthy of public trust by guaranteeing to the extent possible the safety, accuracy, reliability, security, and robustness of its product.
  • You will have to ensure that your AI project is justifiable by prioritising both the transparency of the process by which your model is designed and implemented, and the transparency and interpretability of its decisions and behaviours. ” (Leslie, 2019, p. 6)

What are the distinctive features of research in education, especially where it involves use of AI?

Across the National Statement and the review materials (Fjeld, Schiff, and the High-Level Expert Group on AI), it is acknowledged that ethical action respects fundamental human rights, seeking to promote these rights throughout the research endeavour.

Ethical action, and the particular activities we engage in - including teaching, research, and data processing - are governed by legal and contractual obligations. These are not the focus of this guideline, and you should ensure you are familiar with and clear about these considerations.

However, whatever our work involves, it occurs in wider society. Particularly in the context of research:

ethical reflection should include consideration of “...Societal and environmental wellbeing including sustainability and environmental friendliness, social impact, society, and democracy.” (European Commission, 2022, p. 18).

This includes that:

“AI systems should serve to maintain and foster democratic processes and respect the plurality of values and life choices of individuals. AI systems must not undermine democratic processes, human deliberation or democratic voting systems. AI systems must also embed a commitment to ensure that they do not operate in ways that undermine the foundational commitments upon which the rule of law is founded, mandatory laws and regulation, and to ensure due process and equality before the law.” (High-Level Expert Group on AI, 2019, p. 11)

Crucially:

"A/IS technologies can be narrowly conceived from an ethical standpoint. They can be legal, profitable, and safe in their usage, yet not positively contribute to human and environmental wellbeing. This means technologies created with the best intentions, but without considering wellbeing, can still have dramatic negative consequences on people’s mental health, emotions, sense of themselves, their autonomy, their ability to achieve their goals, and other dimensions of wellbeing."(IEEE, 2019, p.23-24)

Strategies for tackling concerns about the social and environmental sustainability of AI

Principles overview

Traditionally, research ethics is framed in terms of researcher-participant interaction (we use 'participant' to refer to those choosing to participate in research, those for whom consent waivers may be in place or where some stakeholders may fulfil participant-researcher roles (e.g., teachers), and those 'data subjects' whose data is used in research, often without their knowledge). In this context, the fundamental principles of research ethics are intended to ground those relationships in respect for persons.

There are well-established principles for research ethics, with many commonalities across principles produced in different communities. Underpinning research ethics are three principles: respect for persons, beneficence (and non-maleficence), and justice. A fourth principle of 'Merit and Integrity' - i.e., that research should be worthwhile and rigorous, and executed with integrity - is implicit in many guidelines and made explicit in the Australian National Statement on research ethics with human participants.

This guideline is not intended to offer a prescriptive model of ethics principles, and so although particular principles are highlighted throughout, readers may wish to consider how the principles in this guideline relate to those from their own research context. For example, the excellent BERA guidelines largely parallel the principles in this guideline, although different language is used in places. The BERA guidelines also provide a helpful overview of the responsibilities of different stakeholders, and thus act as an excellent supplement.

Although these principles are broadly common across research ethics systems, it is worth highlighting that (1) implementation varies, and what is acceptable in one context may not be in another; (2) local values and cultures of research and ethics must be considered in navigating ethical action, and there may be ethical concepts with important nuances in interpretation; and (3) there are differences in ethical and legal context internationally which have an impact on the kind of research conducted or the expression of research ethics in that work. A key variation internationally concerns the way in which long-range impacts are considered.

Ethical reflection occurs from the choice of issues to research, through research planning, execution, analysis, and dissemination.

  • Respect for persons
  • Beneficence
  • Justice
  • Worth - without this, research cannot be justified

Adopting an Ethical approach

A range of approaches are used to illustrate the use of this guidance, and to connect principles to practice, including:

  1. Cases that show how to apply the guidance to different kinds of situations, making clear the ethical concepts relevant to each case, and any tensions present
  2. Scenarios that demonstrate application at different phases of a project
  3. Tools – exemplar forms, technologies, processes, legal instruments, etc. – that may be helpful to include, particularly where they connect to existing practice

This page draws content from

Footnotes


  1. Dalton-Brown, S. (18 January 2023) Can reading Australian novels help us become more ethical researchers? Research Ethics Monthly: https://ahrecs.com/can-reading-australian-novels-help-us-become-more-ethical-researchers/ 

  2. Thomson C. (30 July 2018) Research Ethics in Australia: A Story. Research Ethics Monthly. Retrieved from https://ahrecs.com/human-research-ethics/research-ethics-in-australia-a-story 

  3. “It is sometimes argued that since unethical research is not widespread, the present form of regulation constitutes an over-reaction to rare scandalous behaviour in the conduct of research. While it is true that over-regulation may be an obstacle to the pursuit of (ethically and scientifically) good research, it should be pointed out that ethical decision-making is complex, and because of this complexity individual researchers may not be best placed to decide about the ethical issues a project raises. Instead it may need a group of experts, both scientific and ethical, to make a good decision. Likewise, given the plurality of ethical views that are available, a committee decision can be representative in ways that an individual decision about the ethical legitimacy of a course of action cannot.” (European Commission. Directorate-General for Research, 2010, p. 11) and "Multiple judgments are possible, and ambiguity and uncertainty are part of the process. We advocate guidelines rather than a code of practice so that ethical research can remain flexible, be responsive to diverse contexts, and be adaptable to continually changing technologies. When one considers that ethical assessments are always operationalized via some sort of practice (method), and also contextualized institutionally and/or geographically, it becomes clearer that an adaptive, inductive approach can yield potentially more ethically legitimate outcomes than a simple adherence to a set of instantiated rules. The emphasis on a process approach highlights the researcher’s responsibility for making such judgments and decisions within specific contexts and, more narrowly, within a specific research project.” (Markham and Buchanan, 2012, p. 5)