Questions for guideline developers
How should the guideline be licensed and structured to support your targeted uses, for example via a Creative Commons licence? Is it clear to readers who the authors and organisations disseminating the guidelines are, when the guideline was produced, and where it may be obtained?
Meta-data
This document is released under a Creative Commons Attribution (CC-By) license, except where external material is included and highlighted as under another license.
Simon Knight, authors, (2023). Ethical Guidelines for Research; versioned for research involving Artificial Intelligence, in education and learning contexts.
The materials are intended to support disaggregation and remixing in the following ways:
- The components of the guideline are presented both compiled in a single document and as separate elements, so that each may be used independently
- The material may be added to or adjusted, e.g., to reflect different disciplinary foci or ethical concepts of relevance to a particular context. For example, a subset of the material focuses directly on applications of AI in learning, and we anticipate this may be added to.
Audience
Research - the systematic creation or synthesis of new knowledge - permeates society.
This guide is thus written to be broadly accessible. If, as researchers, we want people to engage with and understand the value of research, we must be able to publicly account for that value through the impact of the work in society, grounded in rigorous, responsible research.
This guideline is targeted at research contexts, recognising that research occurs in many settings and may be led by or involve many different stakeholders. For example, research is conducted by:
- Universities, higher education institutions, and research institutes formally constituted to undertake research for the public benefit
- Other types of organisations and institutions undertaking research directed at their commercial or non-commercial goals
Whatever the context of research, it should be conducted responsibly with regard to ethical considerations. While private organisations may have different needs with respect to the commercial sensitivity of research or its applications, and corresponding intellectual property, where they undertake research - for example, development of products where outcomes are not known in advance - it is important that this is done ethically.
Use
The lines between different stakeholders in research are often blurred, and thus it is important that guides like this are accessible to a broad audience, who may fulfil multiple roles. This guide takes the view that ethics guidelines should support joint navigation of ethical issues; that is, they should help participants who wish to investigate and understand ethical issues just as they support researchers. This reflects the fluid lines between technology-users, participants, researchers, and providers of various kinds (from policy makers to technology companies).
The guide is structured around discussion of key ethics principles. Each principle is connected to key challenges in addressing it, strategies you may adopt in navigating and negotiating the ethical issues, and cases that instantiate the challenges as concrete examples.
A particular focus of this guide is AI in the context of learning and education, and we have thus tried to indicate considerations throughout that are specific to this context, compared to those that are more general in nature.
Approach to developing this guideline
to add here
tbc re: draft paper
We have sought to draw on existing material in order to connect to and ground this guideline in practices that have emerged through evidence-informed processes.
The key concerns of the documents drawn on vary, although there are commonalities in the ethical principles they highlight. We have sought to group these principles, the challenges to which they are applied, and key strategies, to support navigation. In doing this, some distinctive features of AI are highlighted, in order to help address the question of why existing guidance is not enough; these features are framed as assumptions, each paired with a recommendation:
Assumption 1: There are shifting bounds between participants, researchers, and organisations (including commercial organisations)
Recommendation: applying the lens of research ethics is crucial for ethical practice. More organisations should develop approaches informed by research ethics, and be regulated in doing so.
Assumption 2: AI work has distinctive features in being either naturalistic (using data occurring in non-research environments, or directly intervening in those environments) or applied (algorithms developed in research environments, but with clear intent to be adopted and applied in naturalistic contexts)
Recommendation: it is important that this work undergo independent oversight that is risk proportionate, and grounded in evidence (e.g., regarding the nature of risks/benefits, and the rationale or evidence for claims made regarding the possible uses and benefits of any system)
Assumption 3: AI work inevitably involves cross-context activity and understanding
Recommendation: multidisciplinary and transdisciplinary teams, and evidence use, are imperative in understanding not only technical algorithm development but also the identification of challenges to which algorithms are applied, their practical use in those contexts, and ongoing evaluation of impact. This requires collaboration that cuts across stakeholder expertise to include researchers and those stakeholders with expertise in the contexts into which tools will be deployed (a transdisciplinary approach). This collaboration must be ongoing:
1. because evidence is tentative and evolving, and thus one-shot evaluations are inadequate; and
2. because values and practices change across time, setting, and use, and thus must not be treated as monolithic and fixed.