Sources

IEEE

Tags: AI, ethics-guideline, research-ethics

"Autonomous and intelligent technical systems are specifically designed to reduce the necessity for human intervention in our day-to-day lives. In so doing, these new systems are also raising concerns about their impact on individuals and societies. Current discussions include advocacy for a positive impact, such as optimization of processes and resource usage, more informed planning and decisions, and recognition of useful patterns in big data. Discussions also include warnings about potential harm to privacy, discrimination, loss of skills, adverse economic impacts, risks to security of critical infrastructure, and possible negative long-term effects on societal well-being.Because of their nature, the full benefit of these technologies will be attained only if they are aligned with society’s defined values and ethical principles." p.7

Challenge Instances

- AI gives rise to potential harms derived from indirect, long-range, and dual-use effects
- Respect for human-human relationships, at individual and collective levels
- AI has the potential to affect society beyond the participants directly involved, and it may not be clear what these impacts will be
- Data that can be used to support learning may also be used for social profiling of learners
- Inequities in access to AI may increase, rather than tackle, inequality
- Opportunities to engage ethically with AI are entwined with learning opportunities in ways that may lead to inequities in benefits and access
- Research involving learning may seek to draw on secondary analysis of learner data
- Data collection and/or analysis often involves methods that challenge individually based participation models
- It is not always clear how our data is being used, or how we could find out
- It is not always clear how we can protect our identities to assure privacy and identity verification
- Generic T&C statements provide limited control to individuals over their own data
- Non-research platforms such as smart toys may collect data about children with little regulation
- Implementation of AI may be driven by commercial, not values-based, aims
- How do we foster AI that contributes to sustainability in the context of short-term growth priorities?
- There may be a lack of accountability for centering stakeholder experience and agency in the design of AI driven by commercial interests
- Work involving commercial actors may be driven by commercial interests over scientific rigour, affecting both the conduct of evaluation and the validity of claims made regarding system efficacy
- Potential harms of AI, or opportunities to maximise benefits, may not be adequately addressed by teams without sufficient understanding of sites of use and their distinctive ethical dimensions
- It may be perceived that as long as legal compliance is met, responsibility for the impacts of AI has been addressed
- There may be gaps in the adequacy of institutional ethics review committees to provide oversight of AI research
- Agency may be stymied in the context of black-box models
- How do we ensure protection of data during humanitarian emergencies?
- Defining desired outcomes, including general wellbeing, requires stakeholder involvement
- Norms, aims, and practices are diverse and may vary by location, change over time, and come into conflict
- Assessing the impact of AI requires sensitivity to culturally situated human social interaction
- How can AI model norms to govern its actions?
- How do we represent cross-cultural differences in communication through AI systems?
- How do we embed values throughout AI workflows?
- How do we provide consistent oversight of AI systems to ensure they are accountable to end-users for any conclusions?
- Longstanding concerns regarding 'deception' should be applied to the context of affective AI systems
- AI nudges may be deployed inappropriately or fail to achieve their target outcome, in ways that are challenging for nudge-receivers to address
- Management may experience reduced autonomy through automation that limits creative, affective, and empathetic concerns
- It is not clear what the impact of AI use will be on psychological and emotional wellbeing
- Access to increasing amounts of data about each other may affect our interactions in ways that change human autonomy
- The intended uses, assumptions, and limitations of tools must be clear to all users, including via technical documentation
- Ensuring safe use of AI involves consideration of both social and technical issues
- Considerations in assessing trustworthy AI - Minimising and reporting negative impacts
- AI poses risks to the right to truthful information
- AI poses threats to labour
- Should affective AI nudge users for personal or societal benefit?
- How might AI deployed in care settings to foster intimate relationships affect relationships among humans?
Rights
Strategies

- Tools for individuals to create custom machine-readable dynamic terms and conditions that respect their preferences for data collection and use
- AI assistants should be created to help users understand and manage how their data is being used
- Trusted identity verification services to validate and protect identity
- Educational data should be classified as sensitive and held in 'escrow', not available for commercial purposes
- Ethics must be embedded in STEM education and professional accreditation
- Cross-disciplinary work should be incentivised
- Community norms should be identified, and expertise from intercultural information ethics practitioners embedded in ethics committees
- Identify priorities and develop standards for the governance of AI research
- Corporations should implement "ethics filters" throughout the development process
- Roles regarding ethics should have leadership capacity, to foster an ethical culture across the organisation
- Company culture, values, codes, and power structures should enable speaking to ethical concerns
- Establish corporate ethics review boards
- Considerations in assessing trustworthy AI - Stakeholder participation
- Considerations in assessing trustworthy AI - Minimising and reporting negative impacts
- Independent standards body for ethical criteria
- Questions to consider in key stages of AI and machine learning based research, regarding transparency and explainability
- Explicitly consider how AI may foster and hamper SDG progress
- Wellbeing measures and promotion should be part of AI evaluation
- Sustainability should be part of AI impact design and evaluation
- Design recommendations for affective systems
- Affective systems should be configurable to cultural context
- Monitor the impact of affective AI in care on human-human relationships and implement safeguards
- Systematic analysis of consequences, transparency, user agency, and safeguards is central to the implementation of AI nudges
- Consider where AI mediation is, and is not, appropriate with respect to human relationships and autonomy
- Legislate to prevent the spread of misinformation and hate speech
- Foster equitable benefits of AI through global standards, regional investment, and justice with respect to the burdens and benefits of AI development
- Training in skills for adaptability to rapid technological change, informed by improved data regarding shifts in labour patterns
- Educate learners in formal education, the public, professionals, and policy makers for designing, and working alongside, AI to advance the SDGs
- Principle of transparency - key considerations
- Principle of safety - key considerations
- Consider developing strategies for the specific context of humanitarian action
- Understand the impact of A/IS on society
- How do we navigate individual choice of technology engagement and school- or system-wide procurement?
- Considerations in assessing trustworthy AI - Respect for privacy and data protection
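The first strategy above, machine-readable dynamic terms and conditions, can be sketched concretely. This is a minimal illustration rather than any existing standard: the record shape, the field names (`allow`, `deny`, `expires`), and the `permits` helper are all hypothetical, showing only how an individual's data-use preferences might be expressed so that software can check a proposed purpose against them before processing.

```python
# Hypothetical sketch of a machine-readable data-use preference record.
# All field names are illustrative and not drawn from any standard.
preferences = {
    "subject": "learner-123",          # pseudonymous identifier
    "allow": ["learning-analytics"],   # purposes the individual consents to
    "deny": ["commercial-profiling", "advertising"],
    "expires": "2026-01-01",           # consent can be time-limited
}

def permits(prefs: dict, purpose: str) -> bool:
    """Return True only if the purpose is explicitly allowed and not denied."""
    return purpose in prefs["allow"] and purpose not in prefs["deny"]

print(permits(preferences, "learning-analytics"))    # True
print(permits(preferences, "commercial-profiling"))  # False
```

The default-deny design (a purpose must be explicitly allowed) reflects the list's concern that generic T&C statements give individuals too little control over their own data.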