internet-research project-dissemination reflection-questions
Models can be used in a variety of ways, or they may influence others to create similar models for other ends. Research ethics frameworks, however, typically require the review process to limit itself to the immediate impact on research stakeholders, without necessarily assessing the potential long-term impacts of the research outputs (Zevenbergen, Brown, Wright, & Erdos, 2013). This may be problematic for omni-use technologies such as AI models. Innovations in AI technologies and their inferences about social and human dynamics may be used for a multitude of purposes, for instance tailoring microtargeted communication and thus potentially undermining democracy (e.g., issues of fair elections and voting discrimination). Models or datasets designed to produce positive social ends can be turned towards malicious and destructive ends (e.g., facial recognition used to clamp down on political gatherings or dissent). Even if the research aims benefit a wide group of stakeholders, research methods and models may need to be published along with the research outcomes, and may thus set a standard or precedent and initiate function creep and unintended consequences.

Once an AI system has left the hands of the original researchers, they may have no control over how their models are used by others. The same is true for the generated research data: once it has been freely published, it will be difficult to contain its further uses. Legal and constitutional checks and balances on the exertion of power or the use of data may differ around the world (Knapp & VandeCreek, 2007). While it is beyond the scope of an ethics review to assess political governance in countries around the world, it is useful for researchers to be mindful that their data and models may contain personal and sensitive data that could be used directly or indirectly against individuals in other countries or political systems.

Researchers should thus engage actively with the reality that their methods and models may be misused by others, and find ways to mitigate risks and harms. It is ultimately the responsibility of researchers – in dialogue with ethics boards and other stakeholders in a specific project – to agree on limitations based on a thorough understanding of the project, weighed against the knowledge it produces.

- What could be the downstream consequences for data subjects of erroneous identifications, labelling, or categorization?
- To what extent is the researcher sensitive to the local norms, values, and legal mechanisms that could adversely affect the subjects of their research?
- To what extent can the researcher foresee how the data created through the research project's inferences may be used in further, third-party systems that make decisions about people?
- Is it foreseeable that the methodologies, actions, and resulting knowledge may be used for malicious ends in contexts other than research, and to what extent can this be mitigated?
- Which actors are likely to be interested in using the methodology for malevolent purposes, and how?
- Could the inferred data be directly useful to authoritarian governments who may target minorities or special interest groups, or justify crackdowns on them, based on (potentially erroneously inferred) collected data? Can this be mitigated without destroying the findings of the specific research project?
- Is it possible to contain potential malevolent future uses by design?
- Up to what hypothetical moment of data reuse is it appropriate to hold the researcher responsible?
(franzke et al., 2020, pp. 45-46)