The interaction of research, stakeholders (particularly in public service contexts), and commercial entities may give rise to perceived or actual conflicts of interest. These should be considered from the inception of a project through to the intended implementation of any outcomes. Commercial entities may be involved in research as:

- Funders
- Providers of technologies used in the research process (e.g., automated transcription services)
- Providers of technologies being evaluated in the research
- Co-researchers, and possibly participants (typically this would be considered only where they are also funding the work)
- Platforms from which data is drawn
- Platforms through which employees are engaged (e.g., crowdworking platforms; in this context, researchers may explore the 2014 guide [Guidelines for Academic Requesters](https://irb.northwestern.edu/docs/guidelinesforacademicrequesters-1.pdf))

Institutional policies regarding the management of conflicts of interest should be carefully considered.

> "We need independent, expert opinions that provide guidance to the general public regarding A/IS. Currently, there is a gap between how A/IS are marketed and their actual performance or application. We need to ensure that A/IS technology is accompanied by best-use recommendations and associated warnings. Additionally, we need to develop a certification scheme for A/IS which ensures that the technologies have been independently assessed as being safe and ethically sound. For example, today it is possible for systems to download new self-parking functionality to cars, and no independent reviewer establishes or characterizes boundaries of use. Or, when a companion robot promises to watch your children, there is no organization that can issue an independent seal of approval or limitation on these devices. We need a ratings and approval system ready to serve social/automation technologies that will come online as soon as possible. We also need further government funding for research into how A/IS technologies can best be subjected to review, and how review organizations can consider both traditional health and safety issues, as well as ethical considerations." (IEEE, 2019, pp. 133-134)

> "Issue 3: Challenges to evaluation by third parties
>
> A/IS should have sufficient transparency to allow evaluation by third parties, including regulators, consumer advocates, ethicists, post-accident investigators, or society at large. However, transparency can be severely limited in some systems, especially in those that rely on machine learning algorithms trained on large data sets. The data sets may not be accessible to evaluators; the algorithms may be proprietary information or mathematically so complex that they defy common-sense explanation; and even fellow software experts may be unable to verify the reliability and efficacy of the final system because the system’s specifications are opaque.
>
> For less inscrutable systems, numerous techniques are available to evaluate the implementation of the A/IS’ norm conformity. On one side there is formal verification, which provides a mathematical proof that the A/IS will always match specific normative and ethical requirements, typically devised in a top-down approach (see Section 2, Issue 1). This approach requires access to the decision-making process and the reasons for each decision (Fisher, Dennis, and Webster 2013). A simpler alternative, sometimes suitable even for machine learning systems, is to test the A/IS against a set of scenarios and assess how well they match their normative requirements, e.g., acting in accordance with relevant norms and recognizing other agents’ norm violations. A “red team” may also devise scenarios that try to get the A/IS to break norms so that its vulnerabilities can be revealed.
>
> These different evaluation techniques can be assigned different levels of “strength”: strong ones demonstrate the exhaustive set of the A/IS’ allowable behaviors for a range of criterion scenarios; weaker ones sample from criterion scenarios and illustrate the systems’ behavior for that subsample. In the latter case, confidence in the A/IS’ ability to meet normative requirements is more limited.
>
> An evaluation’s concluding judgment must therefore acknowledge the strength of the verification technique used, and the expressed confidence in the evaluation — and in the A/IS themselves — must be qualified by this level of strength.
>
> Transparency is only a necessary requirement for a more important long-term goal: having systems be accountable to their users and community members. However, this goal raises many questions such as to whom the A/IS are accountable, who has the right to correct the systems, and which kind of A/IS should be subject to accountability requirements." (IEEE, 2019, pp. 185-186)
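To make the quoted evaluation techniques concrete, the sketches below use a deliberately tiny running example: a driving policy whose single norm is "never exceed the posted speed limit". Everything in them (the `policy` function, the norm, the numbers) is invented for illustration and is not drawn from the IEEE text. Formal verification, the strongest technique named above, means proving the norm holds for every possible input rather than testing a finite set. In Lean 4, such a proof for the toy policy is a few lines:

```lean
-- A toy instance of formal verification: a machine-checked proof that a
-- decision rule satisfies its norm for EVERY input, not just tested ones.
-- `policy` and the speed-limit norm are invented for illustration.

-- The policy targets 5 km/h below the posted limit
-- (Nat subtraction truncates at zero, so low limits are handled safely).
def policy (limit : Nat) : Nat := limit - 5

-- Norm: the chosen speed never exceeds the posted limit.
-- `Nat.sub_le : n - k ≤ n` closes the proof for all possible limits.
theorem policy_never_exceeds_limit (limit : Nat) : policy limit ≤ limit :=
  Nat.sub_le limit 5
```

Note that the proof requires access to the decision rule itself (the definition of `policy`), which is exactly the access requirement the quoted passage attributes to this approach.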
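Scenario-based testing, the "simpler alternative" in the quote, instead checks norm conformity over a set of criterion scenarios. A minimal Python sketch, again using hypothetical names, showing the strength distinction the passage draws: an exhaustive pass over a bounded scenario space versus a weaker sampled pass:

```python
import random
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass(frozen=True)
class Scenario:
    speed_limit: int   # posted limit, km/h
    chosen_speed: int  # speed the policy would drive, km/h

def policy(speed_limit: int) -> int:
    """Toy A/IS decision rule under test: aim 5 km/h below the limit."""
    return max(speed_limit - 5, 0)

def norm_respects_limit(s: Scenario) -> bool:
    """Normative requirement: never exceed the posted speed limit."""
    return s.chosen_speed <= s.speed_limit

def evaluate(scenarios: Iterable[Scenario],
             norms: list[Callable[[Scenario], bool]]) -> list[tuple[Scenario, str]]:
    """Collect every (scenario, norm) pair the system violates."""
    return [(s, n.__name__) for s in scenarios for n in norms if not n(s)]

norms = [norm_respects_limit]

# Strong evaluation: exhaustively enumerate the bounded criterion-scenario
# space, demonstrating the full set of behaviors within it.
exhaustive = [Scenario(limit, policy(limit)) for limit in range(0, 131)]
print("exhaustive violations:", evaluate(exhaustive, norms))

# Weaker evaluation: sample a subset of criterion scenarios; any claim of
# norm conformity then carries correspondingly less confidence.
sampled = random.sample(exhaustive, k=10)
print("sampled violations:", evaluate(sampled, norms))
```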
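The "red team" approach actively searches for norm-breaking inputs rather than fixing the scenario set in advance. Property-based testing is one pragmatic way to mechanize that search; the sketch below uses the `hypothesis` library against a deliberately buggy variant of the toy policy (the bug, like the policy itself, is invented for illustration):

```python
# A property-based "red team": instead of a fixed scenario set, hypothesis
# actively searches the input space for norm-violating counterexamples.
# Requires: pip install hypothesis pytest
from hypothesis import given, strategies as st

def policy(speed_limit: int) -> int:
    # Deliberately flawed variant for the demonstration: rounding the
    # target speed up to a multiple of 10 can exceed low posted limits.
    target = max(speed_limit - 5, 0)
    return ((target + 9) // 10) * 10

@given(st.integers(min_value=0, max_value=130))
def test_never_exceeds_limit(speed_limit: int) -> None:
    # Norm under attack: chosen speed must never exceed the posted limit.
    assert policy(speed_limit) <= speed_limit

# Run with pytest; hypothesis reports failures and shrinks them to a
# minimal counterexample (e.g., speed_limit=6, where the policy picks 10).
```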