
Systematic analysis of consequences, transparency, user agency, and safeguards are central to implementation of AI nudges



## Recommendations

1. Systematic analyses are needed that examine the ethics and behavioral consequences of designing affective systems to nudge human beings, prior to deployment.
2. The user should be empowered, through an explicit opt-in system and readily available, comprehensible information, to recognize different types of A/IS nudges, regardless of whether they seek to promote beneficial social manipulation or to enhance consumer acceptance of commercial goals. The user should be able to access and check the facts behind the nudges and then make a conscious decision to accept or reject a nudge. Nudging systems must be transparent, with a clear chain of accountability that includes human agents: data logging is required so users can know how, why, and by whom they were nudged.
3. A/IS nudging must not become coercive and should always have an opt-in system policy with explicit consent.
4. Additional protections against unwanted nudging must be put in place for vulnerable populations, such as children, or when informed consent cannot be obtained. Protections against unwanted nudging should be encouraged when nudges alter long-term behavior or when consent alone may not be a sufficient safeguard against coercion or exploitation.
5. Data gathered that could reveal an individual's or group's susceptibility to a nudge, or their emotional reaction to a nudge, should not be collected or distributed without opt-in consent, and should only be retained transparently, with access restrictions in compliance with the highest requirements of data privacy and law.

## Further Resources

- Overarching Principles: Respect for persons; Merit and Integrity
- Sources: IEEE
- Title: Systematic analysis of consequences, transparency, user agency, and safeguards are central to implementation of AI nudges