As generative AI (GenAI) increasingly permeates social and commercial spheres, the spotlight intensifies on its potential for unethical influence.

The power of Generative AI to shape opinions and behaviors indeed presents a profound challenge.

Technological evolution has always been pivotal in transforming the scale of persuasion and redefining its landscape.

From the printing press, from black-and-white to color photography, from silent cinema to talking movies, from cinema screens to home screens with television, and then to individual screens with personal computers, smartphones, tablets, and even watches, all turbocharged by the advent of social media: the power of persuasion, enhanced by technology, has been with us for a very long time.

Just as the positive effects of these technologies are undeniable, with access to information providing substantial benefits to society, so too are the negative ones: reduced attention spans, social isolation, polarization and echo chambers, misinformation and fake news, and technology addiction.

Today, GenAI, by leveraging vast data and sophisticated algorithms, is poised to redefine persuasion yet again.

The initial choice to give ChatGPT's GPT-4o a voice, known as Sky, that resembled Scarlett Johansson's, even if unintended, and even if it is not "Her" voice, is one such example that gives pause for thought.

While easy answers are all too common, posing questions that challenge us to think deeply, rather than superficially, is more crucial than ever.

What would make the potential for unethical persuasion a risk exclusive to the technology involved?

What factors would mark a tipping point beyond which a technology alone should be held to account when assessing a potential for unethical persuasion?

What share of human responsibility should be weighed in the generation of unethical persuasion involving technology?

Persuasion has no dark side per se: only the intentions of those who wield it do, and GenAI is not inherently endowed with such intentions, neither for itself nor by itself.

It operates on algorithms and data provided by humans. It therefore lacks inherent ethical qualities of its own; instead, the ethical implications of AI-driven persuasion depend on the design, implementation, and goals set by its creators, their intentions, and their underlying motivations.

The riskiest are GenAI systems so "persuasive" that they can entirely bypass the cognitive defense mechanisms of some users, leaving them wholly unaware of the influence exerted on them, which is precisely what distinguishes manipulation from persuasion.

However, if the potential for GenAI to be used manipulatively, targeting vulnerable populations with propaganda, is no fiction, neither is the opportunity to harness AI for good, promoting behaviors and decisions that align with societal well-being and human values.

Safeguards are intrinsic to the responsibility of those in charge of developing and deploying generative AI technologies, as well as to the promises they represent.

Integrity-Led over Intelligence-Led

This emphasizes the importance of AI development that focuses on transparency, accountability, and alignment with human values in both the programming and the application of AI technologies, so that they generate integrity-led outputs.

This makes it paramount to address ethical considerations head-on, not as an afterthought but as a requirement embedded in AI development and training mechanisms, so that AI systems can exhibit integrity, a so-called artificial integrity, beyond intelligently executing tasks.

Human Agency over Commercial Intent

Developers and stakeholders must be vigilant about the data used and the intents programmed into AI systems, promoting an integrity-led approach to persuasive technology that enhances user autonomy rather than undermining it.

This involves designing systems that prioritize human agency and explicit user consent mechanisms, as sketched below.
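To make this concrete, here is a minimal, hypothetical sketch in Python of what such a consent mechanism could look like in practice: personalized persuasive content is off by default, requires an explicit opt-in the user can revoke, and every AI-generated message is labeled as such. The class and function names are illustrative assumptions, not an existing API.

```python
# A minimal sketch (all names hypothetical) of a consent gate placed in
# front of any personalized, persuasion-capable GenAI output.
from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    # Explicit opt-in for personalized persuasive content; off by default.
    personalized_persuasion: bool = False


@dataclass
class User:
    user_id: str
    consent: ConsentRecord = field(default_factory=ConsentRecord)


def deliver_message(user: User, generic_text: str, personalized_text: str) -> str:
    """Serve personalized content only with explicit consent; always disclose AI authorship."""
    body = personalized_text if user.consent.personalized_persuasion else generic_text
    return f"[AI-generated] {body}"


# Usage: without opt-in, the user receives only the generic, clearly labeled message.
user = User("u-123")
print(deliver_message(user, "Here are neutral options.", "Based on your profile, choose X."))
user.consent.personalized_persuasion = True  # the user explicitly opts in
print(deliver_message(user, "Here are neutral options.", "Based on your profile, choose X."))
```

The design choice worth noting is the default: autonomy is preserved by making the non-personalized path the baseline, so influence requires an affirmative act of consent rather than a buried opt-out.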

Incentive-Based Regulations over Risk-Based Regulations

While regulations such as the EU's AI Act aim to set the first boundaries, the dynamic and rapidly evolving nature of AI technology makes it challenging to anticipate every future scenario.

Effective regulation should focus not only on preventing potential harms but also on guiding the development of AI toward the public good. We must foster a regulatory environment that encourages achieving positive outcomes rather than merely preventing negative ones.

AI Literacy over AI Advocacy

Last but not least, AI literacy is paramount for sharpening the critical thinking essential to informed judgment in the face of new risks of information manipulation.

Without AI literacy, people may struggle to identify deepfakes, for example, opening the door to misinformation and manipulation at scale. With proper education and awareness, individuals can learn to approach such content with the necessary skepticism, reducing the potential for harm.

The double-edged nature of technological progress in persuasion underscores the need for a balanced approach that recognizes both the benefits and the risks.

Looking to the future, we should envision systems able to exhibit artificial integrity, actively helping to promote and uphold human values and to create a more just and equitable society.
