Good AI is better than bad AI, but bad AI can be pretty bad. Without impactful regulation – which will not happen in 2024 – bad AI will get much worse. So here’s a short list of how bad AI can – and will – get in 2024. The 2024 US presidential election will accelerate AI’s badness in ways we have yet to comprehend. Those who believe 2024 will be only incrementally bad are naive. Badness will reach new heights, energized by the high stakes of a national election. Cheaters will also rejoice in 2024. Cyber ninjas too.
Let’s look at just three ways AI will be bad in 2024:
1. Misinformation Will Explode
AI is the best misinformation machine the world has ever seen – and it’s getting better. The automation of increasingly complex and effective misinformation is proceeding on schedule for those who want to weaponize the technology. “Fake news” is just the tip of the iceberg. Much more sophisticated activities are well underway – and have been for years – that undermine “truth” and “objectivity” in every possible way. Propaganda is now digital. Can you imagine how weird – and effective – this can get? Listen to this one, described by Pranshu Verma:
“One AI-generated article recounted a made-up story about Benjamin Netanyahu’s psychiatrist, a NewsGuard investigation found, alleging that he had died and left behind a note suggesting the involvement of the Israeli prime minister. The psychiatrist appears to have been fictitious, but the claim was featured on an Iranian TV show, and it was recirculated on media sites in Arabic, English and Indonesian, and spread by users on TikTok, Reddit and Instagram.”
How bad can it get?
“Well-dressed AI-generated news anchors are spewing pro-Chinese propaganda, amplified by bot networks sympathetic to Beijing. In Slovakia, politicians up for election found their voices had been cloned to say controversial things they never uttered, days before voters went to the polls. A growing number of websites, with generic names such as iBusiness Day or Ireland Top News, are delivering fake news made to look genuine, in dozens of languages from Arabic to Thai.”
Without significant legislation, this kind of activity will grow – and the likelihood of significant legislation is near-zero. No, let’s just admit that it’s zero in 2024.
2. Cheaters Rejoice (But What’s “Cheating”?)
Cheaters love ChatGPT, Bard and all the rest. It makes their lives easier. If you haven’t already tried to “cheat” with large language models, you’re behind the curve. Students started cheating in 2022, and ChatGPT has made their lives easier ever since. But here’s a twist:
“In addition to being most likely to use AI tools, business majors are least likely to say that using AI tools to complete assignments or exams is cheating or plagiarism. Just 51% of business majors consider using AI tools to complete assignments or exams as cheating or plagiarism, compared to 57% of humanities majors and 55% of STEM majors.”
This means that using tools like ChatGPT is becoming acceptable.
Marketing professionals can use them to develop press releases and marketing campaigns. Professors can use ChatGPT to write syllabi and case studies. Project managers can enlist it for help.
But the real question is about “cheating” itself. What is it?
An important way to think about the role that GenAI can play is to develop a “task participation continuum.” “Automation” is now commonplace, “assistance” has evolved and “augmentation” is well on its way. Full partnerships are well within reach.
The relationship between task complexity and its “assignment” is still developing. The questions “How complex is the task?” and “Who does the work?” are good ones today, but over time the relationship will change. What’s considered “human” – and ethical – today will change as GenAI develops. Task complexity will also be redefined as the number of GenAI use cases increases. Watch this one carefully: “cheating” will likely be redefined as “consulting” or “partnering” in just a few years. The distinction between cheating and productivity will blur and then disappear altogether within three years.
3. Cyberattacks Will Multiply
Databases, processes, content – you name it – are the targets of cyberattacks. AI will expand the number of targets and the effectiveness of the attacks. Morgan Stanley lists some threats:
“AI allows cybercriminals to automate many of the processes used in social-engineering attacks, as well as create more personalized, sophisticated and effective messaging to fool unsuspecting victims.
“Cybercriminals exploit AI to improve the algorithms they use for deciphering passwords. The enhanced algorithms provide quicker and more accurate password guessing, which allows hackers to become more efficient and profitable.
“AI’s ability to easily manipulate visual or audio content and make it seem legitimate … the doctored content can then be broadly distributed online in seconds – including on influential social media platforms.
“Hackers ‘poison’ or alter the training data used by an AI algorithm to influence the decisions it ultimately makes. In short, the algorithm is being fed with deceptive information, and bad input leads to bad output.”
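That last threat – data poisoning – is easy to see in miniature. Here is a minimal, purely illustrative sketch (my own toy example, not taken from Morgan Stanley or any real attack) that flips a fraction of training labels in scikit-learn’s built-in digits dataset and shows how corrupted input degrades a simple model’s accuracy – bad input, bad output:

```python
# Illustrative sketch only: "poisoning" training data by flipping a
# fraction of labels, then comparing model accuracy before and after.
# Exact numbers will vary; the point is the drop in accuracy.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

def train_and_score(labels):
    # Train on the (possibly poisoned) labels, score on clean test data.
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train, labels)
    return model.score(X_test, y_test)

# Clean baseline.
print("clean accuracy:   ", round(train_and_score(y_train), 3))

# "Poison" 20% of the training labels by reassigning them at random.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[idx] = rng.integers(0, 10, size=len(idx))
print("poisoned accuracy:", round(train_and_score(poisoned), 3))
```

Real attacks are subtler – targeted, hard to detect, aimed at specific decisions rather than overall accuracy – but the principle is the same: corrupt the data and you corrupt the model.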
While AI enables cyberattacks, it can also be used to thwart attacks – which is some good news.
Bad to Worse, Unless …
AI, machine learning and generative AI can hurt us. But they can also obviously help us. While balance makes intuitive sense, it should not be the desired outcome. The good must outweigh the bad. How we get there is another question altogether. Part of the answer is regulation, even though most analysts do not believe the US will develop and enforce meaningful regulation. In fact, meaningful regulation may well come from US states before the federal government acts.
Much regulatory action depends upon how quickly the power of generative AI is revealed – and feared. We know, for example, that orders of magnitude in performance separate ChatGPT-3.5 from ChatGPT-4 Plus. What tasks and processes will ChatGPT-5 or 6 enable? As more and more industries, functions and processes yield to LLMs, there will be additional pressure to “regulate” at some level. On the other hand, if there’s sufficient coverage of the limitations of GenAI, and a few high-profile regulations quell the major fears, then broad regulatory efforts will slow.
Decisions around regulation will not be completely anchored in technology capabilities; social, political and economic concerns about the impact of regulation will exert as much if not more influence upon whatever regulatory scenarios emerge. This changes the game, the players and the rules. Draft and proposed regulations will have to pass through several such filters, which means that meaningful legislation will be slow to emerge. It’s also likely that the US will lose the regulatory race to other countries that are already outpacing its efforts.
While predictions are impossible to make in areas as complicated as the regulation of AI, machine learning and GenAI, it’s safe to say that there will be a lag between regulatory policy and the growing power of the technology. This means that regulations will lag applications for years and perhaps even permanently. This happens when technology moves as fast as AI is moving – and is likely to move in the future. The old ways of treading lightly in the regulatory world will not work for GenAI. This technology represents a sea change. Treating this technology as just another incremental advance is a huge mistake. That warning aside, all of this assumes that there’s a real desire to regulate the technology. While there may be an honest desire to regulate the technology in several countries and a few US states, it remains to be seen if the US is capable of hitting a fast-moving technological target.