OpenAI CEO Sam Altman says it isn't fear of "killer robots," or any other Frankenstein-style creation AI could power, that keeps him up at night. Instead, it's the technology's ability to derail society insidiously and subtly, from the inside. 

Without adequate international regulation, the technology could quietly damage society if "very subtle societal misalignments" go unaddressed, Altman said while speaking virtually at the World Governments Summit in Dubai on Tuesday. In that scenario, the tech billionaire stressed, "through no particular ill intention, things just go horribly wrong." 

AI can help, and already is helping, people work smarter and faster. It can also make life easier through personalized education, medical advice, and financial literacy training. But as the new technology continues to infiltrate, well, everything, many are concerned that it is growing largely unchecked by regulators, and about what the consequences might be for areas like elections, media misinformation, and global relations.

To his credit, Altman has consistently and loudly voiced such concerns, even though his company unleashed the disruptive chatbot known as ChatGPT onto the world.

“Imagine a world where everyone gets a great personal tutor, great personalized medical advice,” Altman told the crowd in Dubai. People can now use AI tools, such as software that analyzes medical data, stores patient records in the cloud, and designs classes and lectures, “to discover all sorts of new science, cure diseases and heal the environment,” he said. 

Those are some of the ways AI can help people on a personal level, but the global impact is a much bigger picture. AI's relevance lies in its ability to reflect the times, and our times right now are clouded by disinformation-plagued elections, media misinformation, and military operations, all of which AI offers use cases for, too. 

This year, elections will be held in more than 50 countries, with polls open to more than half the planet's population. In a statement last month, OpenAI wrote that its AI tools should be used “safely and responsibly, and elections are no different.” Abusive content, like “misleading ‘deepfakes’” (fake, AI-generated photos and videos) and “chatbots impersonating candidates,” is among the issues the company hopes to anticipate and prevent. 

Altman didn’t specify how many people would be working on election issues, according to Axios, but he rejected the idea that a large election team is what it takes to avoid these pitfalls. Axios notes that Altman’s company has far fewer people dedicated to election security than other tech companies, like Meta or TikTok. But OpenAI has announced it is working with the National Association of Secretaries of State, the country’s oldest nonpartisan organization for public officials, and will direct users to authoritative websites for U.S. voting information in response to election questions. 

The waters are muddy for media companies as well: At the end of last year, The New York Times Company sued OpenAI for copyright infringement, while other outlets, including Axel Springer and the Associated Press, have cut deals with AI companies that pay newsrooms for the right to use their content to train language-based AI models. With more media content feeding AI training, the potential to spread misinformation is also a concern. 

Last month, OpenAI quietly removed the fine print that had prohibited the technology’s military use. The move follows the company’s announcement that it will work with the U.S. Department of Defense on AI tools, made in an interview with Anna Makanju, the company’s vice president of global affairs, as reported by Bloomberg. 

Previously, OpenAI’s policy prohibited activities posing a “high risk of physical harm,” including weapons development and “military and warfare.” The company’s updated policies, devoid of any mention of military and warfare guidelines, suggest military use is now acceptable. An OpenAI spokesperson told CNBC that “our policy does not allow our tools to be used to harm people, develop weapons,” or for communications surveillance, but that there are “national security cases that align with our mission.” 

Activities that may significantly impair the “safety, wellbeing or rights of others” sit clearly on OpenAI’s list of don’ts, but the words are little more than a warning as it becomes clear that regulating AI will be an enormous challenge, and one that few are rising to meet. 

Last year, Altman testified at a Senate Judiciary subcommittee hearing on the oversight of AI, asking for governmental collaboration to establish safety requirements flexible enough to adapt to new technical developments. He has been vocal about the importance of regulating AI to keep the software’s power out of the wrong hands, like computer scammers, online abusers, bullies, and misinformation campaigns. But common ground is hard to find. Even as he supports more regulation, Altman has taken issue with provisions of the European Union’s AI Act, the world’s first comprehensive AI law, over requirements like data and training transparency. Meanwhile, the White House has outlined a Blueprint for an AI Bill of Rights, which identifies algorithmic discrimination, data privacy, transparency, and human alternatives as key areas in need of safeguards.
