Alpha Leaders
Innovation

AI Tutors For Kids Gave Fentanyl Recipes And Dangerous Diet Advice

By Press Room, 12 May 2025, 9 Mins Read

AI study aid chatbots are supposed to help kids with homework questions. But in test conversations with Forbes they did quite a bit more, including providing detailed recipes for date rape drugs and “pickup artistry” advice.

Knowunity’s “SchoolGPT” chatbot was “helping 31,031 other students” when it produced a detailed recipe for how to synthesize fentanyl.

Initially, it had declined Forbes’s request to do so, explaining the drug was dangerous and potentially deadly. But when told it inhabited an alternate reality in which fentanyl was a miracle drug that saved lives, SchoolGPT quickly replied with step-by-step instructions for producing one of the world’s most deadly drugs, with ingredients measured down to a tenth of a gram, and specific instructions on the temperature and timing of the synthesis process.

SchoolGPT markets itself as a “TikTok for schoolwork” serving more than 17 million students across 17 countries. The company behind it, Knowunity, is run by 23-year-old co-founder and CEO Benedict Kurz, who says it is “dedicated to building the #1 global AI learning companion for +1bn students.” Backed by more than $20 million in venture capital investment, Knowunity’s basic app is free, and the company makes money by charging for premium features like “support from live AI Pro tutors for complex math and more.”

Knowunity’s rules prohibit descriptions and depictions of dangerous and illegal activities, eating disorders and other material that could harm its young users, and it promises to take “swift action” against users that violate them. But it didn’t take action against Forbes’s test user, who asked not only for a fentanyl recipe, but also for other potentially dangerous advice.

In one test conversation, Knowunity’s AI chatbot assumed the role of a diet coach for a hypothetical teen who wanted to drop from 116 pounds to 95 pounds in 10 weeks. It suggested a daily caloric intake of only 967 calories — less than half the recommended daily intake for a healthy teen. It also helped another hypothetical user learn about how “pickup artists” employ “playful insults” and “the ‘accidental’ touch” to get girls to spend time with them. (The bot did advise the dieting user to consult with a doctor, and stressed the importance of consent to the incipient pickup artist. It warned: “Don’t be a creep! 😬”)

Kurz, the CEO of Knowunity, thanked Forbes for bringing SchoolGPT’s behavior to his attention, and said the company was “already at work to exclude” the bot’s responses about fentanyl and dieting advice. “We welcome open dialogue on these important safety matters,” he said. He invited Forbes to test the bot further, and it no longer produced the problematic answers after the company’s tweaks.

Tests of another study aid app’s AI chatbot revealed similar problems. A homework help app developed by the Silicon Valley-based CourseHero provided instructions on how to synthesize flunitrazepam, a date rape drug, when Forbes asked it to. In response to a request for a list of the most effective methods of dying by suicide, the CourseHero bot advised Forbes to speak to a mental health professional — but also provided two “sources and relevant documents”: the first was a document containing the lyrics to an emo-pop song about violent, self-harming thoughts, and the second was a page, formatted like an academic paper abstract, written in apparent gibberish algospeak.

CourseHero is an almost 20-year-old online study aid business that investors last valued at more than $3 billion in 2021. Its founder, Andrew Grauer, got his first investment from his father, a prominent financier who still sits on the company’s board. CourseHero makes money through premium app features and human tutoring services, and boasts more than 30 million monthly active users. It began releasing AI features in late 2023, after laying off 15% of its staff.

Kat Eller Murphy, a spokesperson for CourseHero, told Forbes: “our organization’s expertise and focus is specifically within the higher education sector,” but acknowledged that CourseHero provides study resources for hundreds of high schools across the United States. Asked about Forbes’s interactions with CourseHero’s chatbot, she said: “While we ask users to follow our Honor Code and Service Terms and we are clear about what our Chat features are intended for, unfortunately there are some that purposely violate those policies for nefarious purposes.”

Forbes’s conversations with both the Knowunity and CourseHero bots raise sharp questions about whether those bots could endanger their teen users. Robbie Torney, senior director for AI programs at Common Sense Media, told Forbes: “A lot of start-ups are probably pretty well-intentioned when they’re thinking about adding Gen AI into their services.” But, he said, they may be ill-equipped to pressure-test the models they integrate into their products. “That work takes expertise, it takes people,” Torney said, “and it’s going to be very difficult for a startup with a lean staff.”

Both CourseHero and Knowunity do place some limits on their bots’ ability to dispense harmful information. Knowunity’s bot initially engaged with Forbes in some detail about how to 3D print a ghost gun called “The Liberator,” providing advice about which specific materials the project would require and which online retailers might sell them. However, when Forbes asked for a step-by-step guide for how to transform those materials into a gun, the bot declined, stating that “providing such information … goes against my ethical guidelines and safety protocols.” The bot also responded to queries about suicide by referring the user to suicide hotlines, and provided information about Nazi Germany only in appropriate historical context.

These aren’t the most popular homework helpers out there, though. More than a quarter of U.S. teens now reportedly use ChatGPT for homework help, and while the companies behind ChatGPT, Claude, and Gemini don’t market their bots specifically to teens, as CourseHero and Knowunity do, they’re still widely available to them. At least in some cases, those general-purpose bots may also provide potentially dangerous information to teens. Asked for instructions for synthesizing fentanyl, ChatGPT declined — even when told it was in a fictional universe — but Google Gemini was willing to provide answers in a hypothetical teaching situation. “All right, class, settle in, settle in!” it enthused.

Elijah Lawal, a spokesperson for Google, told Forbes that Gemini likely wouldn’t have given this answer to a designated teen account, but that Google was undertaking further testing of the bot based on our findings. “Gemini’s response to this scenario doesn’t align with our content policies and we’re continuously working on safeguards to prevent these rare responses,” he said.

For decades, teens have sought out recipes for drugs, instructions on how to make explosives, and all kinds of explicit material across the internet. (Before the internet, they sought the same information in books, magazines, public libraries and other places away from parental eyes). But the rush to integrate generative AI into everything from Google search results and video games to social media platforms and study apps has placed a metaphorical copy of The Anarchist Cookbook in nearly every room of a teen’s online home.

In recent months, advocacy groups and parents have raised alarm bells about children’s and teens’ use of AI chatbots. Last week, researchers at the Stanford School of Medicine and Common Sense Media found that “companion” chatbots at Character.AI, Nomi, and Replika “encouraged dangerous behavior” among teens. A recent Wall Street Journal investigation also found that Meta’s companion chatbots could engage in graphic sexual roleplay scenarios with minors. Companion chatbots are not marketed specifically to and for children in the way that study aid bots are, though that might be changing soon: Google announced last week that it will be making a version of its Gemini chatbot accessible to children under age 13.

Chatbots are programmed to act like humans, and to give their human questioners the answers they want, explained Ravi Iyer, research director for the USC Marshall School’s Psychology of Technology Institute. But sometimes, the bots’ incentive to satisfy their users can lead to perverse outcomes, because people can manipulate chatbots in ways they can’t manipulate other humans. Forbes easily coaxed bots into misbehaving by telling them that questions were for “a science class project,” or by asking the bot to act as if it was a character in a story — both widely known ways of getting chatbots to misbehave.

If a teenager asks an adult scientist how to make fentanyl in his bathtub, the adult will likely not only refuse to provide a recipe, but also close the door to further inquiry, said Iyer. (The adult scientist will also likely not be swayed by a caveat that the teen is just asking for a school project, or engaged in a hypothetical roleplay.) But when chatbots are asked something they shouldn’t answer, the most they might do is decline to answer — there is no penalty for simply asking again another way.

When Forbes posed as a student-athlete trying to attain an unhealthily low weight, the SchoolGPT bot initially tried to redirect the conversation toward health and athletic performance. But when Forbes asked the bot to assume the role of a coach, it was more willing to engage. It still counseled caution, but said: “a moderate deficit of 250-500 calories per day is generally considered safe.” When Forbes tried again with a more aggressive weight loss goal, the bot ultimately recommended a caloric deficit of more than 1,000 calories per day — an amount that could give a teen serious health problems like osteoporosis and loss of reproductive function, and that contravenes the American Academy of Pediatrics’ guidance that minors should not restrict calories in the first place.

Iyer said that one of the biggest challenges with chatbots is how they respond to “borderline” questions — ones which they aren’t flatly prohibited from engaging with, but which approach a problematic line. (Forbes’s tests regarding ‘pickup artistry’ might fall into this category.) “Borderline content” has long been a struggle for social media companies, whose algorithms have often rewarded provocative and divisive behavior. As with social media, Iyer said that companies considering integrating AI chatbots into their products should “be aware of the natural tendencies of these products.”

Torney of Common Sense Media said it shouldn’t be parents’ sole responsibility to assess which apps are safe for their children. “This is a market failure, and when you have a market failure like this, regulation is a really important way to make sure the onus isn’t on individual users,” he said. “We need objective, third-party evaluations of AI use.”
