healthcarereimagined

Envisioning healthcare for the 21st century

Will AI make us crazy? – Bulletin of the Atomic Scientists

Posted by timmreardon on 01/04/2026
Posted in: Uncategorized.

By Dawn Stover | September 11, 2023

Critics of artificial intelligence, and even some of its biggest fans, have recently issued urgent warnings that a malevolently misaligned AI system could overpower and destroy humanity. But that isn’t what keeps Jaron Lanier, the “godfather of virtual reality,” up at night.

In a March interview with The Guardian, Lanier said that the real danger of artificial intelligence is that humans will “use our technology to become mutually unintelligible.” Lacking the understanding and self-interest necessary for survival, humans will “die through insanity, essentially,” Lanier warned (Hattenstone 2023).

Social media and excessive screen time are already being blamed for an epidemic of anxiety, depression, suicide, and mental illness among America’s youth. Chatbots and other AI tools and applications are expected to take online engagement to even greater levels.

But it isn’t just young people whose mental health may be threatened by chatbots. Adults too are increasingly relying on artificial intelligence for help with a wide range of daily tasks and social interactions, even though experts—including AI creators—have warned that chatbots are prone not only to errors but also to “hallucinations.” In other words, chatbots make stuff up. That makes it difficult for their human users to tell fact from fiction.

While researchers, reporters, and policy makers are focusing a tremendous amount of attention on AI safety and ethics, there has been relatively little examination of—or hand-wringing over—the ways in which an increasing reliance on chatbots may come at the expense of humans using their own mental faculties and creativity.

To the extent that mental health experts are interested in AI, it’s mostly as a tool for identifying and treating mental health issues. Few in the healthcare or technology industries—Lanier being a notable exception—are thinking about whether chatbots could drive humans crazy.

A mental health crisis

Mental illness has been rising in the United States for at least a generation.

A 2021 survey by the Substance Abuse and Mental Health Services Administration found that 5.5 percent of adults aged 18 or older—more than 14 million people—had serious mental illness in the past year (SAMHSA 2022). Among young adults aged 18 to 25, the rate was even higher: 11.4 percent.

Major depressive episodes are now common among adolescents aged 12 to 17: more than 20 percent had one in 2021 (SAMHSA 2022).

According to the Centers for Disease Control and Prevention, suicide rates increased by about 36 percent between 2000 and 2021 (CDC 2023). More than 48,000 Americans took their own lives in 2021, or about one suicide every 11 minutes. “The number of people who think about or attempt suicide is even higher,” the CDC reports. “In 2021, an estimated 12.3 million American adults seriously thought about suicide, 3.5 million planned a suicide attempt, and 1.7 million attempted suicide.”

Suicide is the 11th leading cause of death in the United States for people of all ages. For those aged 10 to 34, it is the second leading cause of death (McPhillips 2023).

Emergency room visits for young people in mental distress have soared, and in 2019 the American Academy of Pediatrics reported that “mental health disorders have surpassed physical conditions as the most common reasons children have impairments and limitations” (Green et al. 2019).

Many experts have pointed to smartphones and online life as key factors in mental illness, particularly among young people. In May, the US Surgeon General issued a 19-page advisory warning that “while social media may have benefits for some children and adolescents, there are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents” (Surgeon General 2023).

A study of adolescents aged 12 to 15 found that those who spent more than three hours per day on social media faced “double the risk of experiencing poor mental health outcomes including symptoms of depression and anxiety.” Most adolescents report using social media, and at least a third say they do so “almost constantly” (Surgeon General 2023).

Although the Surgeon General did not mention chatbots, tools based on generative artificial intelligence are already being used on social-media platforms. In a recent letter published in the journal Nature, David Greenfield of the Center for Internet & Technology Addiction and Shivan Bhavnani of the Global Institute of Mental & Brain Health Investment noted that these AI tools “stand to boost learning through gamification and highlighting personalized content, for example. But they could also compound the negative effects of social media on mental health in susceptible individuals. User guidelines and regulations must factor in these strong negative risks” (Greenfield and Bhavnani 2023).

Chatbots can learn a user’s interests and emotional states, wrote Greenfield and Bhavnani, “which could enable social media to target vulnerable users through pseudo-personalization and by mimicking real-time behaviour.” For example, a chatbot could recommend a video featuring avatars of trusted friends and family endorsing an unhealthy diet, which could put the user at risk of poor nutrition or an eating disorder. “Such potent personalized content risks making generative-AI-based social media particularly addictive, leading to anxiety, depression and sleep disorders by displacement of exercise, sleep and real-time socialization” (Greenfield and Bhavnani 2023).

Many young people see no problem with artificial intelligence generating content that keeps them glued to their screens. In June, Chris Murphy, a US senator from Connecticut who is sponsoring a bill that would bar social media platforms from using algorithms to boost content to teens, tweeted about a “recent chilling conversation with a group of teenagers.” Murphy told the teens that his bill might mean that kids “have to work a little harder to find relevant content. They were concerned by this. They strongly defended the TikTok/YouTube algorithms as essential to their lives” (Murphy 2023).

Murphy was alarmed that the teens “saw no value in the exercise of exploration. They were perfectly content having a machine spoon-feed them information, entertainment and connection.” Murphy recalled that as the conversation broke up, a teacher whispered to him, “These kids don’t realize how addicted they are. It’s scary.”

“It’s not just that kids are withdrawing from real life into their screens,” Murphy wrote. They’re also missing out on childhood’s rituals of discovery, which are being replaced by algorithms.

Rise of the chatbots

Generative AI has exploded in the past year. Today’s chatbots are far more powerful than digital assistants like Siri and Alexa, and they have quickly become some of the most popular tech applications of all time. Within two months of its release in November 2022, OpenAI’s ChatGPT already had an estimated 100 million users. ChatGPT’s growth began slowing in May, but Google’s Bard and Microsoft’s Bing are picking up speed, and a number of other companies are introducing chatbots of their own.

A chatbot is an application that mimics human conversation or writing and typically interacts with users online. Some chatbots are designed for specific tasks, while others are intended to chat with humans on a broad range of subjects.
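To make the distinction concrete, here is a minimal sketch of a task-specific chatbot loop in Python. The reply function is a hypothetical stand-in for a real language model, and the canned responses are invented for illustration; a general-purpose chatbot would hand any topic to a large model instead.

```python
def reply(message: str) -> str:
    """Hypothetical stand-in for the model behind a task-specific chatbot
    (here: appointment booking). Not a real API."""
    if "appointment" in message.lower():
        return "I can help with that. What day works for you?"
    return "Sorry, I can only help with appointments."

# A minimal chat loop: wait for the user, answer, repeat until they quit.
while True:
    user_message = input("You: ")
    if user_message.lower() in {"quit", "exit"}:
        break
    print("Bot:", reply(user_message))
```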

Like the teacher Murphy spoke with, many observers have used the word “addictive” to describe chatbots and other interactive applications. A recent study that examined the transcripts of in-depth interviews with 14 users of an AI companion chatbot called Replika reported that “under conditions of distress and lack of human companionship, individuals can develop an attachment to social chatbots if they perceive the chatbots’ responses to offer emotional support, encouragement, and psychological security. These findings suggest that social chatbots can be used for mental health and therapeutic purposes but have the potential to cause addiction and harm real-life intimate relationships” (Xie and Pentina 2022).

In parallel with the spread of chatbots, fears about AI have grown rapidly. At one extreme, some tech leaders and experts worry that AI could become an existential threat on a par with nuclear war and pandemics. Media coverage has also focused heavily on how AI will affect jobs and education.

For example, teachers are fretting over whether students might use chatbots to write papers that are essentially plagiarized, and some students have already been wrongly accused of doing just that. In May, a Texas A&M University professor handed out failing grades to an entire class when ChatGPT—used incorrectly—claimed to have written every essay that his students turned in. And at the University of California, Davis, a student was forced to defend herself when her paper was falsely flagged as AI-written by plagiarism-checking software (Klee 2023).

Independent philosopher Robert Hanna says cheating isn’t the main problem chatbots pose for education. Hanna’s worry is that students “are now simply refusing—and will increasingly refuse in the foreseeable future—to think and write for themselves.” Turning tasks like thinking and writing over to chatbots is like taking drugs to be happy instead of achieving happiness by doing “hard” things yourself, Hanna says (Hanna 2023).

Can chatbots be trusted?

Ultimately, the refusal to think for oneself could cause cognitive impairment. If future humans no longer need to acquire knowledge or express thoughts, they might ultimately find it impossible to understand one another. That’s the sort of “insanity” Lanier spoke of.

The risk of unintelligibility is heightened by the tendency of chatbots to give occasional answers that are inaccurate or fictitious. Chatbots are trained by “scraping” enormous amounts of content from the internet—some of it taken from sources like news articles and Wikipedia entries that have been edited and updated by humans, but much of it collected from other sources that are less reliable and trustworthy. This data, which is selected more for quantity than for quality, enables chatbots to generate intelligent-sounding responses based on mathematical probabilities of how words are typically strung together.
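A minimal sketch of that word-by-word prediction is below, with a toy probability table invented purely for illustration; a real chatbot learns these statistics from scraped text and conditions on thousands of tokens of context, not two words.

```python
import random

# Toy next-word probabilities, invented for illustration only. A real model
# learns distributions like these from enormous amounts of scraped text.
next_word_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.5, "Texas": 0.3, "Atlantis": 0.2},
}

def sample_next(context: tuple) -> str:
    """Pick the next word in proportion to how often it followed this context."""
    dist = next_word_probs[context]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

# The continuation sounds fluent either way; nothing in the procedure checks
# whether the finished sentence is true ("Atlantis" comes out 20% of the time).
print("the capital of", sample_next(("capital", "of")))
```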

In other words, chatbots are designed to produce text that sounds like something a human would say or write. But even when chatbots are trained with accurate information, they still sometimes make inexplicable errors or put words together in a way that sounds accurate but isn’t. And because the user typically can’t tell where the chatbot got its information, it’s difficult to check for accuracy.

Chatbots generally provide reliable information, though, so users may come to trust them more than they should. Children may be less likely than adults to realize when chatbots are giving incorrect or unsafe answers.

When they do share incorrect information, chatbots sound completely confident in their answers. And because they don’t have facial expressions or other human giveaways, it’s impossible to tell when a chatbot is BS-ing you.

AI developers have warned the public about these limitations. For instance, OpenAI acknowledges that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers.” This problem is difficult to fix, because chatbots are not trained to distinguish truth from lies, and training a chatbot to make it more cautious in its answers would also make it more likely to decline to answer (OpenAI undated).

Tech developers euphemistically refer to chatbot falsehoods as “hallucinations.” For example, all three of the leading chatbots (ChatGPT, Bard, and Bing) repeatedly gave detailed but inaccurate answers to a question about when The New York Times first reported on artificial intelligence. “Though false, the answers seemed plausible as they blurred and conflated people, events and ideas,” the newspaper reported (Weise and Metz 2023).

AI developers do not understand why chatbots sometimes make up names, dates, historical events, answers to simple math problems, and other definitive-sounding answers that are inaccurate and not based on training data. They hope to eliminate these hallucinations over time by, ironically, relying on humans to fine-tune chatbots in a process called “reinforcement learning from human feedback.”
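A minimal sketch of the pairwise-preference objective commonly used to train the reward model in that process appears below; the reward values are made up for illustration, and real systems train a neural reward model over many thousands of human comparisons.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry style loss for training a reward model from human
    comparisons: it is small when the model already scores the human-preferred
    answer higher than the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A human labeler preferred answer A over answer B for the same prompt.
print(preference_loss(reward_chosen=2.1, reward_rejected=0.3))  # small loss: model agrees
print(preference_loss(reward_chosen=0.3, reward_rejected=2.1))  # large loss: model disagrees
```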

But as humans come to rely more and more on tuned-up chatbots, the answers generated by these systems may begin to crowd out legacy information created by humans, including the original content that was used to train chatbots. Already, many Americans cannot agree on basic facts, and some are ready to kill each other over these differences. Add artificial intelligence to that toxic stew—with its ability to create fake videos and narratives that seem more realistic than ever before—and it may eventually become impossible for humans to sort fact from fiction, which could prove maddening. Literally.

It may also become increasingly difficult to tell the difference between humans and chatbots in the online world. There are currently no tools that can reliably distinguish between human-generated and AI-generated content, and distinctions between humans and chatbots are likely to become further blurred with the continued development of emotion AI—a subset of artificial intelligence that detects, interprets, and responds to human emotions. A chatbot with these capabilities could read users’ facial expressions and voice inflections, for example, and adjust its own behavior accordingly.
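As a toy illustration of what “adjusting its own behavior” could mean, the rule-based sketch below swaps reply styles based on a crude emotional cue; real emotion AI would use trained models over text, voice, or video rather than a keyword list.

```python
# A deliberately crude, rule-based stand-in for emotion AI: detect a negative
# emotional cue in the user's message and change the reply style accordingly.
NEGATIVE_CUES = {"sad", "lonely", "anxious", "hopeless"}

def detect_emotion(message: str) -> str:
    words = set(message.lower().split())
    return "distressed" if words & NEGATIVE_CUES else "neutral"

def respond(message: str) -> str:
    if detect_emotion(message) == "distressed":
        return "That sounds really hard. Do you want to talk about it?"
    return "Got it. What would you like to do next?"

print(respond("I feel so lonely tonight"))      # empathetic reply
print(respond("What's the weather tomorrow?"))  # neutral reply
```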

Emotion AI could prove especially useful for treating mental illness. But even garden-variety AI is already creating a lot of excitement among mental health professionals and tech companies.

The chatbot will see you now

Googling “artificial intelligence” plus “mental health” yields a host of results about AI’s promising future for researching and treating mental health issues. Leaving aside Google’s obvious bias toward AI, healthcare researchers and providers mostly view artificial intelligence as a boon to mental health, rather than a threat.

Using chatbots as therapists is not a new idea. MIT computer scientist Joseph Weizenbaum created the first digital therapist, Eliza, in 1966. He built it as a spoof and was alarmed when people enthusiastically embraced it. “His own secretary asked him to leave the room so that she could spend time alone with Eliza,” The New Yorker reported earlier this year (Khullar 2023).

Millions of people already use the customizable “AI companion” Replika or other chatbots that are intended to provide conversation and comfort. Tech startups focused on mental health have secured more venture capital in recent years than apps for any other medical issue.

Chatbots have some advantages over human therapists. Chatbots are good at analyzing patient data, which means they may be able to flag patterns or risk factors that humans might miss. For example, a Vanderbilt University study that combined a machine-learning algorithm with face-to-face screening found that the combined system did a better job at predicting suicide attempts and suicidal thoughts in adult patients at a major hospital than face-to-face screening alone (Wilimitis, Turer, and Ripperger 2022).
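The study’s appeal is that two imperfect signals can be combined; the sketch below shows one generic way to do that, with a simple “flag if either signal is strong” rule. It is only an illustration, not the Vanderbilt team’s actual model.

```python
def flag_for_followup(model_probability: float, screening_positive: bool,
                      threshold: float = 0.5) -> bool:
    """Generic combination of two risk signals, for illustration only.

    model_probability: risk estimate from a machine-learning model (0 to 1).
    screening_positive: result of the face-to-face screening questionnaire.
    """
    return screening_positive or model_probability >= threshold

# A patient who screens negative in person but whom the model rates as high
# risk is still flagged, which is how the combination can catch cases that
# screening alone would miss.
print(flag_for_followup(model_probability=0.72, screening_positive=False))  # True
```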

Some people feel more comfortable talking with chatbots than with doctors. Chatbots can see a virtually unlimited number of clients, are available to talk at any hour, and are more affordable than seeing a medical professional. They can provide frequent monitoring and encouragement—for example, reminding a patient to take their medication.

However, chatbot therapy is not without risks. What if a chatbot “hallucinates” and gives a patient bad medical information or advice? What if users who need professional help seek out chatbots that are not trained for that?

That’s what happened to a Belgian man named Pierre, who was depressed and anxious about climate change. As reported by the newspaper La Libre, Pierre used an app called Chai to get relief from his worries. Over the six weeks that Pierre texted with one of Chai’s chatbot characters, named Eliza, their conversations became increasingly disturbing and turned to suicide. Pierre’s wife believes he would not have taken his life without encouragement from Eliza (Xiang 2023).

Although Chai was not designed for mental health therapy, people are using it as a sounding board to discuss problems such as loneliness, eating disorders, and insomnia (Chai Research undated). The startup company that built the app predicts that “in two years’ time 50 percent of people will have an AI best friend.”

References

Centers for Disease Control and Prevention (CDC). 2023. “Facts About Suicide,” last reviewed May 8. https://www.cdc.gov/suicide/facts/index.html

Chai Research. Undated. “Chai Research: Building the Platform for AI Friendship.” https://www.chai-research.com/

Green, C. M., J. M. Foy, M. F. Earls, Committee on Psychosocial Aspects of Child and Family Health, Mental Health Leadership Work Group, A. Lavin, G. L. Askew, R. Baum et al. 2019. Achieving the Pediatric Mental Health Competencies. American Academy of Pediatrics Technical Report, November 1. https://publications.aap.org/pediatrics/article/144/5/e20192758/38253/Achieving-the-Pediatric-Mental-Health-Competencies

Greenfield, D. and S. Bhavnani. 2023. “Social media: generative AI could harm mental health.” Nature, May 23. https://www.nature.com/articles/d41586-023-01693-8

Hanna, R. 2023. “Addicted to Chatbots: ChatGPT as Substance D.” Medium, July 10. https://bobhannahbob1.medium.com/addicted-to-chatbots-chatgpt-as-substance-d-3b3da01b84fb

Hattenstone, S. 2023. “Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane.’” The Guardian, March 23. https://www.theguardian.com/technology/2023/mar/23/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane

Khullar, D. 2023. “Can A.I. Treat Mental Illness?” The New Yorker, February 27. https://www.newyorker.com/magazine/2023/03/06/can-ai-treat-mental-illness

Klee, M. 2023. “She Was Falsely Accused of Cheating with AI—And She Won’t Be the Last.” Rolling Stone, June 6. https://www.rollingstone.com/culture/culture-features/student-accused-ai-cheating-turnitin-1234747351/

McPhillips, D. 2023. “Suicide rises to 11th leading cause of death in the US in 2021, reversing two years of decline.” CNN, April 13.

Murphy, C. 2023. Twitter thread, June 2. https://twitter.com/ChrisMurphyCT/status/1664641521914634242

OpenAI. Undated. “Introducing ChatGPT.”

Substance Abuse and Mental Health Services Administration (SAMHSA). 2022. Key substance use and mental health indicators in the United States: Results from the 2021 National Survey on Drug Use and Health. Center for Behavioral Health Statistics and Quality, Substance Abuse and Mental Health Services Administration. https://www.samhsa.gov/data/report/2021-nsduh-annual-national-report

Surgeon General. 2023. Social Media and Youth Mental Health: The U.S. Surgeon General’s Advisory. https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf

Weise, K. and C. Metz. 2023. “When A.I. Chatbots Hallucinate.” The New York Times, May 1. https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html

Wilimitis, D., R. W. Turer, and M. Ripperger. 2022. “Integration of Face-to-Face Screening with Real-Time Machine Learning to Predict Risk of Suicide Among Adults.” JAMA Network Open, May 13. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2792289

Xiang, C. 2023. “‘He Would Still Be Here’: Man Dies by Suicide After Talking With AI Chatbot, Wife Says.” Motherboard, March 30. https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says

Xie, T. and I. Pentina. 2022. “Attachment Theory as a Framework to Understand Relationships with Social Chatbots: A Case Study of Replika.” In: Proceedings of the 55th Hawaii International Conference on System Sciences. https://scholarspace.manoa.hawaii.edu/items/5b6ed7af-78c8-49a3-bed2-bf8be1c9e465

Article link: https://thebulletin.org/premium/2023-09/will-ai-make-us-crazy/?
