Chairman Joyce Delivers Opening Statement at Subcommittee on Oversight and Investigations Hearing on Artificial Intelligence (AI) Chatbots

WASHINGTON, D.C. – Congressman John Joyce, M.D. (PA-13), Chairman of the Subcommittee on Oversight and Investigations, delivered the following opening statement at today's hearing titled Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots.

Subcommittee Chairman Joyce’s opening statement as prepared for delivery: 

“Good afternoon, and welcome to today's hearing entitled ‘Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots.’

“Generative artificial intelligence (AI) chatbots are computer programs powered by large language models that simulate human conversation with a user. AI chatbots are increasingly integrated into the devices we use daily. For example, chatbots are built into search engines, social media platforms, and even some vehicle onboard software systems.  

“Moreover, chatbots are widely accessible and easy to use. A user simply enters a prompt—usually in the form of a command or question—and the chatbot responds almost instantaneously with human-like responses. With advanced processing capabilities, chatbots can summarize complex concepts, streamline customer service inquiries, and generate content on demand. Beyond their practical research and business uses, chatbots are also utilized for entertainment, therapy, and companionship by both adults and young people.  

“With continual prompts, users can cultivate a dialogue with a chatbot that can feel like a real interpersonal relationship. Through natural language processing, chatbots are designed to effectively engage with users in a human-like way that can instill a sense of comfort and companionship for the user.  

“Additionally, Americans are increasingly engaging with chatbots for mental health support. For some, turning to a chatbot for therapy can be helpful in limited circumstances when they have nowhere else to go. Under the wrong circumstances, however, these chatbot relationships can turn harmful.  

“First, users can feel a false sense of anonymity with chatbots, sharing personal or sensitive information that is not protected by confidentiality obligations. Moreover, chatbots retain data to enhance their ‘memory,’ which improves the quality of their interactions with users. This data is also used to train the chatbot’s base model to improve the accuracy of responses across the platform.  

“In addition to chatbots retaining data to improve their models, AI chatbots have been subject to data breaches, and if conversation data falls into the wrong hands, sensitive personal information can be obtained by malicious actors.  

“Second, chatbots are designed to maximize engagement with users. As a result, sycophantic chatbots have been found to affirm harmful or illogical beliefs, providing vulnerable users with perceived support for unhealthy behaviors such as self-harm, eating disorders, and suicide. For children and adults with a propensity for mental illness, this can be particularly problematic.  

“Many of us are familiar with recent cases where a relationship with a chatbot has proven harmful, and sometimes deadly, for users. Since AI chatbots emerged, there have been cases of adults and teens attempting or dying by suicide after long-term relationships with chatbots. In some cases, the chatbots encouraged or affirmed suicidal ideations.  

“Two months ago, the Federal Trade Commission launched an inquiry to understand what steps seven major AI chatbot companies are taking to protect children and teens from harm. I am hopeful that this inquiry will shed light on ways that these technologies can be improved to keep children safe.  

“My goal today is to have a balanced, frank conversation about the potential benefits and harms of AI chatbots to Americans. It is important that we consider the implications of these technologies as we balance the benefits of AI innovation with protecting the most vulnerable among us.  

“I want to thank the witnesses for being here today, and I look forward to hearing from you on this important topic.” 

 ###