In a bipartisan effort to protect young users from AI risks, two U.S. senators unveiled legislation on Tuesday that would prohibit anyone under 18 from accessing chatbots and require companies to verify users’ ages through government-issued IDs or facial scans.
The GUARD Act, introduced by Republican Josh Hawley of Missouri and Democrat Richard Blumenthal of Connecticut, marks the latest effort to regulate artificial intelligence amid mounting concerns over its impact on children.
The bill arrives just weeks after a Senate hearing where parents and safety experts detailed cases of teens forming harmful bonds with AI companions, including instances tied to self-harm.
Under the proposal, platforms like ChatGPT or Character.AI would have to confirm a user’s age before granting access. They would also need to remind users every half hour that they’re speaking with a machine, and they would face stiff penalties for generating adult content aimed at minors or for encouraging suicide. Violations could trigger criminal charges or civil fines.
“Big Tech has betrayed any claim that we should trust companies to do the right thing on their own,” Blumenthal said in a statement. “They consistently put profit first ahead of child safety.”
The measure echoes California’s recent AI safety law, which already bars bots from impersonating humans. It also builds on the Kids Online Safety Act, signed this summer, which compels social platforms to curb mental health risks for minors.
Critics warn that the age-verification mandate could raise privacy concerns or block teens from using AI for homework and learning. Supporters counter that the safeguards are overdue, pointing to surveys showing nearly a third of teenagers now turn to chatbots for serious emotional support.
If enacted, the GUARD Act would force a major overhaul of how AI firms handle underage users. With lawsuits already piling up against chatbot makers, the bill signals lawmakers are ready to impose tougher rules on an industry long accustomed to self-regulation.