A bipartisan push in Congress seeks to shield children from the unseen risks of AI chatbots, raising urgent questions about tech accountability. This initiative, driven by two senators from opposite sides of the aisle, targets the emotional and psychological dangers these tools may pose to young users.
As reported by Time, Senators Josh Hawley of Missouri and Richard Blumenthal of Connecticut have introduced the GUARD Act, a bill that would mandate age verification for users of AI companions like ChatGPT and bar minors from using them. The legislation aims to curb the potential for chatbots to manipulate emotions or encourage harmful behavior among vulnerable youth.
The bill defines AI companions broadly, encompassing any system offering human-like responses designed to simulate emotional or interpersonal interaction. This could apply to major players like OpenAI and Anthropic, as well as niche platforms like Character.ai, which craft specific personas for users to engage with. The scope signals a serious intent to rein in an industry often criticized for prioritizing engagement over safety.
Last month, Hawley chaired a Senate Judiciary subcommittee hearing on the harms of AI chatbots, where parents shared devastating stories of young men who self-harmed or took their own lives after interacting with these tools. These accounts, tied to products from OpenAI and Character.ai, underscore the bill’s urgency for those who see tech as outpacing ethical boundaries.
The legislation also follows Hawley’s August investigation into Meta’s AI policies, sparked by internal documents revealing chatbots could engage children in romantic or sensual conversations. Such revelations fuel skepticism about whether Big Tech can self-regulate when profit motives clash with child welfare. It’s a glaring reminder that innovation without guardrails can exact a steep human cost.
Adding to the momentum, a coalition including the Young People’s Alliance praised the GUARD Act, stating, “This bill is one part of a national movement to protect children and teens from the dangers of companion chatbots.” Yet, their call to tighten definitions and focus on platform design suggests even this bill may not fully address how addictive features exploit young minds.
The GUARD Act’s age checks demand more than a simple birthdate field, requiring government-issued ID or other reliable verification methods. This hard line aims to ensure no minor slips through the cracks, though it raises practical questions about implementation and privacy.
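To make that distinction concrete, here is a minimal sketch, in Python, of the kind of gate the bill seems to contemplate: a self-typed birthdate alone is never enough, and access turns on whether a reliable method actually confirmed the user’s age. Every name and type below is hypothetical; the bill specifies the outcome, not an implementation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VerificationResult:
    verified: bool          # did a reliable method (e.g., a government-ID check) succeed?
    birthdate: date | None  # the birthdate as confirmed by that method, not self-attested

def can_access_companion(result: VerificationResult, today: date) -> bool:
    """Allow access only if a reliable method confirmed the user is 18 or older."""
    if not result.verified or result.birthdate is None:
        return False  # a self-attested or missing birthdate does not pass the gate
    # Standard age computation: subtract years, then adjust if the
    # birthday hasn't occurred yet this year.
    age = today.year - result.birthdate.year - (
        (today.month, today.day) < (result.birthdate.month, result.birthdate.day)
    )
    return age >= 18
```

Defaulting to denial when verification is absent, rather than falling back to a typed birthdate, is what separates this approach from the checkbox-style age gates the bill is reacting against.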
More strikingly, the bill would criminalize designing chatbots that risk encouraging minors toward sexual conduct, self-injury, or violence, with fines of up to $100,000 for violations. It’s a bold move to hold companies accountable, signaling that negligence won’t be met with a mere slap on the wrist. The threat of real consequences might finally force tech giants to rethink their priorities.
Additionally, the legislation mandates periodic reminders to users that chatbots aren’t human and don’t offer professional services like medical or psychological advice. This transparency clause tackles the deceptive intimacy these tools can foster, a small but necessary step to ground users in reality.
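One plausible shape for that transparency clause, sketched in Python under assumed details: the reminder cadence and wording below are invented for illustration, not drawn from the bill’s text.

```python
# Illustrative sketch only: one way a chat service could inject the
# periodic "not a human" disclosure the bill calls for.

DISCLOSURE = (
    "Reminder: I am an AI chatbot, not a human, and I do not provide "
    "professional services such as medical or psychological advice."
)
REMINDER_EVERY_N_TURNS = 10  # assumed cadence; the bill does not set a number here

def with_disclosure(reply: str, turn_count: int) -> str:
    """Prepend the disclosure on the first turn and on every Nth turn after."""
    if turn_count == 1 or turn_count % REMINDER_EVERY_N_TURNS == 0:
        return f"{DISCLOSURE}\n\n{reply}"
    return reply
```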
Some companies have already taken steps, with OpenAI announcing in September an age-prediction system to direct minors to a teen-friendly ChatGPT version. Their policy includes avoiding flirtatious or self-harm discussions and contacting parents or authorities if suicidal ideation is detected, a reactive measure that still leaves room for doubt about enforcement.
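For illustration only, and not a description of OpenAI’s actual system, a routing layer of this general kind might look like the following, with every threshold, name, and escalation path assumed.

```python
# Generic sketch of age-based routing plus risk escalation; all identifiers
# here are hypothetical and do not reflect any provider's real pipeline.

def route_user(predicted_age: int | None) -> str:
    """Pick an experience tier; unknown ages default to the safer tier."""
    if predicted_age is None or predicted_age < 18:
        return "teen_experience"   # stricter content filters, restricted topics
    return "standard_experience"

def handle_safety_signal(signal: str, user_is_minor: bool) -> str:
    """Map a detected risk signal to an escalation path (illustrative)."""
    if signal == "suicidal_ideation":
        return "notify_guardian_or_authorities" if user_is_minor else "show_crisis_resources"
    return "continue_session"
```

The design choice worth noting is the default: when the age prediction is uncertain, the user lands in the restricted tier, which is the failure mode regulators appear to prefer.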
Meta and OpenAI also introduced parental controls this month, allowing oversight of children’s interactions with AI models. While these moves nod toward responsibility, they often feel like Band-Aids on a deeper wound when lawsuits, like one filed in August by a grieving family against OpenAI, allege intentional relaxation of safeguards for engagement’s sake.
California’s recent law, signed by Governor Gavin Newsom and effective January 2026, mirrors the GUARD Act’s spirit by requiring AI firms to implement child safety protocols, including addressing suicidal ideation. This state-level action suggests a growing consensus that federal inertia can’t persist while children remain at risk.
The GUARD Act represents a rare bipartisan effort to confront a tech frontier where emotional manipulation lurks behind every algorithm. It’s not about stifling innovation but ensuring that progress doesn’t come at the expense of our most impressionable minds.
Critics of unchecked AI often point to a culture that celebrates disruption without considering who gets hurt along the way. This bill, alongside state actions and industry adjustments, hints at a tipping point where society demands tech serve humanity, not exploit it.
Ultimately, protecting minors from AI’s darker potentials isn’t just a policy issue; it’s a moral imperative. As Hawley and Blumenthal lead this charge, the question remains whether Congress and companies will match their resolve with action before more families bear the unthinkable cost.