Protecting Children from AI Chatbots: What the GUARD Act Means

A new bipartisan bill introduced by Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., would prohibit minors (under 18) from interacting with certain AI chatbots. It responds to growing alarm over children’s use of “AI companions” and the risks these systems can pose.

Sign up to receive my FREE CyberGuy report
Get my best tech tips, urgent security alerts, and exclusive offers delivered right to your inbox. Plus, you’ll get instant access to my Ultimate Guide to Surviving Scams, free when you join my CYBERGUY.COM newsletter.

What’s the deal with the proposed GUARD Act?

Here are some of the key features of the proposed GUARD Act:

  • AI companies would be required to verify user age with “reasonable age verification measures” (e.g., a government ID) rather than simply asking for a date of birth (see the sketch after this list).
  • If a user is found to be under 18 years of age, a company must prohibit them from accessing an “AI companion.”
  • The bill also requires chatbots to clearly disclose, in every conversation, that they are not human and do not hold professional credentials (therapeutic, medical or legal).
  • It would create new criminal and civil penalties for companies that knowingly provide minors with chatbots that solicit or facilitate sexual content, self-harm or violence.
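
To make the age-verification requirement concrete, here is a minimal, hypothetical Python sketch of how such a gate might work. The verify_government_id function is an invented placeholder; the bill names no specific service or API, only that a self-reported birth date alone would not qualify.

```python
from datetime import date

MINIMUM_AGE = 18  # the GUARD Act's threshold for "AI companion" access

def years_between(born: date, today: date) -> int:
    """Whole years elapsed between two dates (i.e., current age)."""
    return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

def verify_government_id(user_id: str) -> date | None:
    """Hypothetical stand-in for a trusted identity check. A real system
    would call an external verification provider and return the verified
    date of birth, or None if verification fails."""
    return None  # stub: no verification is performed in this sketch

def may_access_ai_companion(user_id: str) -> bool:
    """Grant companion access only when a verified birth date shows 18+.
    Unverified users are treated as minors and blocked."""
    birth_date = verify_government_id(user_id)
    if birth_date is None:
        return False
    return years_between(birth_date, date.today()) >= MINIMUM_AGE
```

The design point worth noticing is the default: when verification fails, the sketch denies access rather than falling back to a self-reported age.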

Bipartisan lawmakers, including Senators Josh Hawley and Richard Blumenthal, introduced the GUARD Act to protect minors from unregulated AI chatbots. (Kurt “CyberGuy” Knutsson)

The motivation: Lawmakers cite testimony from parents, child welfare experts and growing lawsuits alleging that some chatbots manipulated minors, encouraged self-harm or worse. The basic framework of the GUARD Act is clear, but the details reveal how broad its reach could be for both tech companies and families.

META AI DOCUMENTS EXPOSED, ALLOWING CHATBOTS TO FLIRT WITH CHILDREN

Why is this so important?

This bill is more than just another technology standard. It sits at the center of a growing debate about how deeply artificial intelligence should reach into children’s lives.

Rapid growth of AI + child safety concerns

AI chatbots are no longer toys, and many children are using them; Hawley cited estimates that more than 70 percent of American children use these products. These chatbots produce human-like responses, mimic emotion, and often invite ongoing conversation. For minors, these interactions can blur the boundary between machine and human, and children may turn to an algorithm rather than a real person for guidance or emotional connection.

What is at stake in legal, ethical and technological terms

If this bill passes, it could reshape how the AI industry handles minors, age verification, disclosures, and liability. It shows that Congress is willing to move away from voluntary self-regulation and toward firm guardrails when children are involved. The proposal may also open the door to similar laws in other high-risk areas, such as mental health chatbots and educational assistants. Overall, it marks a shift from waiting to see how AI develops to acting now to protect young users.


Parents across the country are demanding stronger safeguards as more than 70 percent of children use AI chatbots that can mimic empathy and emotional support. (Kurt “CyberGuy” Knutsson)

Industry pushback and concerns about innovation

Some tech companies argue that such regulation could stifle innovation, limit beneficial uses of conversational AI (education, mental-health support for older teens), or impose heavy compliance burdens. This tension between safety and innovation is at the center of the debate.

What the GUARD Act requires of AI companies

If passed, the GUARD Act would impose strict federal standards on how AI companies design, verify and manage their chatbots, especially when minors are involved. The bill outlines several key obligations aimed at protecting children and holding companies accountable for harmful interactions.

  • The first important requirement focuses on age verification. Companies must use trusted methods, such as a government-issued ID or other verified tools, to confirm that a user is at least 18 years old. Simply asking for a date of birth would no longer be enough.
  • The second rule requires clear disclosures. Each chatbot must tell users at the beginning of each conversation, and at regular intervals, that it is an artificial intelligence system, not a human being. The chatbot must also clarify that it does not hold professional credentials such as medical, legal, or therapeutic licenses (see the sketch after this list).
  • Another provision bars minors from access. If a user is verified to be under 18, the company must block access to any “AI companion” features that simulate friendship, therapy, or emotional communication.
  • The bill also introduces civil and criminal penalties for companies that violate these rules. A company whose chatbot encourages or engages in sexually explicit conversations with minors, promotes self-harm, or incites violence could face significant fines or other legal consequences.
  • Finally, the GUARD Act defines an “AI companion” as a system designed to foster interpersonal or emotional interaction with users, such as friendship or therapeutic dialogue. This definition makes clear that the law targets chatbots capable of forming human-like connections, not limited-purpose assistants.
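
Here, likewise, is a minimal, hypothetical sketch of the disclosure rule. The bill requires the notice at the start of each conversation and at regular intervals, but it does not fix the interval or the wording, so both values below are illustrative assumptions.

```python
# Hypothetical disclosure wrapper; the wording and interval are assumptions,
# not language taken from the bill.
DISCLOSURE = (
    "Reminder: I am an AI system, not a human, and I hold no medical, "
    "legal, or therapeutic credentials."
)
DISCLOSURE_INTERVAL = 10  # repeat the notice every 10 replies (illustrative)

def with_disclosure(reply: str, reply_number: int) -> str:
    """Prefix the chatbot's reply with the disclosure on the first reply
    (number 0) and on every DISCLOSURE_INTERVAL-th reply thereafter."""
    if reply_number % DISCLOSURE_INTERVAL == 0:
        return f"{DISCLOSURE}\n\n{reply}"
    return reply

# Example: the first reply carries the notice; replies 1 through 9 do not.
print(with_disclosure("Happy to help with that homework question.", 0))
```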

The proposed GUARD Act would require chatbots to verify users’ ages, disclose that they are not human, and block users under 18 from “AI companion” features. (Kurt “CyberGuy” Knutsson)

OHIO LAWMAKER PROPOSES SWEEPING BAN ON MARRYING AI SYSTEMS AND GRANTING THEM LEGAL PERSONHOOD

How to stay safe in the meantime

Technology often advances faster than laws, which means families, schools and caregivers must take the lead in protecting young users right now. These steps can help create safer online habits as lawmakers debate how to regulate AI chatbots.

1) Know which bots your kids use

Start by finding out which chatbots your kids talk to and what they are designed to do. Some are made for entertainment or education, while others focus on emotional support or companionship. Understanding the purpose of each bot helps you detect when a tool goes from harmless fun to something more personal or manipulative.

2) Establish clear rules about interaction

Even if a chatbot is labeled safe, decide together when and how it can be used. Encourage open communication by asking your child to show you their chats and explain what they like about them. Framing this as curiosity, not control, builds trust and keeps the conversation going.

3) Use parental controls and age filters

Take advantage of built-in safety features whenever possible. Turn on parental controls, enable kid-friendly modes, and block apps that allow private or unsupervised chats. Small changes to your settings can make a big difference in reducing your child’s exposure to harmful or suggestive content.

4) Teach children that chatbots are not human

Remind kids that even the most advanced chatbot is still software. It can imitate empathy, but it doesn’t understand or care in a human sense. Help them recognize that advice about mental health, relationships, or safety should always come from trusted adults, not an algorithm.

5) Watch for warning signs

Stay alert for changes in behavior that could indicate a problem. If a child becomes withdrawn, spends long hours chatting privately with a chatbot, or repeats harmful ideas, intervene early. Talk openly about what is happening and, if necessary, seek professional help.

6) Stay informed as laws evolve

Regulations like the GUARD Act and new state measures, including California’s SB 243, are still taking shape. Keep up with developments so you know what protections are in place and what questions to ask app developers or schools. Awareness is the first line of defense in a fast-moving digital world.

Take my quiz: How safe is your online security?

Do you think your devices and data are really protected? Take this quick quiz to see where you stand digitally. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing well and what you need to improve. Take my quiz here: Cyberguy.com.

CLICK HERE TO DOWNLOAD THE FOX NEWS APP

Kurt’s Key Takeaways

The GUARD Act represents a bold step toward regulating the intersection of minors and AI chatbots. It reflects growing concerns that unmoderated AI companionship could harm vulnerable users, especially children. Of course, regulation alone will not solve all problems; industry practices, platform design, parental involvement, and education are important. But this bill indicates that the “build it and see what happens” era for conversational AI may be ending when kids are involved. As technology continues to evolve, our laws and personal practices must evolve as well. For now, staying informed, setting boundaries, and treating chatbot interactions with the same scrutiny as we treat human interactions can make a real difference.

If a law like the GUARD Act becomes a reality, should we expect similar regulation for all emotional AI tools aimed at children (tutors, virtual friends, games) or are chatbots fundamentally different? Let us know by writing to us at Cyberguy.com.

Copyright 2025 CyberGuy.com. All rights reserved.
