The First AI Safety Laws for Kids Are Here. Now It’s Parents’ Turn to Lead

By Amy D. Love, Founder of DISCOVERING AI and Author of RAISING ENTREPRENEURS: Preparing Kids for Success in the Age of AI
Last week, Governor Gavin Newsom signed six new laws making California the first state in the nation to establish formal protections for children in the age of artificial intelligence.
For parents, this moment matters. It signals that lawmakers are beginning to recognize what families have already experienced: technology is shaping our children’s learning, relationships, and mental health faster than schools or policies can keep up.
California’s action carries weight far beyond its borders. Many of the world’s leading AI companies, including Google, Meta, Anthropic, and OpenAI, are headquartered or operate here. When California acts, the ripple effects often shape national policy and global industry standards. These laws will likely influence how AI products are built, marketed, and monitored for children everywhere.
These laws are an important start, yet they also reveal something else. Government action is slow, and childhood moves quickly. Families cannot wait for national standards or curriculum reform. The next generation is already growing up with AI in their hands, and they need guidance now.
Why Families Must Lead While Policy Catches Up
In a recent conversation I had with Pat Yongpradit, Chief Academic Officer for Code.org and Lead for TeachAI, as part of the DISCOVERING AI National Back-to-School Game Plan discussion, he shared what progress looks like and why parents play such a vital role.
“As of October 2024, every U.S. state has adopted some policy to promote K–12 computer science education,” he said. “We found that 60% of high schools are offering a foundational computer science course. That is a significant improvement, yet it still means 40% of schools do not. Only twelve states currently require computer science for graduation, often for just one semester or one year.”
He went on to explain what that means for AI readiness.
“The road ahead for AI education will be tough. It will take effort to provide access for every student and to help them move beyond being passive consumers of AI to becoming active creators as part of their educational experience.”
His words underscore the challenge. California may now be the first state to legislate AI safety and accountability for minors, yet the reality remains that most schools across the country are still catching up on computer science itself. If only twelve states currently require computer science for graduation, how can we expect AI literacy to reach every child?
The issue is not a lack of interest from schools. Change takes time. Updating curriculum, training teachers, and finding space for new subjects requires coordination and tradeoffs. That is why families matter so deeply in this transition.
As Pat noted,
“Even if schools are not teaching AI yet or have limited access, kids are still using AI outside of school. They need to learn about it, and that is where parents come in. Families can help build AI literacy and encourage schools to move faster.”
This is the defining moment. Legislation can create boundaries. Schools can build curriculum. Parents can help children develop understanding and confidence right now. Together, those efforts can prepare the Glass Generation, children born between 2010 and 2025, for a future that rewards discernment, empathy, and creativity.
What California’s Action Means for Families
California’s leadership marks a cultural shift. The state’s new laws emphasize that technology must evolve with human values at its core. They also establish that children’s digital safety cannot depend on corporate promises alone.
This progress is not about rejecting technology. It is about learning to use it wisely, teaching it responsibly, and ensuring that innovation never outpaces humanity. Parents, educators, and policymakers now share a collective responsibility: to help children understand AI, use it ethically, and develop skills that support both curiosity and conscience.
Here is a summary of what these six new laws mean for families and for the future of raising children in the age of AI.
SB 243 – AI Chatbot Safeguards
SB 243 requires AI chatbot developers to block discussions about suicide and self-harm, prevent bots from posing as therapists or generating sexual content, provide links to crisis resources, and remind minors every three hours that they are interacting with AI. Families can take legal action if companies fail to comply.
Imagine a teenager struggling late at night who turns to an AI companion for comfort. Under this law, the chatbot must redirect the conversation away from self-harm and connect the user to real human help lines. This helps ensure that AI remains a supportive tool rather than a substitute for genuine connection.
AB 56 – Social Media Mental Health Warnings
AB 56 requires social platforms to display Surgeon General–style mental health warnings for users under 18. These appear at login and after extended use, reminding teens that social media carries risks of anxiety, depression, and addiction.
Picture a 14-year-old who has been scrolling for hours. A message appears saying, “Social media use is associated with significant mental health harms and has not been proven safe for young users.” It provides a moment of reflection and awareness in a space built for constant engagement. When paired with family discussions, these reminders can turn unconscious scrolling into mindful use.
AB 316 – AI Accountability
AB 316 clarifies that companies cannot claim an AI “acted autonomously” when harm occurs. Humans remain responsible for product design, testing, and oversight. The law places accountability where it belongs: with the creators, not the code.
Imagine an AI learning platform that begins offering biased or misleading results to students. Under this law, the company must investigate and correct those errors rather than deflect responsibility. This ensures that human oversight remains central in how technology impacts learning.
AB 621 – Deepfake Penalties
AB 621 expands liability for anyone who creates or shares non-consensual, AI-generated sexual content. Penalties reach $250,000 for malicious acts, and platforms that knowingly host or profit from deepfakes can also be held accountable.
Consider a high school student whose photo is manipulated and shared without consent. This law strengthens the ability of prosecutors to act swiftly, remove harmful material, and protect the dignity of young people. It reinforces that innovation must serve humanity, not exploit it.
SB 53 and AB 853 – Transparency for AI Developers
These companion bills require large AI developers to submit annual transparency reports explaining how their systems work, what risks they pose, and how bias and safety are addressed. The goal is to bring visibility to an industry that has often operated without public accountability.
For parents and educators, this provides insight into the technology students are using. Imagine a district evaluating an AI tutoring app. With transparency reports available, schools can understand how the system was trained, whether it safeguards student data, and if it aligns with educational and ethical standards.
SB 772 – Cyberbullying and AI Harassment Policies for Schools
SB 772 requires schools to update cyberbullying policies each year and include education for students, teachers, and parents about digital harassment, AI-enabled impersonation, and deepfake abuse. The law also encourages restorative justice and mental health support to repair harm when incidents occur.
Picture a student whose likeness is replicated by an AI image generator and shared as a joke. With this law, schools must respond quickly and appropriately, not only through discipline but also through education and emotional repair. It reframes digital safety as part of learning, not simply enforcement.
Protecting the Glass Generation
In Raising Entrepreneurs: Preparing Kids for Success in the Age of AI, I describe today’s youth as the Glass Generation, children growing up behind glass screens and under constant observation. Their lives are visible, measurable, and shaped by algorithms that often know more about them than the adults guiding them.
These laws acknowledge the reality of that experience. They set new expectations for how technology interacts with children and provide the foundation for healthier engagement. The next step is for families to turn awareness into daily action at home.
From DISCOVERING AI: Building the Guardrails at Home
At DISCOVERING AI, we believe technology can be a powerful catalyst for learning and creativity when guided with intention. The key is not to fear AI but to learn how to use it wisely.
That is why we created the FAMILY AI GAME PLAN™, a ten-minute framework that helps families turn awareness into action. It walks parents and caregivers through:

Identifying where AI already appears in daily life
Setting clear, values-aligned boundaries
Building AI literacy together through shared experiences
Creating a written “AI Agreement” that grows as your child does
It is free, practical, and designed for real families. Legislation like SB 243 can make the internet safer, while the FAMILY AI GAME PLAN™ helps make homes stronger.
Build yours today at DISCOVERINGAI.org.
Learn It. Live It. Lead with It.
DISCOVERING AI: Learn It. Live It. Lead with It! At home, at work, and in the world.
Please Spread the Word:
Tag one person in the comments who you believe would find value in reading this article.
To learn more, follow 💠 Amy Love 💠 and subscribe to the DISCOVERING AI newsletter.
BONUS:
Launching October 23, 2025: DISCOVERING AI: A Parent’s Guide to Raising Future-Ready Kids, a clear roadmap to guide your family with confidence in the Age of AI.
