A new bill introduced in California aims to temporarily halt the sale of AI toys with chatbot capabilities to minors. California State Senator Steve Padilla proposed the legislation, SB 867, on Monday, seeking a four-year ban on the manufacture and sale of these toys to individuals under 18. The move comes amid growing concerns about the potential for harmful interactions between children and artificial intelligence.
To become law, the bill will need to pass the state legislature in Sacramento and be signed by the governor. It reflects a broader national conversation about regulating artificial intelligence, particularly as it relates to vulnerable populations like children. The proposed pause is intended to give regulators time to develop comprehensive safety standards for AI-powered toys.
Growing Concerns Over AI Toy Safety
The impetus for SB 867 is a series of troubling incidents and reports regarding children’s interactions with AI chatbots. Several lawsuits have been filed against chatbot developers by families alleging that prolonged conversations with AI contributed to their children’s suicides. These cases have highlighted the potential for AI to provide harmful advice or exacerbate existing mental health issues.
Additionally, consumer advocacy groups have demonstrated how easily these toys can be prompted to generate inappropriate or dangerous content. In November 2025, the PIRG Education Fund reported that toys like Kumma, an AI-powered bear, could be manipulated into discussing sensitive topics such as weapons and sexual content. This raises questions about the safeguards currently in place to protect children from exposure to harmful material.
Political Context and Federal Action
The timing of the bill is also influenced by a recent executive order from President Trump. The order directed federal agencies to challenge state-level AI regulations, but notably included an exception for laws focused on child safety. This suggests a degree of bipartisan recognition of children’s unique vulnerabilities in the context of AI development.
Senator Padilla also recently co-authored California’s SB 243, which requires chatbot operators to implement measures protecting children and other vulnerable users. This demonstrates a proactive approach by California lawmakers to addressing the risks associated with AI technology. The new bill builds on that foundation by specifically targeting the toy industry.
The concerns aren’t limited to content generation. Reports have also surfaced regarding potential biases embedded within AI systems used in children’s toys. For example, NBC News reported that the Miiloo “AI toy for kids,” manufactured by a Chinese company, sometimes exhibited programming aligned with the values of the Chinese Communist Party. This raises concerns about the influence of political ideologies on AI-driven interactions with children.
Industry Response and Delayed Releases
The toy industry itself is grappling with the challenges of integrating AI into its products. OpenAI and Mattel, the maker of Barbie, had planned to release an “AI-powered product” in 2025, but have since delayed the launch. Neither company has publicly explained the reason for the delay, fueling speculation about potential safety concerns or regulatory hurdles.
This delay underscores the complexities of developing and deploying AI technology responsibly, particularly in products marketed to children. Companies are facing increased scrutiny from regulators, advocacy groups, and the public over the potential risks of AI-powered toys. Artificial intelligence is evolving rapidly, and the toy industry is struggling to keep pace with its ethical and safety implications.
Senator Padilla emphasized the need for caution, stating that children should not be “used as lab rats” for technology experimentation. He argues that a temporary pause in sales is necessary to allow for the development of appropriate safety guidelines and a robust regulatory framework. This framework would need to address issues such as data privacy, content filtering, and the potential for emotional harm.
The broader implications of this bill extend beyond the toy industry. It could set a precedent for other states to consider similar regulations, and it may influence federal policymakers to take a more active role in overseeing the development and deployment of AI technology. The debate over AI regulation is likely to intensify as AI becomes more pervasive in everyday life.
Looking ahead, SB 867 will be debated in the California State Senate and Assembly. If passed, it would likely face legal challenges from industry groups. The bill’s success will depend on the ability of its proponents to demonstrate a clear and present danger to children posed by unregulated AI toys. The next key date will be the committee hearing scheduled for early March 2026, where the bill will be first discussed and potentially amended. The outcome of this legislation, and similar efforts nationwide, will significantly shape the future of child safety in the age of AI.