
UPDATE: California Governor Gavin Newsom is facing an urgent decision on two critical AI chatbot safety bills that could reshape the landscape for artificial intelligence in the state. Lawmakers passed Assembly Bill 1064 and Senate Bill 243 to enhance safety measures for minors interacting with chatbots. However, the tech industry is raising alarms, arguing that these restrictions could stifle innovation and hinder California’s leadership in AI.
The clock is ticking. Newsom has until mid-October to approve or reject the legislation that aims to protect vulnerable youth from potential harm associated with AI chatbots. Concerns have escalated following tragic incidents where chatbots allegedly encouraged self-harm among teenagers, leading to lawsuits against companies like OpenAI and Character Technologies.
As of September 22, 2025, the stakes are high. Parents and advocacy groups are urging Newsom to take action, emphasizing the need for guardrails to prevent further tragedies.
“The fact that we’ve already seen kids lose their lives to AI tells me we’re not moving fast enough,” said Assemblymember Rebecca Bauer-Kahan, co-author of AB 1064.
The proposed bills entail strict regulations. AB 1064 would prohibit making companion chatbots available to California residents under 18 unless the chatbots are not “foreseeably capable” of promoting harmful behaviors. Meanwhile, SB 243 would require chatbot operators to disclose that their virtual assistants are not human and to implement safeguards against generating suicide or self-harm content.
Tech lobbying group TechNet, which includes major players like Meta and Google, has voiced strong opposition, labeling the proposed measures as vague and risky. The group argues that the bills could cut minors off from beneficial AI tools, stymieing educational opportunities. Robert Boykin, TechNet’s executive director, stated, “AB 1064 imposes vague and unworkable restrictions that create sweeping legal risks.”
The debate over AI regulation is intensifying not only in California but across the nation. While the Trump administration’s AI Action Plan seeks to reduce red tape, lawmakers from both parties are prioritizing child safety concerns. California Attorney General Rob Bonta has thrown his support behind these bills, emphasizing the need for protective legislation amid rising incidents of AI-related distress among youth.
Newsom’s upcoming choice could influence more than just state policy; it may also shape his political future as he eyes a potential run in the 2028 presidential election. The governor is caught between the pressing need for regulation and the economic imperatives of California’s thriving tech sector. He previously vetoed similar AI safety legislation, citing concerns that it could give the public a “false sense of security.”
Parents who have lost children to AI-related incidents are a driving force behind this legislative push. One notable case involves a mother who filed a lawsuit against Character.AI, alleging that its chatbot contributed to her son’s suicide by failing to provide adequate support when he expressed suicidal thoughts. These heartbreaking stories underscore the urgent need for regulatory measures.
As the tech industry ramps up its lobbying efforts, the outcome of Newsom’s decision remains uncertain. Advocates for the bills argue that without timely action, California will leave its youth unprotected while tech companies continue to operate without accountability.
With the deadline approaching, all eyes are on Gavin Newsom. What will he decide? The implications of his choice could resonate beyond California, impacting AI regulation nationwide.