California wants AI chatbots to remind users they aren’t people

California moves to regulate AI chatbots, requiring reminders that they aren't human. Anthropic bans AI-assisted job applications. AI distillation emerges as a key technique in model development. Meta considers halting high-risk AI projects.

In partnership with Notion

Welcome back to Daily Zaps, your regularly scheduled dose of AI news ⚡️

Here’s what we got for ya today:

  • 🤖 California wants AI chatbots to remind users they aren’t people

  • Don’t use AI to apply for jobs at this AI company

  • 🧪 What is AI distillation?

  • 🛑 Meta may halt development of high-risk AI systems

Let’s get right into it!

LEGAL

California wants AI chatbots to remind users they aren’t people

California’s proposed bill SB 243 seeks to regulate AI chatbots that interact with children by requiring periodic reminders that they are machines, not humans, with the aim of preventing unhealthy emotional attachments. Introduced by Senator Steve Padilla, the bill would also ban companies from using engagement-boosting rewards and require reports on how often minors exhibit suicidal ideation while using chatbots.

Chatbots can offer a safe space for self-expression, but their potential for isolation and manipulation remains a concern. Some experts argue, however, that the real issue is the lack of real-world support systems for kids: overcrowded classrooms, disappearing community spaces, and a shortage of mental health resources. Reminding kids that chatbots aren’t real is a step forward, but giving them meaningful human connections would be a far more effective solution.

CAREERS

Don’t use AI to apply for jobs at this AI company

Anthropic, the maker of the popular AI assistant Claude, requires job applicants to confirm that they did not use AI to write their applications, saying it needs to assess candidates’ unassisted communication skills and genuine interest in the company. The requirement appears in nearly 150 job listings, though some technical roles are exempt.

The policy highlights a paradox: Anthropic builds AI tools whose writing is nearly indistinguishable from a human’s, yet it wants to limit reliance on AI in its own hiring. As AI advances, it is also replacing many of the very roles Anthropic is hiring for, particularly in communications and coding.

FROM OUR PARTNER NOTION

Free Notion and Unlimited AI

Thousands of startups use Notion as a connected workspace to create and share docs, take notes, manage projects, and organize knowledge—all in one place. We’re offering 3 months of new Plus plans + unlimited AI (worth up to $3,000)! To redeem the Notion for Startups offer:

  1. Submit an application using our custom link and select Beehiiv on the partner list.

  2. Include our partner key, STARTUP4110P67801.

AI TECH

What is AI distillation?

The AI industry is abuzz with distillation, a technique in which a smaller AI model learns from a larger one by training on its outputs and reasoning traces. While OpenAI restricts distillation of its models, the technique is widespread: Berkeley researchers recently trained a high-performing reasoning model for just $450 by using Alibaba’s Qwen AI to generate training data.
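
For the curious, here’s a minimal sketch of the “big model generates training data, small model fine-tunes on it” flavor of distillation described above. It assumes Hugging Face’s transformers library; the teacher model ID and the toy prompt are illustrative stand-ins, not the actual Berkeley or DeepSeek pipelines.

# Minimal sketch of distillation via teacher-generated data (Python,
# assuming the Hugging Face transformers library). The model ID and
# prompt below are illustrative stand-ins, not the real pipelines.
from transformers import AutoModelForCausalLM, AutoTokenizer

TEACHER_ID = "Qwen/Qwen2.5-7B-Instruct"  # stand-in teacher model
tok = AutoTokenizer.from_pretrained(TEACHER_ID)
teacher = AutoModelForCausalLM.from_pretrained(TEACHER_ID)

prompts = ["What is 17 * 24? Think step by step."]  # toy prompt set

# Step 1: the large teacher model writes out reasoning traces.
pairs = []
for p in prompts:
    inputs = tok(p, return_tensors="pt")
    out = teacher.generate(**inputs, max_new_tokens=256)
    pairs.append({"prompt": p, "trace": tok.decode(out[0], skip_special_tokens=True)})

# Step 2 (not shown): fine-tune a smaller student model on these
# (prompt, trace) pairs with ordinary supervised next-token training.
# The student never touches the teacher's weights, only its text.

Because the knowledge transfers through ordinary generated text, hiding those traces is a provider’s main lever, which is exactly the dilemma OpenAI faces below.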

DeepSeek similarly refined its R1 model using math and coding problems, improving its problem-solving through repeated attempts and self-evaluation. Because DeepSeek openly shares its chain of thought, developers can replicate its approach, leaving OpenAI with a dilemma: increase transparency or further restrict access to its models’ reasoning processes.

BIG TECH

Meta may halt development of high-risk AI systems

Meta CEO Mark Zuckerberg has pledged to eventually make artificial general intelligence (AGI) openly available, but the company’s newly released Frontier AI Framework outlines scenarios in which it may withhold highly capable AI systems because of security risks. The framework sorts AI into "high-risk" and "critical-risk" systems, the latter posing potentially catastrophic, unmitigable threats such as enabling cyberattacks or the proliferation of biological weapons.

Rather than relying on empirical tests, Meta assesses risk based on expert input and internal review. High-risk AI will be restricted internally until mitigations lower its danger, while critical-risk AI may be halted entirely. This policy appears to address concerns over Meta’s open AI approach, particularly as its Llama models have been widely used—including, reportedly, by adversarial nations. The framework also distinguishes Meta from firms like DeepSeek, whose open AI models have fewer safeguards. Meta argues that balancing openness with responsible oversight will enable AI’s benefits while minimizing risks.

In case you’re interested — we’ve got hundreds of cool AI tools listed over at the Daily Zaps Tool Hub. 

If you have any cool tools to share, feel free to submit them or get in touch with us by replying to this email.
