The Age Problem No One Can Solve

(Image source: https://www.wired.com/story/age-verification-is-sweeping-gaming-is-it-ready-for-the-age-of-ai-fakes/)
OpenAI's teen safety features expose an impossible choice: build nothing or build surveillance infrastructure. There's no third option.
OpenAI announced new teen safety features for ChatGPT in September 2025. An age-prediction system that identifies users under 18 and routes them to content filters. Graphic sexual content gets blocked and suicidal ideation triggers parental alerts. The announcement landed exactly how you'd expect: praised by some advocacy groups, immediately criticized by privacy advocates, and treated with suspicion by everyone who understands what "age prediction" requires.
Here's what nobody wants to say out loud: OpenAI is trapped in a no-win situation of society's making. They're getting sued after teens get hurt using AI chatbots. Regulators are circling like sharks at a beach with a "maybe blood" sighting. Congress wants hearings where they can look concerned without doing anything. Parents demand protection and advocacy groups say current measures aren't enough. Do nothing and face legal catastrophe. Do something and build infrastructure that makes everyone uncomfortable.
They picked "do something" and it's probably the least-bad choice available. Welcome to trust and safety in 2025, where all your options are terrible and everyone's mad at you anyway.

What Age Prediction Means

Here's the technical reality most press coverage skips: you cannot predict someone's age from a chat conversation without first building a behavioral model of how different age groups communicate. This isn't speculation; it's how classification works. To sort users into age groups, a system first has to learn what each group's writing looks like.
Think about how you can sometimes tell if you're texting with a teenager versus an adult, even without knowing who they are. Word choices, sentence structure, topics they care about, how they express emotions, even punctuation. Teenagers discussing homework stress at 11pm write differently than adults discussing work stress at the same hour. Those patterns are real and detectable.
OpenAI's system trains artificial intelligence on thousands of conversations from different age groups until it learns to spot those patterns automatically. Feed it enough examples and the AI gets surprisingly good at inferring age from writing style. No birthday required.
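To make the mechanics less abstract, here's a minimal sketch of how that kind of classifier could be built, using a generic text-classification pipeline (scikit-learn) and a couple of invented training examples. OpenAI hasn't published its architecture, so treat this as an illustration of the technique, not a description of their system.

```python
# Minimal sketch: inferring an age bracket from writing style.
# Hypothetical data and labels; OpenAI's actual system is not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example is (conversation text, age bracket it came from).
training_data = [
    ("ugh this chem homework is due at midnight and i haven't started", "under_18"),
    ("Can you help me draft a performance review for a direct report?", "adult"),
    # ... thousands more labeled conversations in a real system
]
texts, labels = zip(*training_data)

# Character n-grams capture style: punctuation habits, slang, sentence length.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# At inference time, a new conversation gets a probability per age bracket.
probs = model.predict_proba(["who even assigns essays over spring break lol"])
print(dict(zip(model.classes_, probs[0].round(2))))
```

Two toy examples won't produce meaningful predictions, but the shape is the point: age becomes a probability inferred from style, and every message a user sends is potential training or inference data.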
Let's call a spade a spade: this is behavioral profiling, a system that infers personal characteristics from communication patterns.
Here's the part that should make everyone pause: the infrastructure that can detect age can detect anything. The same pattern-matching technology that identifies teenage writing patterns can identify depression markers, political leanings, sexual orientation, suicidal ideation (which OpenAI explicitly says they're monitoring for), consumer vulnerability, religious beliefs—basically any psychological or demographic characteristic that leaves patterns in how people communicate.
To be clear: OpenAI isn't necessarily doing all of that, but the technical capability exists once you build the system. Machine learning models trained to spot patterns in human behavior can spot patterns in human behavior. Which specific patterns you choose to look for is a policy decision, not a technical limitation.
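To make the "policy decision, not technical limitation" point concrete, here's the same kind of pipeline wrapped in a helper, where the only thing that changes between detecting age and detecting anything else is the labels you train it on. The data and targets below are invented for illustration; this is not a claim about what OpenAI or anyone else has actually built.

```python
# The pipeline is attribute-agnostic: what it detects is set entirely by
# the labels in the training data. Hypothetical examples for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_trait_detector(examples):
    """examples: list of (conversation_text, trait_label) pairs."""
    texts, labels = zip(*examples)
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(max_iter=1000),
    )
    return model.fit(texts, labels)

# Identical code path, different target: only the labels change.
age_model = train_trait_detector([
    ("ugh this essay is due tomorrow and i haven't started", "under_18"),
    ("drafting the quarterly budget memo for the leadership team", "adult"),
])
distress_model = train_trait_detector([
    ("i just feel like nothing is going to get better", "distress_markers"),
    ("great week, went hiking with friends on saturday", "baseline"),
])
```

Swapping the label set is a one-line change; everything downstream of that choice is governance, not engineering.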
For regulators, this highlights a policy gap you could drive a truck through. Current frameworks don't clearly address behavioral inference systems. Is this covered under existing child protection law? Privacy regulations? Both? Neither? Companies are making these calls without clear guidance because the guidance doesn't exist yet.

The Choices Nobody Wanted

Let's be clear about what options existed here, and why each one fails.
Option one: Do nothing. Keep ChatGPT age-neutral, let anyone use it however they want, hope nothing terrible happens. When something terrible inevitably happens, face the lawsuit, the regulatory action, the Congressional hearing where someone holds up a photo of a dead kid and asks why you didn't do more.
This isn't a real option. Not after Character.AI was hit with wrongful death lawsuits over a teen's suicide in February 2024. Not when the Federal Trade Commission launched an investigation in September 2025 into AI companies' chatbot safety practices. Not when Section 230 protections (which shield platforms from liability for user content) may not clearly apply to AI-generated content, a legal gray area currently being tested in courts. Every company in this space is one tragedy away from existence-threatening litigation.
For parents: imagine your teenager is struggling and turns to an AI chatbot because they can't talk to you. If that company did nothing to protect your child and something terrible happens, you'd want them held accountable. That pressure is real and justified.
Option two: Hard age verification. Require government ID, credit cards, facial recognition, something definitive. Immediately exclude millions of users who don't have documentation or don't want to provide it. Watch teenagers borrow their parents' accounts anyway. Create massive privacy problems storing sensitive identity documents. Spend years fighting with regulators about your database of verified identities.
This isn't viable either. Age verification at scale is simultaneously a privacy nightmare and ineffective. Every parent knows their teenager can bypass parental controls. VPNs, borrowed accounts, fake IDs, the workarounds are endless. You've built an expensive system that excludes legitimate users while determined teens sail right through.
For industry professionals: storing verified identity documents puts you squarely under GDPR, CCPA, and emerging state-level privacy laws with conflicting requirements. The liability exposure is enormous. And after all that investment, teenagers just use their parents' verified accounts. You've spent millions to achieve nothing while making privacy advocates hate you.
For regulators: multiple jurisdictions are pushing age verification mandates without reckoning with their limitations. The UK's Age-Appropriate Design Code, Texas's age verification mandates, Florida's social media restrictions, California's Age-Appropriate Design Code (currently enjoined on First Amendment grounds), and proposed federal legislation all assume workable technical solutions exist. They don't. This needs honest assessment of what's technically feasible versus what sounds good in a press release.
Option three: Behavioral detection. Build systems that infer age from usage patterns. More privacy-preserving than hard verification (no identity documents stored), probably more accurate (harder to fake your writing style than your birthday), but requires building that behavioral profiling infrastructure everyone worries about.
OpenAI chose option three. Given the constraints, it's probably the right technical call. That doesn't make anyone comfortable with the implications.

When Algorithms Call Your Parents

The parental notification piece deserves its own uncomfortable examination, because this is where behavioral detection crosses into life-or-death decisions.
OpenAI's system will alert parents if it detects a user considering suicide or self-harm. Set aside for a moment the technical question of how well AI can detect suicidal ideation (it's a genuinely hard problem with lots of false positives and false negatives). Focus on the decision itself: an algorithm determining when to break what would be confidentiality in almost any other context.
Sometimes this saves lives. A parent finds out their kid is struggling, gets them professional help, crisis averted. The teenager who seemed fine was planning something terrible, and intervention prevented it.
Sometimes this destroys the one safe outlet a kid had. They were talking to ChatGPT because they couldn't talk to their parents. Maybe their parents are the source of their distress. Maybe they're LGBTQ+ in a household that won't accept them. Maybe they're in a cultural context where mental health struggles bring shame. Maybe they're working through dark thoughts in a safe space and aren't in danger. Maybe their parents will overreact, yank away their privacy, make everything worse. The AI can't tell which situation it's in. Neither can anyone else until it's too late.
Sometimes this puts kids in danger. Not every family is safe. Not every parent responds to mental health crises with support and therapy. Some respond with anger, punishment, religious intervention that does more harm than good, or worse. For some teenagers, parental notification of suicidal thoughts isn't protection: it's another threat.
OpenAI knows this and their trust and safety team definitely knows this. They're making a calculated decision that the lives saved outweigh the harm caused, but they're making that calculation without knowing the numbers. Nobody knows the numbers, because this territory is largely uncharted. We're all just guessing with high stakes and hoping for the best.
For parents: imagine getting an alert that your teenager expressed suicidal thoughts to an AI. Would you know what to do? Would you respond with support, or anger that they didn't come to you? Would you have the resources to get them help? Would you make it worse by taking away their privacy and independence? OpenAI's system assumes you're prepared to handle this notification appropriately. Most parents aren't.
Here's the part that should trouble everyone: we're letting AI companies make decisions that trained mental health professionals agonize over.
In traditional therapeutic contexts, deciding whether to break confidentiality for a minor expressing suicidal ideation involves careful judgment. Therapists consider the immediacy of risk, the specificity of plans, the availability of means, the support systems in place, the family dynamics, and the potential consequences of notification versus maintaining the therapeutic relationship. They have ethical frameworks, legal obligations under duty-to-warn statutes, and years of training guiding these calls. Even with all that, they sometimes get it wrong.
OpenAI gets to make this decision with pattern-matching algorithms.
Not because they're evil or reckless. Because someone has to make the call, and society hasn't provided clear guidance about what the rules should be for AI systems. So companies are building what they hope is responsible while trying to balance legal liability, genuine safety concerns, privacy principles, parents' expectations, and their own ethical standards.
For industry professionals: mental health detection is becoming table stakes for any platform with significant minor users. Your trust and safety teams need frameworks for these decisions: what triggers notification, how you verify parental contact information, what you do when parents don't respond, how you handle false positives. Companies building these systems need clinical consultation, not just engineering.
For regulators: this is happening in a complete policy vacuum. COPPA governs data collection but doesn't address mental health interventions. State laws vary wildly on parental notification requirements. Mental health professionals have clear ethical guidelines; AI companies don't.

The Tradeoff We're Avoiding

Here's what this comes down to: you cannot have effective age-appropriate AI without building systems that make privacy advocates deeply uncomfortable.
Protecting teenagers requires knowing things about them: their age, their mental state, their conversation patterns, their risk factors. You can't protect someone without identifying them and understanding their situation. That's true of every other safety system humans have ever built, yet somehow we expect AI to magic the requirement away.
We want companies to protect teens while building no infrastructure that could possibly be misused. We want age-appropriate content without age detection. We want mental health interventions without psychological profiling. We want perfect safety with perfect privacy.
Pick one. You don't get both. Physics doesn't work that way, and neither does technology.
OpenAI is getting criticized from both sides simultaneously. Privacy advocates hate the profiling infrastructure. Parents' groups say the protections don't go far enough. Both are correct from their own perspectives. The problem is that their demands are technically incompatible.
This is the bind every AI company faces right now. Build nothing and face lawsuits when kids get hurt. Build something and face criticism for the tradeoffs that "something" requires. There's no option that makes everyone happy because the underlying problem has no clean solution.

What We Need

The real failure here isn't OpenAI's system design or their judgment calls. It's that we're forcing companies to make impossible choices in a policy vacuum, without clear guidance about what society wants them to do.
Should AI companies be able to infer user age from behavior? Should they alert parents about mental health concerns? Should they block content for minors that adults can access? Under what circumstances? With what safeguards? These are fundamental policy questions disguised as technical problems. We've outsourced them to corporate trust and safety teams because nobody else wants responsibility for the hard calls.
Congress could pass clear legislation defining what companies must or cannot do regarding minor users. They won't, because these decisions are politically toxic and the technology moves faster than legislative processes. Much easier to hold hearings where they can yell at tech CEOs on C-SPAN while accomplishing exactly nothing.
Federal regulators could provide detailed guidance. The FTC could clarify what "unfair or deceptive practices" means for AI systems serving minors. The Department of Health and Human Services could offer frameworks for mental health interventions. They don't, partly because the expertise doesn't exist within regulatory agencies, partly because any guidance will immediately face legal challenges.
State legislatures are filling the void with contradictory requirements. Between California's Age-Appropriate Design Code (currently blocked by the courts on First Amendment grounds), Texas's age verification mandates, and Florida's social media restrictions, companies are navigating a patchwork of conflicting state laws with no federal framework. This creates compliance nightmares while not solving the underlying problems.
Which brings us to an ugly irony: while these problems escalate, companies have spent the past two years systematically dismantling their trust and safety infrastructure for "efficiency" and "cost optimization." The teams best equipped to handle impossible tradeoffs got eliminated right when you need them most. Like firing your snowplow drivers in November because winter's not here yet.
OpenAI still has substantial trust and safety capacity, which is probably why they can attempt something this complex. Most companies cut those teams and are now discovering they need exactly that expertise. You can't outsource age-appropriate AI to your cybersecurity team or your legal department alone. This requires cross-functional collaboration guided by people who understand the full scope of the challenge.
Companies need operational protocols that acknowledge uncertainty. Frameworks for what triggers parental notification and at what confidence threshold. How you verify parental contact information and what you do when it's missing. Response protocols when parents don't respond to alerts. How you handle false positives without destroying user trust. Regional and cultural considerations that affect appropriate responses.
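To show what "frameworks" means at the level of code, here's a hypothetical sketch of the kind of gating logic a trust and safety team would have to specify. Every threshold, fallback, and escalation path below is invented for illustration; none of it is drawn from OpenAI's actual protocol.

```python
# Hypothetical sketch of a notification-gating policy, not any company's
# actual protocol. Every threshold and fallback here is a policy choice
# that someone has to own.
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    NO_ACTION = auto()
    SHOW_CRISIS_RESOURCES = auto()   # e.g., surface hotline info in-product
    NOTIFY_PARENT = auto()
    ESCALATE_TO_HUMAN_REVIEW = auto()

@dataclass
class RiskSignal:
    self_harm_confidence: float      # model score in [0, 1]
    user_is_minor: bool
    parent_contact_verified: bool

def decide(signal: RiskSignal,
           notify_threshold: float = 0.9,
           resource_threshold: float = 0.5) -> Action:
    """Map a model score to an intervention. The thresholds encode the
    false-positive / false-negative tradeoff discussed above."""
    if signal.self_harm_confidence < resource_threshold:
        return Action.NO_ACTION
    if signal.self_harm_confidence < notify_threshold or not signal.user_is_minor:
        return Action.SHOW_CRISIS_RESOURCES
    if not signal.parent_contact_verified:
        # No verified parent to call: route to a human reviewer, not silence.
        return Action.ESCALATE_TO_HUMAN_REVIEW
    return Action.NOTIFY_PARENT
```

Writing that function takes minutes. Deciding what the notification threshold should actually be, who reviews escalations at 3am, and what happens when a verified parent never responds is where the real work lives.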
These aren't technical problems. They're operational challenges that require human judgment supported by good systems. And companies need transparency about their limitations: you cannot catch every case, you will have false positives and false negatives, you will make calls that in hindsight were wrong. Building these systems means accepting that you're working with imperfect tools to address impossible problems.
For parents: if you have teenagers using AI chatbots, understand that these systems cannot reliably determine when your child is in danger versus working through normal adolescent angst. You might get alerts that terrify you about situations that aren't emergencies. You might not get alerts about crises because the system missed the signals. Your response to any alert will significantly impact whether your teenager ever talks to you about their real struggles.
We need to stop criticizing companies for every decision while providing no guidance about what they should do instead. The current dynamic isn't sustainable: demand perfect safety with perfect privacy, effective protection with no surveillance, and age-appropriate content with no age detection, then act shocked when every solution involves tradeoffs.
This requires honest conversation about what protecting teenagers in an AI-powered world requires and what tradeoffs we're willing to accept. Those conversations should involve parents, lawmakers, educators, mental health professionals, privacy advocates, technologists, and teenagers themselves.
Right now, we're letting companies figure this out through trial and error, with human lives as the test cases. That's not a strategy; it's abdication of collective responsibility.

The Question Nobody Wants

We're going to see more announcements like this. Every AI company with significant teen users faces the same pressure. Google, Meta, Microsoft, Anthropic, and every startup building conversational AI are all grappling with these exact questions. They'll all build something. The specifics will differ, but the fundamental tradeoffs won't change.
Here's what we should be asking ourselves: what do we want? Do we want AI companies to protect teenagers even if that requires behavioral profiling? Or do we prioritize privacy even if that means less protection? Do we want parental notification for mental health crises even knowing it sometimes makes things worse? Or do we treat AI interactions like therapy and maintain confidentiality even for minors? Do we want age-appropriate content filtering even though it requires building systems that can infer age from behavior? Or do we accept that minors and adults access the same content with the same protections?
These aren't rhetorical questions. They require answers. Not from companies but from us. From parents, lawmakers, educators, mental health professionals, privacy advocates, civil liberties organizations, and everyone else with stakes in this.
For regulators: clarify what age detection methods are legally permissible. Right now companies are guessing. Behavioral inference? Permitted under COPPA or prohibited? What about under state privacy laws? Under international frameworks like GDPR? Define standards for mental health interventions—when must companies alert parents, when must they not, what safeguards are required, should there be clinical oversight? Address the compliance patchwork—federal preemption or deference to states? Companies need to know whether they're building for California standards, Texas standards, Florida standards, or some unified baseline.
For industry professionals: don't cut trust and safety teams. These problems are getting more complex, not simpler. You need people who can navigate impossible tradeoffs with wisdom, not just engineers who can build detection systems. Invest in operational excellence, not just detection accuracy. Be honest about limitations: internally with executives who want guarantees you can't provide, and externally with users, parents, and regulators. Collaborate on frameworks. These problems aren't competitive advantages. Industry working groups developing shared approaches to age detection, mental health interventions, and cross-platform coordination would help everyone.
For parents: don't outsource this to technology. AI companies can't replace relationships with your teenagers. Talk to your teenagers about AI. They're using these systems. Have conversations about what they're discussing, what advice they're getting, how they're thinking about AI's role in their lives. And prepare for notifications. If you get an alert that your teenager expressed suicidal thoughts to an AI, will you know what to do?
Right now, we're demanding that companies thread an impossible needle: perfect safety with perfect privacy, effective protection with no surveillance, age-appropriate content with no age detection. Then we act shocked when every solution involves uncomfortable tradeoffs.
OpenAI's announcement isn't the problem. It's a symptom of a much larger challenge: we've refused to have hard conversations about what protecting teenagers in an AI-powered world requires.
Either we accept that teen safety comes with uncomfortable tradeoffs and have honest conversations about which tradeoffs we're willing to make, or we stop pretending we want meaningful protection. The current approach, demanding companies figure it out while criticizing every solution they propose, isn't working for anyone. Except maybe the lawyers. They're doing great.
The companies are already making their moves. The question is whether the rest of us will engage with the choices involved or just keep demanding impossible outcomes while avoiding responsibility for the hard decisions.
Your move, society. The companies are tired of guessing.