With 2025 poised to be a pivotal year for child safety online, we’re shifting Quire to a biweekly format so you can stay ahead of the latest developments shaping the online experiences of young people. This change reflects our commitment to keeping you informed in real time.
Our mission remains the same: to provide timely, actionable insights into the evolving landscape of online child safety. We analyse the expectations of parents, regulators, and young people to support you in building trust and creating age-appropriate experiences.
Let us know if this new format is helpful, and feel free to share what you’d like to see in future issues!
Australia’s social media ban for U16s sends shockwaves through youth tech debates
Note: Quire does not take positions on specific legislation. Our role is to illuminate expectations, tradeoffs, and practical implications for stakeholders seeking to improve child safety online.
Last week, Australia passed the world’s first social media ban for children under 16. The move was widely celebrated by Australian parents, policymakers, and domestic media, and strongly criticised by tech giants, researchers, and civil society groups.
Some key elements to note:
- The ban targets platforms like TikTok, Facebook, Instagram, Snapchat, Reddit, and X, with fines up to AUD 50 million for non-compliance
- Gaming and messaging platforms are exempt from the ban, as are platforms that can be accessed without signing in to an account
- YouTube is also exempt, with advocacy groups, parents, and educators flagging its widespread educational value for young people
- The law lacks clear guidance around how platforms are expected to verify age, with further guidance expected in the coming months
There are many great pieces to read about whether the ban is necessary (CNBC) or effective (ABC Australia), as well as articles that call for similar measures in the US (NY Post) or portray the anguish of parents who have lost children (AP). Notably, countries like the UK and Norway have already expressed interest in similar measures, signaling a broader trend toward stricter age verification and parental control requirements worldwide.
We have frequently discussed the growing role of regulation in child safety online, and the demands it places on companies’ product and policy teams to revisit any experience that involves children as an audience:
- In February, I previewed these regulatory expectations at a Responsible Tech DC panel hosted by All Tech Is Human
- In March, we dove into the brewing storm around industry-driven vs regulator-driven protections
- In June, we discussed how regulators and industry can work together (instead of in opposition) to promote responsible product development
- In September, we explored how even industry giants can find themselves playing catch-up when unprepared for evolving regulations around youth safety
But how did we get here?
Child safety regulations reveal more than what’s in their legalese; they showcase the increasingly frayed relationship between industry, parents and educators, governments, civil society, and young people themselves. Rebuilding this eroded trust will take a significant shift in product and policy priorities at platforms.
For years, platforms primarily focused on digital literacy—teaching children and caregivers to navigate the online world more safely. While important, this approach sidestepped the real issue caregivers and regulators were flagging: the design choices embedded in platforms. Parental controls then became the centrepiece of safety efforts, allowing caregivers to monitor and restrict their children’s activities online. These tools, while helpful, did little to address broader concerns about addictive design, algorithmic ranking, and inappropriate content.
When asked what platforms themselves were doing to keep young people safe, companies would typically offer two answers: (a) new individual features or policy decisions to improve young people’s experiences, and (b) voluntary codes of conduct and industry partnerships as proof of their commitment to safety. Again, while these are valuable - and I say this as someone who worked on many of these interventions, codes of conduct, and partnerships - they did not address the root source of public frustration: a seemingly untouchable set of design choices.
Australia’s social media ban has been described as a hastily passed bill, which may be true. However, it has the support of 77% of the population, reflecting a deeper frustration among regulators and the public:
- Regulators are demanding stricter age assurance measures, more transparent algorithms, and independent audits that prove platforms are actively mitigating harm
- Caregivers and educators are seeking tools that go beyond parental monitoring. They want platforms to demonstrate they are addressing child safety risks at the source
- Young people themselves—when their voices are heard—often articulate a desire for safer, more empowering online spaces. They want platforms to provide meaningful protections without curbing their ability to explore, connect, and grow
Our next issue will explore the pressures we anticipate companies facing in 2025 and beyond. For now, it’s evident that the relationship between platforms and parents has deteriorated significantly. Rebuilding this trust will require time and transparency.
In the news
A study by Digitalt Ansvar suggests that Instagram's algorithms facilitated the proliferation of self-harm content among teens. Researchers created a network of fake profiles, including some posing as 13-year-olds, and shared increasingly graphic self-harm content. None of the images were proactively detected and removed; instead, Instagram recommended other accounts that were discussing or promoting self-harm. Instagram just rolled out its IG Teen Accounts, which we analyzed in a previous issue, yet it does not seem to have incorporated more proactive content detection or protective recommendation thresholds for teens.
TikTok announced plans to restrict the use of beauty filters for users under 18, limiting access to filters that significantly alter appearance. The impact of filters on adolescent body image and mental health resurfaces every few years - Instagram announced a similar restriction a few years ago, then reversed its position, and eventually sunset its AR filters altogether. The debate highlights growing expectations that companies both assess the psychological impact of their features on minors and implement age-appropriate restrictions.
Roblox introduced a number of youth safety updates, including blocking users under 13 from messaging others on the platform and providing parents with tools to monitor and control their children's activities. The updates build on other recent Roblox safety changes, such as parental controls and more descriptive game rating labels, and coincide with greater scrutiny of child-focused platforms like Roblox over concerns about exposure to predators and encouragement of gambling.