eSafety Report: Tech Giants Lag in Proactive Child Abuse Detection
The Australian eSafety Commissioner's latest periodic transparency report, covering the first half of 2025, indicates that major technology companies have made limited progress in proactively detecting live online child sexual exploitation and abuse (CSEA) and newly created harmful material on their platforms. The report, compiled from responses to mandatory transparency notices issued to firms including Apple, Discord, Google, Meta, Microsoft, and Snap, highlights ongoing safety deficiencies despite improvements in other areas.
It also underscores calls for a legislated Digital Duty of Care to enhance online child safety.
Critical Detection Gaps Identified
The eSafety Commissioner identified several key areas where major technology companies demonstrate insufficient action:
Live and Newly Created Abuse Material
The report found limited application of tools to proactively detect live or newly created online CSEA across widely used platforms and services, particularly in video calls and encrypted environments.
- Companies specifically named in this context include Meta (Facebook Messenger and WhatsApp), Microsoft (Teams, OneDrive, Outlook), Google (Meet, Chat, Messages, Gmail), Apple (including FaceTime), Discord, and Snapchat.
- Microsoft provided no further update on its pilot of AI-powered detection tools for Teams, which was noted in eSafety's August 2025 report.
Insufficient Sexual Extortion Detection
The report also found insufficient use of language analysis systems to identify cases of sexual extortion.
- Apple, Discord, Google (Chat, Meet, and Messages), Microsoft Teams, and Snap were noted as not currently using available software to detect the sexual extortion of children.
eSafety Commissioner Julie Inman Grant stated that these companies possess the necessary resources and technical capabilities to enhance safety measures for all users, including children. She emphasized that the public expects tech companies to innovate and embed safety by design, describing it as a matter of corporate conscience and accountability.
Barbie, a survivor network leader from the Philippines, said that traffickers use technology to abuse children across continents, and asserted that tech companies have not implemented adequate measures to prevent repeated live sexual abuse on their platforms.
Noted Improvements and Industry Response
Despite the identified gaps, the report acknowledged several improvements made by platforms since the first transparency report:
- Faster response times to reports of sexual abuse material.
- Improved industry-wide information sharing.
- Expansion of systems that blur sexually explicit videos and images.
- Enhanced detection tools for known (already identified) CSAM and other abusive material, including AI-generated content, online grooming, and sexual extortion.
Specific company improvements included Snap (Snapchat) reducing its child sexual exploitation and abuse moderation response time from 90 minutes to 11 minutes, and Microsoft expanding its detection of known abuse material in Outlook.
Alarming Contextual Data on Online Abuse
The Australian Centre to Counter Child Exploitation received nearly 83,000 reports of online child sexual abuse material (CSAM) in the 2024–25 financial year, marking a 41% increase from the previous year. These reports primarily originated from mainstream platforms.
A report published by IJM in partnership with Childlight East Asia & Pacific Hub indicated that 6.5% of 1,939 Australian men surveyed had engaged in or would engage in livestreamed child sexual abuse. David Braga, CEO of IJM Australia, called for greater efforts to protect children globally from Australian offenders.
Calls for Digital Duty of Care and Future Directives
The eSafety Commissioner advocates for a shift from merely recording harms to preventing them through improved design, consistent with calls for a legally mandated Digital Duty of Care. This concept, supported by UNICEF Australia's Head of Digital Policy John Livingston, aims to make tech companies legally responsible for ensuring their systems are safe by design before launch.
The Albanese Government announced its commitment to legislate a Digital Duty of Care in November 2024. Mr. Livingston also highlighted the growing risks of deepfakes and AI-generated content, while suggesting AI could contribute to detection and removal solutions.
The CSAM Deterrence Centre, a collaboration between Jesuit Social Services and the University of Tasmania, has found that proactive safety measures, such as real-time warning messages, can reduce harmful behaviors and guide users to support services like Australia’s Stop It Now! helpline.
Tech companies are required to submit further reports to eSafety in March and August 2026. Non-compliance with mandatory transparency notices can result in daily fines of up to $825,000. The eSafety Commissioner called on the industry to collaborate on developing new technologies to detect online harm and to address existing safety gaps.