Australia Seeks Record Fine of Up to $450K Over AI Deepfake Porn After Mr. Deepfakes Shutdown

By Ashley Belanger – May 27, 2025
Overview
Australia’s eSafety commissioner, Julie Inman Grant, has recommended a historic fine of $400,000–$450,000 against Anthony Rotondo, a 53-year-old man who defied a federal court order by hosting AI-generated sexualized images of public figures. The proposed penalty, unprecedented in scale, aims to deter others from exploiting open-source generative AI tools to produce and distribute non-consensual deepfake pornography.
Background: Mr. Deepfakes Shutdown
Earlier this month, the controversial website Mr. Deepfakes permanently ceased operations after a critical service provider terminated its account. The platform once hosted tens of thousands of videos that used generative adversarial networks (GANs) such as StyleGAN2, and more recently diffusion models, to superimpose faces onto explicit content. With over 1.5 billion views of non-consensual deepfakes, the site exemplified the global challenge that governments and platforms now face.
Legal Battle and Proposed Sanction
Rotondo was initially ordered in December 2023 to remove AI-generated deepfake pornography targeting prominent Australian women. Instead, he forwarded the court order—complete with victims’ names—to nearly 50 email addresses, including media outlets, and uploaded new illicit content. Queensland authorities have since charged him with multiple counts of obscene publication, including images involving minors.
“He showed no remorse or contrition for his conduct,” Justice Roger Derrington noted. Rotondo had claimed that Australian injunctions were unenforceable against him while he remained in the Philippines.
Technical Deep Dive: AI Models and Hosting Infrastructure
Modern deepfake platforms employ complex AI pipelines. First, facial datasets, often scraped from social media, are preprocessed and aligned using facial-landmark detection (for example, with OpenCV). Next, encoder–decoder architectures are trained on paired data, enabling high-fidelity face swaps at resolutions up to 1080p. Cloud hosting typically relies on GPU instances (NVIDIA A100 or V100) orchestrated via Kubernetes, alongside object storage (AWS S3 or Google Cloud Storage) for rapid content delivery. Hidden services may leverage Tor or I2P networks to evade takedowns.
- GAN backbones: StyleGAN2 and CycleGAN variants
- Diffusion models: improved training stability and fewer eye and teeth artifacts
- Inference APIs: Flask or FastAPI microservices in Docker containers
- Storage: content-addressable systems keyed by SHA-256 hashes (a minimal sketch follows)
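To make the storage bullet concrete, here is a minimal Python sketch of content-addressable storage, in which each blob is filed under its SHA-256 digest; the `cas_store` directory and helper names are illustrative, not drawn from any real platform. The same property that gives these systems cheap deduplication also aids enforcement: once a digest is known to identify abusive content, exact re-uploads can be refused by hash alone.

```python
import hashlib
from pathlib import Path

STORE = Path("cas_store")  # illustrative local store root

def put(data: bytes) -> str:
    """Store a blob under its SHA-256 digest and return that digest as its address."""
    address = hashlib.sha256(data).hexdigest()
    STORE.mkdir(exist_ok=True)
    path = STORE / address
    if not path.exists():  # identical bytes are stored exactly once (deduplication)
        path.write_bytes(data)
    return address

def get(address: str) -> bytes:
    """Retrieve a blob by its content address."""
    return (STORE / address).read_bytes()

# Round trip: the address depends only on the bytes, never on a filename.
addr = put(b"example frame data")
assert get(addr) == b"example frame data"
```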
Global Legal Efforts Against Deepfake Porn
Australia isn’t alone in ramping up legislation. In the U.S., the Take It Down Act imposes fines of up to $50,000 per violation on platforms that fail to remove reported non-consensual deepfakes within 48 hours. Denmark is seeking extradition of a Canadian suspect linked to Mr. Deepfakes, potentially under defamation statutes carrying up to six months’ imprisonment. The U.K. is drafting amendments to its Online Safety Act to classify deepfake porn as a priority illegal content category, subject to mandatory rapid removal and record-keeping.
Emerging Countermeasures by Platforms and Researchers
Technology companies and academic teams are racing to develop detection and deterrence tools:
- Video Authenticator: Microsoft’s tool analyzes subtle pixel-level inconsistencies in GAN outputs.
- Cryptographic Watermarking: Embedding imperceptible signatures at frame level to trace illicit re-uploads.
- Reverse Image Search: Hash-based similarity detection using perceptual hashing (pHash) and fuzzy matching, as sketched below.
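As a concrete sketch of that last technique, the snippet below compares two frames with the open-source Python `imagehash` library, whose `phash` function implements DCT-based perceptual hashing; the file names and distance threshold are illustrative.

```python
# pip install pillow imagehash
from PIL import Image
import imagehash

# Perceptual hashes of a reported frame and a suspected re-upload
# (file names are placeholders).
reported = imagehash.phash(Image.open("reported_frame.png"))
suspect = imagehash.phash(Image.open("suspect_frame.png"))

# Subtracting two ImageHash objects yields their Hamming distance.
# Small distances survive re-encoding, resizing, and mild cropping.
distance = reported - suspect
if distance <= 8:  # tunable; 8 of 64 bits is a common starting point
    print(f"likely re-upload (distance={distance})")
else:
    print(f"no match (distance={distance})")
```

Unlike the exact SHA-256 matching sketched earlier, perceptual hashes tolerate the small transformations uploaders often apply to evade detection.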
“Open-source AI is double-edged: it democratizes creativity but also fuels harmful deepfakes at scale,” says Dr. Nivedita Rao, cybersecurity researcher at the University of Melbourne.
Impact on Victims and Mitigation Strategies
Victims of non-consensual deepfakes endure significant mental health trauma, victim-blaming, and digital harassment. Inman Grant stresses the “incalculable devastation” such content inflicts, particularly on women. Australian law now criminalizes creating and sharing deepfake pornography, with sentences of up to six years. Telecommunications providers and social media platforms are required under the Online Safety Act to implement proactive AI scanning and swift removal protocols.
Future of AI Detection and Regulatory Compliance
Looking ahead, the public and private sectors must collaborate on standardized reporting APIs, threat intelligence sharing, and mandatory developer disclosures for generative AI models. Provenance tracking, whether via content credential standards such as C2PA or blockchain-based ledgers, combined with advanced adversarial watermarking, could offer scalable defenses. As Rotondo’s case demonstrates, robust enforcement mechanisms and significant penalties are critical to deterring repeat offenders in the evolving AI landscape.
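To make the watermarking idea concrete, here is a deliberately naive Python sketch that hides a 64-bit payload in the least significant bits of one frame’s blue channel. A real deployment would use a robust, often learned, watermark that survives re-encoding; this fragile toy only illustrates the embed-then-verify workflow, and all names in it are illustrative.

```python
import numpy as np

def embed_lsb(frame: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Toy watermark: write payload bits into the least significant bit
    of the blue channel along the frame's first row. Fragile by design;
    any lossy re-encode destroys it."""
    marked = frame.copy()
    row = marked[0, : bits.size, 2]
    marked[0, : bits.size, 2] = (row & 0xFE) | bits
    return marked

def extract_lsb(frame: np.ndarray, n: int) -> np.ndarray:
    """Read back the first n payload bits."""
    return frame[0, :n, 2] & 1

# Demo on a synthetic 720p frame with a random 64-bit payload.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(720, 1280, 3), dtype=np.uint8)
payload = rng.integers(0, 2, size=64, dtype=np.uint8)
assert np.array_equal(extract_lsb(embed_lsb(frame, payload), 64), payload)
```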
This article is part of Ars Technica’s ongoing coverage of AI’s social impact and regulatory responses.