Rethinking AI Police Reports: Accountability Issues in Axon’s Draft One

On July 10, 2025, the Electronic Frontier Foundation (EFF) released a comprehensive investigation exposing how Axon’s Draft One—an AI-driven tool for generating police reports from body-worn camera audio—systematically deletes evidence of its own involvement. By design, the tool discards intermediate drafts and conceals which narrative segments were written by AI, raising serious concerns about transparency, accountability, and the integrity of the criminal justice process.
Key Findings by the EFF
Opaque Versioning and Audit Trails
The EFF report reveals that Draft One retains no record of initial AI-generated drafts. Unlike typical versioning systems, which store diffs or snapshots, Axon's tool permanently deletes interim reports. As a result, neither supervisors nor defense attorneys nor oversight bodies can reconstruct the evolution of a report to determine which words originated with a human officer and which were synthesized by the AI.
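For contrast, conventional version control makes differential analysis trivial once drafts are retained. Below is a minimal sketch in Python using the standard difflib module; the draft strings are hypothetical and nothing here reflects Axon's internals:

```python
import difflib

# Hypothetical texts: an AI-generated first draft and the officer's final edit.
ai_draft = "Subject appeared agitated and refused verbal commands.\n"
officer_final = "Subject appeared confused and complied after a second request.\n"

# With both versions retained, a unified diff exposes every post-AI change.
diff = difflib.unified_diff(
    ai_draft.splitlines(keepends=True),
    officer_final.splitlines(keepends=True),
    fromfile="draft_one_initial",
    tofile="officer_final",
)
print("".join(diff))
```

This is exactly the reconstruction that becomes impossible once the initial draft is purged.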
Risk of Misinterpretation and Bias
Because Draft One relies on a specialized ChatGPT variant to interpret audio that may contain slang, heavy accents, or fast-paced dialogue, it can introduce inaccuracies: mistranscribing key terms or inferring events that never occurred. The EFF warns that officers might rubber-stamp these drafts to save time, effectively outsourcing narrative control to an LLM whose only error-correction mechanism is manual editing.
Technical Architecture and Logging Details
Draft One’s pipeline begins with a speech-to-text module (based on a deep-learning model akin to DeepSpeech v0.9) that transcribes bodycam audio into text. This transcript is sent over TLS 1.3 to Axon’s backend, where an LLM—fine-tuned from a 175 billion-parameter foundation model—generates an initial narrative. While all API calls produce unique request IDs and write JSON-formatted logs to an AWS S3 bucket with server-side encryption (AES-256), the UI exposes only final narratives to users. Intermediate drafts are automatically purged, and S3 versioning is disabled by default, making forensic reconstruction of edits next to impossible.
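If an agency controlled the logging bucket itself, re-enabling versioning would be a small configuration change. The following boto3 sketch shows what that would look like; the bucket name is hypothetical, and Axon's actual backend configuration is not publicly documented:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "agency-draft-one-logs"  # hypothetical bucket name

# Retain every object version so interim drafts cannot be silently overwritten.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Enforce AES-256 server-side encryption by default, matching the
# encryption-at-rest posture described above.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```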
Challenges in Auditing and Compliance
Under frameworks such as CJIS and ISO 27001, agencies must maintain immutable logs and versioned records. Axon's practice of purging drafts conflicts with these requirements, as well as with best practices in software version control (e.g., Git) and document management systems (DMS). A security architect interviewed by Ars observed:
“Without a complete audit trail, you cannot perform a differential analysis or rollback to see what changes were made at each step.”
Exporting usage metrics requires custom scripts or tedious manual review of individual user logs—an operation that can balloon forensic effort by an order of magnitude.
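The sort of custom script the EFF describes might look like the sketch below, which aggregates per-user event counts from newline-delimited JSON logs into a CSV. The log schema (user and event fields) is assumed for illustration and is not taken from Axon's documentation:

```python
import csv
import json
from collections import Counter

def export_usage_metrics(log_path: str, csv_path: str) -> None:
    """Tally events per user from newline-delimited JSON logs (assumed schema)."""
    counts = Counter()
    with open(log_path) as logs:
        for line in logs:
            record = json.loads(line)
            counts[(record["user"], record["event"])] += 1

    with open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["user", "event", "count"])
        for (user, event), count in sorted(counts.items()):
            writer.writerow([user, event, count])

# Hypothetical file paths for illustration.
export_usage_metrics("draft_one_events.jsonl", "usage_metrics.csv")
```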
Legal and Ethical Considerations
Under the Daubert standard, expert testimony resting on scientific or technical methods must be shown to be reliable and reproducible. Professor Jane Doe of Harvard Law School cautions:
“The burden shifts to defense attorneys to prove that errors originated with the AI, but without logs, this becomes a near-impossible task.”
Moreover, the chain of custody for digital evidence is blurred when AI edits are untraceable, potentially undermining due process and eroding public trust.
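One established technique for keeping a digital chain of custody verifiable is hash chaining, in which each revision's digest incorporates its predecessor's, so any retroactive alteration breaks the chain. A minimal sketch follows; it describes a general safeguard, not any feature Axon ships:

```python
import hashlib

def chain_hash(prev_hash: str, revision_text: str) -> str:
    """Hash a revision together with its predecessor's digest (SHA-256)."""
    return hashlib.sha256((prev_hash + revision_text).encode("utf-8")).hexdigest()

# Hypothetical revision history: AI draft followed by two officer edits.
revisions = ["AI initial draft ...", "Officer edit 1 ...", "Officer final ..."]
digest = "0" * 64  # genesis value for the first link in the chain
for text in revisions:
    digest = chain_hash(digest, text)
    print(digest)  # stored alongside each retained revision
```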
Policy Recommendations and Future Outlook
- Mandate granular versioning: Require retention of every AI-generated draft in an immutable datastore.
- Enable exportable audit logs: Provide CSV/JSON exports of comprehensive usage metrics and change histories.
- Require AI attribution: Embed metadata tags and disclaimers highlighting AI-written sections (a minimal sketch of such tagging follows this list).
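To illustrate the attribution recommendation, here is a sketch of what per-segment metadata tagging could look like; the JSON schema and field names are invented for illustration, not a proposed standard:

```python
import json

# Hypothetical per-segment attribution record for a finished report.
report = {
    "report_id": "2025-000123",  # illustrative identifier
    "segments": [
        {"author": "ai", "model": "draft-one-llm",
         "text": "On arrival, officers observed ..."},
        {"author": "officer", "badge": "4521",
         "text": "I asked the subject to ..."},
    ],
    "disclaimer": "Portions of this narrative were generated by AI "
                  "and reviewed by the filing officer.",
}
print(json.dumps(report, indent=2))
```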
California’s proposed SB 345 would compel agencies to preserve drafts and disclose AI usage, while Utah’s draft regulation takes a similar—but narrower—approach. If enacted, these measures could set a national precedent for transparency in AI-assisted policing.
Industry Response and Recent Developments
In September 2025, Axon announced optional versioning toggles, allowing agencies to configure retention windows for intermediate drafts. Concurrently, the U.S. Department of Justice is evaluating pilot programs that integrate transparent LLM audit frameworks to ensure alignment with FOIA and evidence rules.
Conclusion
As AI systems like Draft One proliferate in law enforcement, robust technical safeguards, transparent audit mechanisms, and clear legal standards are essential to protect civil liberties and maintain public confidence in the justice system.