States Lead AI Regulation; Federal Government Stays Away

With little momentum at the federal level, US state legislatures have become the primary architects of artificial intelligence policy, introducing a diverse array of bills to govern AI deployment across the public and private sectors. In 2025 alone, legislatures in all 50 states introduced more than 600 AI-related measures, ranging from transparency mandates to risk-management requirements.
Overview of State Activity
Congress's decisive mid-2025 rejection of a proposed moratorium on state-level AI regulation cleared the way for an unprecedented wave of state oversight. Four domains dominate these efforts:
- Government use of AI
- AI in health care
- Facial recognition and surveillance
- Generative AI and foundation models
Government Use of AI
State agencies are deploying predictive analytics for social services eligibility, criminal justice risk assessments, and traffic enforcement. Yet algorithmic decision-making can embed hidden biases and obscure accountability.
- Transparency & Disclosure: The Colorado Artificial Intelligence Act mandates public posting of fairness metrics (e.g., demographic parity, equal opportunity difference) and model error rates for any system influencing “consequential decisions.”
- Risk Management Frameworks: Montana’s “Right to Compute” law requires developers of AI systems tied to critical infrastructure to adopt risk-management controls aligned with the NIST AI Risk Management Framework, such as the SP 800-53 controls AC-3 (Access Enforcement) and SI-7 (Software, Firmware, and Information Integrity).
- Oversight Bodies: New York’s SB 8755 established a State AI Regulatory Council with authority to conduct third-party audits, using open-source tools such as IBM’s AI Fairness 360 for bias testing and MLflow for model lineage and experiment tracking (a minimal audit sketch follows this list).
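To make these audit provisions concrete, the sketch below computes two of the fairness metrics named above (statistical parity difference and the disparate impact ratio) with the AI Fairness 360 toolkit cited in the New York bill. The records, column names, and group encodings are illustrative assumptions; neither the Colorado nor the New York statute prescribes this schema.

```python
# Minimal fairness-audit sketch using IBM's open-source AI Fairness 360.
# The records, column names, and group encodings below are illustrative
# assumptions; no statute prescribes this particular schema.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical benefit decisions: 1 = granted, 0 = denied.
df = pd.DataFrame({
    "decision": [1, 0, 1, 1, 0, 1, 0, 0],
    "group":    [1, 1, 1, 1, 0, 0, 0, 0],  # 1 = privileged, 0 = unprivileged
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["decision"],
    protected_attribute_names=["group"],
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"group": 1}],
    unprivileged_groups=[{"group": 0}],
)

# Statistical (demographic) parity difference: P(granted | unprivileged)
# minus P(granted | privileged); 0.0 indicates parity.
print("Statistical parity difference:", metric.statistical_parity_difference())
# Disparate impact: the ratio of the same two selection rates.
print("Disparate impact ratio:", metric.disparate_impact())
```

Equal opportunity difference additionally requires ground-truth labels alongside model predictions, so a production audit would extend this sketch with AIF360's ClassificationMetric.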
AI in Health Care
By mid-2025, legislators in 34 states had considered over 250 bills addressing AI-driven diagnostics, treatment recommendations, and insurer claims adjudication.
- Disclosure Requirements: Bills such as Massachusetts HB 392 require health tech vendors to share algorithmic performance data (ROC curves, sensitivity/specificity) and data provenance with patients (see the metrics sketch after this list).
- Consumer Protection: Florida legislation guarantees patients the right to appeal an AI-based denial of coverage or diagnosis under the state’s Administrative Procedure Act.
- Insurer Oversight: California’s SB 458 subjects payers’ AI systems to periodic fairness audits, referencing ISO/IEC 27001 for information security.
- Clinical Use Regulation: Texas HB 1201 requires FDA-approved AI medical devices to operate with clinician “human-in-the-loop” controls and to maintain explainability logs.
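As a rough illustration of the disclosure items above, the following sketch computes sensitivity, specificity, and ROC statistics for a hypothetical diagnostic model. The labels, scores, and 0.5 decision threshold are synthetic assumptions, not anything HB 392 specifies.

```python
# Sketch of the performance data a disclosure bill like HB 392 might cover.
# Labels, scores, and the 0.5 threshold are synthetic assumptions.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score, roc_curve

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])                  # ground truth
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])  # model scores
y_pred  = (y_score >= 0.5).astype(int)                         # assumed cutoff

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate

fpr, tpr, _ = roc_curve(y_true, y_score)   # points tracing the ROC curve
auc = roc_auc_score(y_true, y_score)

print(f"Sensitivity: {sensitivity:.2f}  Specificity: {specificity:.2f}  AUC: {auc:.2f}")
```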
Facial Recognition and Surveillance
Facial recognition raises acute privacy and civil rights concerns. The Gender Shades study by Joy Buolamwini and Timnit Gebru found that commercial gender-classification systems misclassified darker-skinned women at error rates of up to 34.7%, underscoring the technology’s disproportionate impact on marginalized communities.
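The core of that finding was disaggregation: reporting error rates per subgroup rather than a single aggregate number. A minimal sketch of that evaluation style, on entirely synthetic records, might look like this:

```python
# Disaggregated error-rate reporting in the spirit of Gender Shades:
# per-subgroup misclassification rates instead of one aggregate figure.
# All records here are synthetic.
from collections import defaultdict

# (subgroup, prediction_correct) pairs from a hypothetical evaluation set.
results = [
    ("darker-skinned female", False), ("darker-skinned female", True),
    ("darker-skinned female", False), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned male", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for subgroup, correct in results:
    totals[subgroup] += 1
    errors[subgroup] += 0 if correct else 1

for subgroup, n in totals.items():
    print(f"{subgroup}: error rate {errors[subgroup] / n:.1%} ({errors[subgroup]}/{n})")
```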
- Fifteen states—including Illinois and Washington—have enacted laws limiting government and law enforcement use, requiring vendors to publish third-party bias test reports and mandating human review before enforcement actions.
- California’s AB 2497 codifies IEEE P7003 (Algorithmic Bias Considerations) and obligates agencies to retain logs of all facial recognition queries for no less than 24 months (a log-schema sketch follows).
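A retention rule like AB 2497’s can be enforced mechanically. The sketch below pairs a hypothetical query-log schema with a purge check honoring the 24-month floor; every field name here is an assumption, not the bill’s text.

```python
# Hypothetical facial-recognition query log honoring a 24-month retention
# floor like AB 2497's. The schema and field names are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION_FLOOR = timedelta(days=730)  # roughly 24 months

@dataclass
class QueryLogEntry:
    query_id: str
    agency: str
    operator_id: str
    probe_image_hash: str   # store a hash, not the raw probe image
    matches_returned: int
    human_reviewed: bool    # human review before any enforcement action
    logged_at: datetime

def may_purge(entry: QueryLogEntry, now: datetime) -> bool:
    """Entries may be purged only after the retention floor has elapsed."""
    return now - entry.logged_at >= RETENTION_FLOOR

entry = QueryLogEntry(
    query_id="q-0001", agency="Example PD", operator_id="op-17",
    probe_image_hash="sha256:0f3a...", matches_returned=3, human_reviewed=True,
    logged_at=datetime(2024, 1, 5, tzinfo=timezone.utc),
)
print("Purgeable today:", may_purge(entry, datetime.now(timezone.utc)))
```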
Generative AI and Foundation Models
The proliferation of large language models and image generators has spurred state-level disclosure laws.
- Utah’s Artificial Intelligence Policy Act compels organizations to inform users when they are interacting with chatbots or voice agents that provide advice or collect sensitive personal data; amendments in 2025 narrowed the statute’s scope to APIs handling HIPAA-regulated data.
- California AB 2013 requires developers of foundation models—those trained on datasets exceeding 1 PB—to publish detailed data usage statements, copyright clearance processes, and data cleaning methodologies on public registries (a machine-readable sketch follows this list).
- The FTC’s recent guidance (Oct 2025) further clarifies that consent must be obtained when consumer data is used to fine-tune privately hosted models.
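What would such a data usage statement look like in practice? One plausible shape, assuming a machine-readable registry, is sketched below; the field names and the registry URL are illustrative, not AB 2013’s statutory schema.

```python
# One plausible machine-readable training-data statement for an AB 2013-style
# registry. Field names and the registry URL are illustrative assumptions.
import json

data_statement = {
    "model": "example-foundation-model-v1",              # hypothetical model
    "training_data": [
        {
            "source": "web crawl snapshots",
            "approx_size_tb": 850,
            "copyright_clearance": "opt-out requests honored before training",
            "cleaning": ["deduplication", "PII scrubbing", "toxicity filtering"],
        },
    ],
    "personal_information_included": False,
    "registry_url": "https://registry.example.gov/models",  # placeholder
}

print(json.dumps(data_statement, indent=2))
```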
Technical Standards and Certification
To harmonize state requirements, national standards bodies are updating formal frameworks:
- NIST AI RMF v2.0 (Dec 2025) introduces a tiered risk evaluation system (Tier 1: Low Risk to Tier 5: Severe Risk) and prescribes controls aligned with SP 800-53 Revision 5.
- ISO/IEC 42001:2023, published in December 2023, establishes requirements for AI management systems, analogous to ISO 9001 for quality management.
- IEEE 7001-2021 (Transparency of Autonomous Systems) and IEEE 7002-2022 (Data Privacy Process) are increasingly cited in state statutes.
- Model cards (Mitchell et al., 2019) and datasheets for datasets (Gebru et al., 2021) are now explicitly required by California’s AB 2430 for any system used by more than 1,000 individuals annually (a minimal model card is sketched below).
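For teams facing an AB 2430-style mandate, a model card can be as simple as a typed record. The sketch below is a minimal example in the spirit of Mitchell et al.; the fields are a common subset, not the bill’s exact schema.

```python
# Minimal model card in the spirit of Mitchell et al. (2019). The fields
# are a common subset, not the exact schema of AB 2430 or any statute.
from dataclasses import dataclass

@dataclass
class ModelCard:
    name: str
    intended_use: str
    out_of_scope_uses: list[str]
    training_data: str
    evaluation_data: str
    metrics: dict[str, float]      # report disaggregated metrics where possible
    ethical_considerations: str
    annual_users: int              # relevant to a >1,000-user trigger

card = ModelCard(
    name="eligibility-screener-v2",   # hypothetical system
    intended_use="Pre-screen benefit applications for caseworker review",
    out_of_scope_uses=["final eligibility decisions without human review"],
    training_data="De-identified 2019-2023 case records",
    evaluation_data="Held-out 2024 cases",
    metrics={"auc": 0.88, "fnr_group_a": 0.07, "fnr_group_b": 0.11},
    ethical_considerations="Error rates differ across groups; see metrics.",
    annual_users=12_000,
)
print(card)
```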
Interstate Challenges and Compliance Strategies
AI vendors navigating this regulatory mosaic often adopt the most stringent state standard as a corporate baseline. Leading cloud providers offer compliance toolkits—including AWS Artifact, Azure Policy-as-Code, and Google Cloud’s Assured Workloads—to monitor and enforce multi-jurisdictional requirements in real time.
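The “most stringent standard wins” strategy reduces to a merge rule over per-state requirements: take the longest retention period, the shortest audit interval, and any affirmative obligation imposed anywhere. A toy version, with invented requirement names and values, looks like this:

```python
# Toy "strictest state wins" merge over per-jurisdiction requirements.
# Requirement names and values are invented for illustration.
state_requirements = {
    "CA": {"log_retention_months": 24, "audit_interval_months": 12, "human_review": True},
    "CO": {"log_retention_months": 12, "audit_interval_months": 6,  "human_review": False},
    "UT": {"log_retention_months": 6,  "audit_interval_months": 12, "human_review": True},
}

def corporate_baseline(reqs: dict[str, dict]) -> dict:
    """Longest retention, shortest audit interval, and any affirmative duty."""
    return {
        "log_retention_months": max(r["log_retention_months"] for r in reqs.values()),
        "audit_interval_months": min(r["audit_interval_months"] for r in reqs.values()),
        "human_review": any(r["human_review"] for r in reqs.values()),
    }

print(corporate_baseline(state_requirements))
# {'log_retention_months': 24, 'audit_interval_months': 6, 'human_review': True}
```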
Future Federal Developments
“We anticipate the forthcoming AI Executive Order will reference state-led approaches and encourage voluntary alignment with NIST RMF v2.0,” says Dr. Alex Rivera, senior advisor at the White House Office of Science and Technology Policy.
The administration’s draft AI Action Plan for 2026 proposes tying future federal research grants to demonstrable compliance with state and NIST frameworks, potentially incentivizing broader adoption of uniform standards.
Conclusion
Absent a unified federal statute, states continue to innovate and iterate on AI oversight, creating a complex compliance environment. Yet this patchwork also serves as a laboratory for best practices in transparency, fairness, and risk management that may ultimately inform national policy.