The Burgeoning Policy Minefield of AI
With the proposed 10-year moratorium on state AI laws failing to pass, shipping AI in the US will be more complex than ever.
The rapid acceleration of AI capabilities, particularly in the frontier and general intelligence domains, is occurring alongside a fragmented and unstable regulatory environment. The laws governing AI are neither fully formed nor unified. Instead, what exists is a patchwork of proposed, failed, enacted, and repealed legislation at both the state and federal levels, with significant variation in terminology, enforcement mechanisms, and normative assumptions. For teams building toward AGI, or just shipping AI systems, this presents a practical and strategic dilemma: the timeline to deploy is shrinking, but the governance perimeter is expanding in unstable and often contradictory ways.
What It Will Mean to Ship AI
To launch any product-level system that utilizes frontier models, or even AI at scale, developers will need to navigate far more than model optimization and evals. They will need legal input at every design milestone, jurisdictional mapping of downstream risks, and technical controls that satisfy not just safety benchmarks but also compliance documentation.
What this implies is that AI system architecture must evolve to produce structured evidence of constraint, accountability, and interpretability. This is not a theoretical exercise. The Colorado Artificial Intelligence Act (CAIA), passed in 2024 and taking effect February 1, 2026, imposes specific obligations on developers of "high-risk AI systems," including impact assessments and documentation of mitigation steps. The moment a system touches employment, education, or public-service contexts, the burden shifts onto the developer to demonstrate lawful behavior.
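To make this concrete, here is a minimal sketch of what an impact-assessment artifact could look like as structured data, versioned alongside the model it describes. The field names are my own illustration, not a schema CAIA prescribes.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class ImpactAssessment:
    """Hypothetical evidence record stored next to a model version."""
    model_id: str
    model_version: str
    use_case: str                      # e.g., "resume screening"
    affected_groups: list
    foreseeable_failures: list
    mitigations: list
    reviewer_sign_offs: list
    assessment_date: str = field(default_factory=lambda: date.today().isoformat())


assessment = ImpactAssessment(
    model_id="screening-model",
    model_version="2.3.1",
    use_case="employment screening",
    affected_groups=["job applicants"],
    foreseeable_failures=["disparate selection rates across protected classes"],
    mitigations=["threshold calibration against the fairness eval suite",
                 "human review of all rejections"],
    reviewer_sign_offs=["legal", "ml-safety"],
)

# Persist the record next to the model artifact so an audit can pair them.
with open(f"impact_assessment_{assessment.model_version}.json", "w") as f:
    json.dump(asdict(assessment), f, indent=2)
```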
Twin Forces: State Proliferation vs. Federal Centralization
The regulatory landscape in the United States is defined by two competing dynamics. At the state level, legislative activity has surged. More than 600 AI-related bills have been introduced across all 50 states in 2025 alone. Roughly 75 of these have been enacted or adopted, and many more remain pending or in study status. These include regulations targeting deepfakes, synthetic content, AI-assisted decision-making, model explainability, and transparency in training data usage.
By contrast, the federal government, while nominally committed to a central AI governance strategy, remains institutionally reactive. The Biden administration's Executive Order on Safe, Secure, and Trustworthy AI laid groundwork in 2023, but its authority was largely normative, not preemptive, and it has since been rescinded. A recent attempt by Senator Ted Cruz to introduce a 10-year federal moratorium on state-level AI laws, effectively centralizing control under federal oversight, was stripped from budget negotiations in mid-2025. The White House has since responded with a new AI Action Plan intended to reassert federal influence, but it lacks enforcement mechanisms and has not altered the trajectory of state-level experimentation.
Signal vs. Noise
In this environment, the primary challenge is interpretation. Many bills are introduced with little chance of passing. Others are passed but lack teeth. Some will be repealed before they are enforced. This makes it operationally unsound to react to every headline or proposed bill.
Instead, what’s required is a model of governance engineering: identifying early, durable signals that are likely to crystallize into enforceable norms. This includes tracking repeated language across state proposals, studying which bills are being replicated in multiple jurisdictions (e.g., the influence of CAIA on legislation in New Jersey and Illinois), and mapping public-private alignment (such as NIST’s Risk Management Framework being cited in state bills and private sector evaluations).
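One crude way to surface those repeated signals is to compare bill texts for shared phrasing. The sketch below scores pairs of bills by word n-gram overlap; the snippets are invented placeholders, and a real pipeline would pull full texts from legislature sites or a bill-tracking service.

```python
from itertools import combinations


def ngrams(text: str, n: int = 5) -> set:
    """Word n-grams, a crude proxy for recurring statutory language."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def overlap(a: str, b: str) -> float:
    """Jaccard similarity of the two bills' n-gram sets."""
    ga, gb = ngrams(a), ngrams(b)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)


# Placeholder snippets, not actual statutory text.
bills = {
    "CO SB24-205": ("a developer of a high-risk artificial intelligence system "
                    "shall use reasonable care to protect consumers from "
                    "algorithmic discrimination"),
    "CT SB-2": ("a developer of a high-risk artificial intelligence system "
                "shall use reasonable care to avoid algorithmic discrimination "
                "in consequential decisions"),
    "Unrelated bill": ("an operator of a website shall post a privacy notice "
                       "describing categories of personal data collected"),
}

for (name_a, text_a), (name_b, text_b) in combinations(bills.items(), 2):
    print(f"{name_a} vs {name_b}: {overlap(text_a, text_b):.2f}")
```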
AI Regulatory Constants
Despite the noise, some points of agreement are emerging across regulatory proposals, industry practices, and academic frameworks:
Interpretability: There is a near-universal expectation that high-impact AI systems must be explainable. This applies to both technical transparency and user-facing disclosures.
Traceability and auditability: There is increasing demand for lineage documentation, from dataset provenance to model versioning and downstream usage logs.
Risk-tier classification: More laws are moving toward a tiered structure that mirrors data privacy regimes (e.g., GDPR), distinguishing between general-purpose tools and high-risk systems.
Evaluation infrastructure: Formal evaluations, including red teaming and alignment metrics, are no longer optional in high-risk deployments. They are fast becoming de facto regulatory artifacts (a minimal sketch of gating a release on such artifacts follows this list).
Mitigation documentation: States are beginning to demand explicit articulation of what mitigations were considered, selected, and deployed, even if no harm has occurred.
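A rough way these constants could translate into engineering practice is a release gate keyed to risk tier: the higher the tier, the more evidence artifacts must exist before a system ships. The tier names and artifact list below are illustrative assumptions, not taken from any single statute.

```python
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"          # e.g., employment, credit, healthcare decisions


# Illustrative mapping of tier -> evidence expected before release.
REQUIRED_ARTIFACTS = {
    RiskTier.MINIMAL: {"model_card"},
    RiskTier.LIMITED: {"model_card", "eval_report"},
    RiskTier.HIGH: {"model_card", "eval_report", "red_team_report",
                    "impact_assessment", "mitigation_log"},
}


def release_gate(tier: RiskTier, artifacts: set) -> list:
    """Return artifacts still missing for this tier (empty list = clear to ship)."""
    return sorted(REQUIRED_ARTIFACTS[tier] - artifacts)


missing = release_gate(RiskTier.HIGH, {"model_card", "eval_report"})
print("Blockers:", missing)   # ['impact_assessment', 'mitigation_log', 'red_team_report']
```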
AI will not be shipped into a vacuum. It will be shipped into a legal and political environment that is inconsistent, territorial, and ideologically unstable. For those building these systems, the challenge is not only technical alignment or safety, but regulatory foresight. Compliance, in this context, must be anticipatory, modular, and resilient to change. The question is no longer whether AI will be regulated, but which policies will calcify fast enough to block, delay, or retroactively penalize progress.
AI will need to be aligned with an unstable patchwork of laws, many of which disagree with each other. Navigating that is not a legal task alone. It is a product architecture problem. And it begins now.
A Comprehensive List of Potential Shipping Requirements
To show what shipping would look like if every pending bill became law, I compiled a 200‑item checklist of required actions. At a glance: you’d run risk reviews with clear notices and a right to appeal (Colorado); build and prove controls the way NIST describes to claim safe‑harbor (Texas); publish a public page summarizing each model’s training data (California); get independent bias audits for hiring tools (NYC); label or watermark AI content and follow state election‑season rules; keep an AI‑system inventory for government buyers (New York); report serious “frontier‑model” safety incidents within 72 hours (New York, pending); treat AI voices in robocalls as illegal without consent (FCC); and get consent before cloning voices or images (Tennessee). The awkward bits are obvious: trying to prove the origin of every AI image/video everywhere, juggling both 72‑hour and 90‑day incident clocks, flipping on/off state‑specific political rules in real time, and, if California passes its auditor bill, using only state‑enrolled auditors.
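Those incident clocks are exactly the kind of thing worth encoding rather than remembering. A minimal sketch, assuming the clock starts at discovery in both cases; the actual trigger definitions differ by statute and need a lawyer's reading, so treat the table as a placeholder.

```python
from datetime import datetime, timedelta, timezone

# Illustrative deadlines only; real triggers and clocks come from counsel.
INCIDENT_CLOCKS = {
    "ny_frontier_safety_incident": timedelta(hours=72),   # NY RAISE Act (pending)
    "co_algorithmic_discrimination": timedelta(days=90),  # Colorado SB24-205
}


def reporting_deadline(obligation: str, discovered_at: datetime) -> datetime:
    """When the report is due, given when the incident was discovered."""
    return discovered_at + INCIDENT_CLOCKS[obligation]


discovered = datetime(2026, 3, 2, 14, 30, tzinfo=timezone.utc)
for obligation in INCIDENT_CLOCKS:
    print(obligation, "due by", reporting_deadline(obligation, discovered).isoformat())
```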
Enacted AI Regulations for Model Builders
The “Enacted” section below mixes (a) laws already on the books, (b) regulations adopted but not yet in force, and (c) guidance and procurement rules that are not statutes, all as of July 28, 2025.
Risk Management and Assessments
Maintain a written AI risk-management program for high-risk systems: Describe objectives, risks, controls, owners, and review calendar; store in a repo and keep current. (Colorado SB24-205, effective Feb 1, 2026)
Complete an impact assessment before any use materially affecting a person: Define use-case, affected groups, metrics, foreseeable failures, chosen mitigations, and sign-offs; save with the model/version. (Colorado SB24-205, effective Feb 1, 2026)
Run pre-release evaluations against release criteria: Cover capabilities, misuse/safety, fairness, robustness, privacy/PII; store configs, datasets, results with model/version. (Colorado SB24-205, effective Feb 1, 2026)
Monitor post-release for drift/new risks; prepare rollbacks: Set thresholds triggering rollback; log alarms, decisions, outcomes (see the monitoring sketch at the end of this subsection). (Colorado SB24-205, effective Feb 1, 2026)
Map program to NIST AI RMF; keep evidence packs: Show artifacts proving controls (assessments, evals, logs, approvals). (Texas HB 149 TRAIGA, effective Jan 1, 2026)
Run red-team program pre-launch: Test for injections, jailbreaks, abuse; track mitigations to closure. (NIST RMF practices, aligned with CO/TX; effective varies)
Align to NIST RMF for safe-harbor: Control-evidence map. (Texas HB149 TRAIGA, effective Jan 1, 2026)
Build fail-safe modes/human interlocks for critical infrastructure AI: Provide verifiable shutdown, drill procedures, keep telemetry. (Montana SB 212, effective Oct 1, 2025)
Implement safe-state controls for critical infrastructure: Human-override/shutdown, drills, telemetry. (Montana SB 212, effective Oct 1, 2025)
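A sketch of the post-release monitoring item above: compare live metrics to thresholds derived from the pre-release baseline, log the decision either way, and roll back on a breach. The metric names and threshold values are placeholders, not figures any statute sets.

```python
import json
from datetime import datetime, timezone

# Placeholder thresholds; real values would come from the pre-release baseline
# recorded in the evidence pack.
ROLLBACK_THRESHOLDS = {"selection_rate_gap": 0.10, "error_rate": 0.05}


def check_release(metrics: dict, log_path: str = "rollback_decisions.jsonl") -> bool:
    """Return True if the release stays up; log the alarm and decision either way."""
    breaches = {k: v for k, v in metrics.items()
                if k in ROLLBACK_THRESHOLDS and v > ROLLBACK_THRESHOLDS[k]}
    decision = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "metrics": metrics,
        "breaches": breaches,
        "action": "rollback" if breaches else "keep",
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(decision) + "\n")
    return not breaches


keep = check_release({"selection_rate_gap": 0.14, "error_rate": 0.03})
print("keep release:", keep)   # False -> run the documented rollback procedure
```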
Transparency and Disclosures
Disclose AI use in consequential decisions: Provide on-screen or written notice at decision time and in policy docs. (Colorado SB24-205, effective Feb 1, 2026)
Publish plain-language public summary for high-risk systems: Include purpose, decision domains, data categories, known limits/risks, user rights (notice, correction, appeal), and contacts. (Colorado SB24-205, effective Feb 1, 2026)
Publish transparency note: Explain capabilities, uses, settings, evals/results, update history. (NIST RMF, NY procurement; effective varies)
Disclose when interacting with automated system: Tell user it’s AI; log acknowledgment. (Colorado SB24-205, effective Feb 1, 2026)
Publish change-log for safety-relevant updates: Record date/version, changes, why, user impact. (Colorado SB24-205, effective Feb 1, 2026)
Publish training-data dossier for generative models sold in CA: High-level sources, modalities, synthetic data use; update after changes. (California AB 2013, effective Jan 1, 2026)
Publish model limitations: Domains, data, risks, rights. (Colorado AI Act, effective Feb 1, 2026)
Maintain public AI disclosures page: Bot notices, data summaries, changes. (CA AB2013/CO AI Act, effective Jan/Feb 2026)
Publish canonical public summary page per system: Link from UI/support. (CO SB24-205, effective Feb 1, 2026)
Publish known limitations/unsafe affordances: Notify customers on changes. (CO AI Act, effective Feb 1, 2026)
Disclose bot in CA for sales/vote influence: Clear label in interaction. (CA B.O.T. Act SB1001, effective now)
Utah consumer disclosures for GenAI: Disclose on request; pre-disclose for high-risk (legal/medical/financial). (Utah AI Policy Act Amendments, effective May 7, 2025)
Post consumer-facing AI notices beyond CO/UT: Map/surface disclosures in UI per jurisdiction. (Various states, effective now)
Keep records proving bot-disclosure: Log contexts (persuasion/election), notice shown, timestamps (a logging sketch follows this subsection). (CA B.O.T. Act, effective now)
Utah: Pre-disclose in high-risk interactions: Proactive oral/written at start for legal/medical/financial. (Utah AI Policy Act Amendments, effective May 7, 2025)
Add AI disclosures for high-risk in UT: Mental-health chatbot safeguards. (UT SB226/SB332/HB452, effective May 7, 2025)
Honor UT disclosure regime: On request say AI; proactive for high-risk; mental-health safeguards. (UT SB226/SB332, effective May 7, 2025)
Register regulated AI interactions: Force disclosures for categories. (UT AIPA Amendments, effective May 7, 2025)
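For the bot-disclosure record-keeping item above, the simplest durable artifact is an append-only log of every notice shown. A minimal sketch; the field names are my own choice, not anything the B.O.T. Act specifies.

```python
import json
from datetime import datetime, timezone


def log_bot_disclosure(session_id: str, context: str, notice_text: str,
                       log_path: str = "bot_disclosures.jsonl") -> None:
    """Append one record of which notice was shown, in what context, and when."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session_id": session_id,
        "context": context,          # e.g., "sales" or "election"
        "notice_text": notice_text,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")


log_bot_disclosure(
    session_id="abc-123",
    context="sales",
    notice_text="You are chatting with an automated assistant.",
)
```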
Appeals, Corrections, and Explanations
Offer human appeal and review for AI decisions: Allow appeals, route to trained reviewers, track turnaround, and record human decisions. (Colorado SB24-205, effective Feb 1, 2026)
Allow data correction requests: Expose correction flow, show resolution status, keep audit log. (Colorado SB24-205, effective Feb 1, 2026)
Provide plain-language explanations for decisions: Make usable by non-experts/reviewers; save explanation artifact. (Colorado SB24-205, effective Feb 1, 2026)
Publish explanations for significant decisions: Plain-language logic/data summaries on request. (CO Privacy Rules, effective now)
Incident Reporting and Discrimination
Detect, log, and fix algorithmic discrimination; notify regulators: If system caused or likely caused unlawful discrimination, notify Colorado AG within 90 days with evidence. (Colorado SB24-205, effective Feb 1, 2026)
Prep discrimination-incident notification: 90-day workflow. (CO SB24-205, effective Feb 1, 2026)
Audits and Bias Testing
Bias-audit hiring tools in NYC: Annual independent audit, publish summary, give notices. (NYC Local Law 144 AEDT, effective 2023)
Publish AEDT audit summaries in NYC: Post date/method/results on careers site. (NYC Local Law 144 AEDT, effective 2023)
NYC AEDT penalties awareness: Per-violation fines; treat notice/audit as separate. (NYC Local Law 144 AEDT, effective 2023)
For NYC automated hiring: Bias audit, notices, opt for non-AI. (NYC Local Law 144 AEDT, effective 2023)
Post audit summaries/notices for NYC hiring. (NYC AEDT, effective 2023)
Records and Documentation
Keep signed change-control records: Record approvals, changes, why, when; attach diffs to evidence pack. (Texas HB 149 TRAIGA, effective Jan 1, 2026)
Keep internal dataset lineage/license records: Save snapshots, licenses/terms, synthetic ratios; run quality screens, keep results. (California AB 2013, effective Jan 1, 2026)
Keep immutable records: Store model hashes, prompts, datasets, configs, approvals, logs for retention period (see the content-addressed storage sketch after this subsection). (Colorado SB24-205, effective Feb 1, 2026)
Keep records-retention policy: Cover assessments, evals, disclosures, incidents; meet longest window. (CO AI Act/NIST, effective Feb 1, 2026)
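One way to make “immutable records” more than a policy statement is to address each evidence record by its own hash, so any later edit is detectable. A minimal sketch; the directory layout and field names are assumptions.

```python
import hashlib
import json
from pathlib import Path


def store_evidence(record: dict, root: str = "evidence_store") -> str:
    """Write a record addressed by its SHA-256 digest and never overwrite it."""
    payload = json.dumps(record, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    path = Path(root) / f"{digest}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    if not path.exists():
        path.write_bytes(payload)
    return digest


model_hash = hashlib.sha256(b"model-weights-bytes-go-here").hexdigest()
digest = store_evidence({
    "model_sha256": model_hash,
    "artifact_type": "eval_results",
    "dataset": "fairness_suite_v2",
    "approved_by": ["ml-safety"],
})
print("stored evidence record", digest)
```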
Deployer and Customer Support
Provide deployers a "deployer packet": Include impact assessment summary, model card, evaluation results, intended/unsupported uses, and exact user disclosures. (Colorado SB24-205, effective Feb 1, 2026)
Maintain "do-not-use" list: Publish unsupported/prohibited uses; block where feasible; notify customers on changes. (Colorado SB24-205, effective Feb 1, 2026)
Inventories and Procurement
Maintain internal AI system inventory: List purpose, data, oversight, evals, risks, contact; update before state sales. (NY OITS Guidance, effective now)
Generate procurement packet for gov buyers: Export inventory, assessment summary, evals, human-oversight in requested format. (NY OITS Guidance, effective now)
Keep AI system inventory for NY gov sales: Purpose, data, risks, contacts. (NY ITS Guidance, effective now)
Keep AI inventory export for NY public sales: Purpose/data/oversight/evals/risks. (NY OITS Guidance, effective now)
Anticipate AI task-force/inventories for MD public/education sales. (MD SB906/SB818, enacted 2025)
Synthetic Media and Deepfakes
Label synthetic media/disclose AI in elections where required: On-screen labels, audio notices, banners during election periods. (Various state laws, e.g., CA, MN, FL; effective varies, mostly 2024-2025)
Attach provenance metadata (e.g., C2PA) to AI-generated media: Preserve through processing/distribution. (Various state laws, e.g., FL; effective 2025)
Stand up 48-hour takedown for intimate deepfakes: Remove non-consensual (incl. AI); prevent re-uploads. (Federal TAKE IT DOWN Act, effective May 19, 2025)
Label synthetic content in political ads: Add labels, provenance; enforce in pre-election windows. (Various states e.g., FL, MI, MN; effective 2024-2025)
Deploy revenge-porn/deepfake workflow: Intake/verification, 48-hour removal, suppression. (Federal TAKE IT DOWN Act, effective May 19, 2025)
Gate political deepfake generation: Labels/bans near elections; log actions. (Various states e.g., CA, FL; effective 2024-2025)
Election-period throttles: Stricter disclosure/provenance/response behaviors; jurisdiction maps. (Various states, effective 2024-2025)
Keep elections law calendar: Track blackout periods, disclosure language; freeze features. (Various states, effective 2024-2025)
Build elections mode: Flag/label/block synthetic content per state (a gating sketch follows this subsection). (NCSL, effective 2024-2025)
Build state blackout calendars in pipeline: Block political deepfakes. (Federal Register/state, effective now)
Build NCII incident runbook: 48-hour SLA, preservation. (Federal TAKE IT DOWN Act, effective May 19, 2025)
Apply MN deepfake law for political: No distribution in windows; label if publishing. (MN Statute, effective now)
Label manipulated political media in OR: Block deceptive per SB1571. (OR SB1571, effective now)
Put AI disclaimers on political ads in MI: Block deceptive distribution. (MI Law, effective now)
Add AI disclaimers in WI political: Compliance log for auditors. (WI Law, effective now)
48-hour takedown for non-consensual intimate deepfakes: Suppress re-uploads. (Federal TAKE IT DOWN Act, effective May 19, 2025)
Track multistate deepfake election windows: Freeze/relabel (DE, MI, MN, NM, WI). (Public Citizen/NCSL, effective now)
Encode state-by-state AI-ad rules: Log compliance. (FL/MI, effective 2025)
Comply with FL political-ad AI disclosures: Add statements; maintain labeling proof. (FL Statute, effective 2025)
Track/comply with state political-ad AI disclosures: Disclaimers in ads. (FL Statute, effective 2025)
Prepare provenance labels for AI media: Origin tags where required. (FL CS/SB702, effective 2025)
Switch stricter labels for election deepfakes: In states like CA, MI, MN. (NCSL summaries, effective 2024-2025)
Block deceptive political deepfakes: Labels/bans in windows (CA, MN, FL). (State overviews, effective 2024-2025)
Embed disclaimer/provenance in FL ads: Keep logs. (FL Committee, effective 2025)
Embed disclaimer/provenance in FL political ads. (FL Statute, effective 2025)
Tie provenance to campaigns: Verifiable tags. (FL Analysis, effective 2025)
Tie provenance to political deliveries: Log per state. (NCSL, effective 2024-2025)
Keep election communications checklist: Disclaimers, penalties. (Various states, effective 2024-2025)
Log state-regulated political ads: Copy, labels, provenance, time/place. (MI/FL/WI, effective now)
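Pulling the election-window and logging items together: a sketch of a per-state gate that decides whether synthetic political content is blocked, labeled, or allowed, and logs each delivery with its label and provenance tag. The windows and duties below are placeholders, not the real statutory rules.

```python
import json
from datetime import date, datetime, timezone

# Placeholder rules; real windows and duties come from each state's statute.
STATE_RULES = {
    "MN": {"window_days": 90, "action": "block"},
    "MI": {"window_days": 90, "action": "label"},
    "FL": {"window_days": 0,  "action": "label"},   # 0 = applies year-round (placeholder)
}


def political_synthetic_action(state: str, election_day: date, today: date) -> str:
    """Return 'block', 'label', or 'allow' for synthetic political content in a state."""
    rule = STATE_RULES.get(state)
    if rule is None:
        return "allow"
    days_out = (election_day - today).days
    if rule["window_days"] == 0 or 0 <= days_out <= rule["window_days"]:
        return rule["action"]
    return "allow"


def log_delivery(state: str, action: str, asset_id: str, provenance_tag: str) -> None:
    """Record copy, label decision, provenance, and time for each regulated delivery."""
    record = {"timestamp": datetime.now(timezone.utc).isoformat(),
              "state": state, "action": action,
              "asset_id": asset_id, "provenance": provenance_tag}
    with open("political_ad_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")


action = political_synthetic_action("MN", election_day=date(2026, 11, 3), today=date(2026, 9, 1))
log_delivery("MN", action, asset_id="ad-0042", provenance_tag="c2pa-manifest-placeholder")
print(action)   # "block" inside the placeholder 90-day window
```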
Consent and Prohibitions
Obtain consent for voice/likeness cloning: No unauthorized clones; add contract/technical blocks. (Tennessee ELVIS Act, effective Jul 1, 2024)
Respect digital replica rights in entertainment: Written consent, labeling, takedown for replicas. (CA AB2602, effective Jan 1, 2025; TN ELVIS)
Avoid unauthorized digital replicas of deceased/performers: Consent/labeling required. (NY right-of-publicity, effective now; TN ELVIS)
Block AI voice robocalls without consent: Treat as artificial/prerecorded under TCPA; maintain proof. (FCC Ruling, effective Feb 2024)
Maintain robocall guardrails: Block AI voices lacking consent; scrubbing/disclosures. (FCC Ruling, effective Feb 2024)
Don’t use AI voices in robocalls: Get prior consent; no black-box excuses. (FCC Ruling, effective Feb 2024)
No AI voice robocalls without consent: Proofs required. (FCC Ruling, effective Feb 2024)
Obtain authorization for TN voice/image clones: Takedown/contract controls. (TN ELVIS Act, effective Jul 1, 2024)
Protect voice from AI clones: No deployment without consent. (TN ELVIS Act, effective Jul 1, 2024)
Add “no deceptive impersonation” guardrails. (TN ELVIS, effective Jul 1, 2024)
Respect digital replica/likeness regimes: Consent/labeling/takedown for synthesized personas. (TN ELVIS, CA AB2602; effective 2024-2025)
Maintain removal/appeal for non-consensual replicas: Rapid intake/takedown. (Various states incl. TN ELVIS, effective 2024)
Tie consent/watermarking to voice cloning: Takedown API. (TN ELVIS/Federal TAKE IT DOWN, effective 2024-2025)
Collect biometrics with consent: Written consent, retention/destruction rules (IL BIPA, TX CUBI, WA). (IL BIPA Amendments, effective Aug 2024)
Biometric compliance: Consent, retention, no sale (IL BIPA, TX CUBI, WA). (IL BIPA Updates, effective 2024)
Obtain written consent for biometrics: Publish retention/deletion; no sale. (IL BIPA/TX CUBI/WA, effective 2024)
Build biometric consent pipeline: Pre-notice, policy, non-sale. (IL BIPA/TX CUBI/WA, effective 2024)
Treat biometric enrollment strictly: Consent/retention. (WA RCW 19.375, effective now)
Honor WA My Health My Data: Consent for health data in AI; restrict sharing/deletion. (WA AG MHMDA, effective now)
Sector-Specific (Employment, Healthcare, Finance, etc.)
For employment: Provide notices, enable human review, run bias audits. Tell candidates of automated tools; allow review; conduct audits, keep reports. (NYC Local Law 144 AEDT, effective 2023)
Design hiring tools for ADA compliance: No disadvantage to disabilities; provide accommodations/alternatives. (EEOC/DOJ Guidance, effective now)
Document accommodation pathways in hiring: Non-AI alternatives on request; train reviewers for ADA. (EEOC Guidance, effective now)
Make AI tools ADA-compliant: Accommodations, alternatives for disabilities. (EEOC/DOJ Guidance, effective now)
Align with EEOC disparate-impact: Alternatives on request. (EEOC Guidance, effective now)
Give non-AI alternative for ADA: Clear request way. (EEOC/DOJ, effective now)
Analyze video with AI in IL hiring: Notice, explain, consent, delete on request. (IL AI Video Interview Act, effective now)
Give notices in IL employment AI use: No discrimination; keep records from 2026. (IL HB3773, effective 2026)
Post employee notices for AI in IL employment: Enforcement from 2026. (IL Employment AI Law, effective 2026)
Treat worker surveillance/scores as FCRA-covered: Notices/consents, adverse-action rights. (CFPB Guidance, effective now)
Treat algorithmic background checks/scores as FCRA: Consent/notices, support disputes. (CFPB Guidance, effective now)
Scrub clinical uses for discrimination: Monitor/mitigate bias; document under HHS OCR §1557. (HHS §1557 Rule, effective May 2024)
Embed clinical-AI governance: Policies for monitoring impact on protected classes; corrective actions. (HHS §1557, effective May 2024)
Avoid discriminatory healthcare algorithms: Monitor outputs, document reviews. (HHS OCR §1557, effective May 2024)
Write Section 1557 AI policy for healthcare: Monitor outcomes, document fixes, train on limits. (HHS OCR, effective May 2024)
Give specific reasons for AI credit denials: Applicant-specific, no generic boilerplate. (CFPB Guidance, effective now)
Give specific adverse-action reasons in lending: Concrete, no generics. (CFPB Circular, effective now)
Meet insurer AI governance: Follow CO DOI/NY DFS rules; governance/testing proof for underwriting/pricing. (CO DOI Rule, effective Nov 2023; NY DFS Circular, effective 2024)
Implement AI governance for insurers: Prove no unfair discrimination; assessments/controls. (NY DFS Circular, effective 2024)
For life insurers in CO: Governance for external data models; attestations. (CO DOI Regulation, effective Nov 2023)
Fold AI risks into cybersecurity programs: For financial services. (NY DFS Guidance, effective now)
NY DFS/insurer AI documentation: Explainability docs, testing artifacts, inventories, governance. (NY DFS Circular, effective 2024)
Document insurer model testing: Scripts, thresholds, remediation. (CO DOI/NY DFS, effective 2023-2024)
Prepare regulator exams for AI in underwriting: Testing scripts/datasets. (NY DFS/CO DOI, effective now)
Avoid proxies encoding protected traits in insurance: Fairness assessment. (NY DFS Circular, effective 2024)
Fold AI threats into cyber assessments: Prompt-fraud, leakage. (NY DFS, effective now)
Fold AI misuse threats into cyber programs: Expect NYDFS review. (NY DFS Letters, effective now)
Keep governance artifacts for insurers: Prove no discrimination (inputs/testing/mitigation). (CO DOI/NY DFS, effective now)
Harden AI claims: Substantiate performance/safety; avoid “bias-free” without support (FTC). (FTC Guidance, effective now)
Don’t make exaggerated AI claims: Substantiate; avoid omissions. (FTC Enforcement, effective now)
Treat deceptive AI marketing as FTC risk: Validate capabilities. (FTC Guidance, effective now)
Train marketing/legal on FTC “no AI exemption”: Pre-clear claims; maintain substantiation. (FTC Guidance, effective now)
Watch FCC for AI in political ads: Added disclosures. (FCC Proposals, ongoing 2025)
Track state AG AI enforcement under UDAP: Prepare for misrepresentation/unfairness/discrimination claims. (NAAG/FTC, effective now)
Ensure pricing/recommendation algorithms non-collusive: Retain inputs/rationale; avoid data sharing implying coordination. (Various state bills/AG actions, effective now)
TCPA compliance for AI outreach: Consent, DNC, opt-out. (FCC Rulings, effective now)
Avoid unauthorized synthetic impersonations. (NY AG Materials, effective now)
Privacy and Opt-Outs
Prepare for CA ADMT rules: Implement access/opt-out, risk assessments, logic info, cybersecurity audits. (CPPA ADMT Rules, effective Oct 2025)
Honor CO privacy profiling opt-outs: Opt-out (incl. universal signals) for significant decisions; disclose logic/human involvement. (CO Privacy Act Rules, effective now)
Configure universal opt-out signals for profiling: Honor signals (e.g., CO); document denials. (CO Privacy Act, effective now)
Set up universal opt-out handling: GPC signals, consumer requests (see the header-check sketch after this subsection). (CO Privacy Act, effective now)
Honor opt-outs for profiling in important decisions: Explain logic/data/human role. (CO CPA Rules, effective now)
Honor universal opt-out for profiling: Logic/human summaries on request. (CO Privacy Act, effective now)
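Universal opt-out signals are one of the few items here with a concrete wire format: Global Privacy Control arrives as a Sec-GPC: 1 request header. A minimal sketch of checking it before any profiling path; everything around the check is assumed.

```python
def profiling_allowed(headers: dict, user_opted_out: bool) -> bool:
    """Suppress profiling if the GPC header or a stored opt-out is present."""
    gpc = headers.get("Sec-GPC", "").strip() == "1"
    return not (gpc or user_opted_out)


request_headers = {"Sec-GPC": "1", "User-Agent": "ExampleBrowser/1.0"}
if not profiling_allowed(request_headers, user_opted_out=False):
    # Fall back to a non-profiled decision path and record the reason, so the
    # grant/denial can be documented the way the Colorado rules expect.
    print("profiling suppressed: universal opt-out signal honored")
```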
Pending Regulations
Risk Management and Assessments
Keep safety & security protocol for frontier models; public redacted version: Pre-training protocol, safety tests, security measures, governance owners. (New York RAISE Act S6953, pending governor signature by Dec 31, 2025)
Retain safety test procedures/results for frontier models: Specs, datasets, scripts, runs, outcomes for audit. (New York RAISE Act, pending)
Keep internal AI audit plan/cadence: Scope (fitness/performance/controls), schedule, track remediation/work-papers. (California AB 1405, pending)
Prepare high-risk AI controls: Reasonable care, risk/impact analyses, channel to challenge decisions. (Connecticut SB-2, passed the Senate but died in the House; could revive)
Stand up challenge channel for AI decisions: Esp. housing/credit. (Connecticut SB-2, pending)
Transparency and Disclosures
Monitor NJ media-specific AI: Disclose in content workflows, oversight. (NJ A5164, pending)
Monitor CA bot-law expansion: Broader disclosures if AB410 amends. (CA AB410, pending)
Apply bot-disclosure patterns beyond CA: Store proofs. (Multistate proposals, pending)
Appeals, Corrections, and Explanations
Follow the full ADS package if it passes: Pre-use eval, discrimination analysis/mitigations, deployer packet, independent audit from 2030, pre/post-decision notice/explanation, 30-day correction/appeal, 10-year records, compliance owner, no over-claims. (California AB 1018, pending in Senate)
Provide deployer duties support: Pre-decision notice, 5-day explanation, 30-day correction/appeal; supply packet. (California AB 1018, pending)
Incident Reporting and Discrimination
Report safety incidents within 72 hours: E.g., weight theft, loss of control; keep timer/incident packet. (New York RAISE Act, pending)
Audits and Bias Testing
Enroll independent AI auditor when required: Use enrolled third-party; independence/cooling-off; keep report/fixes. (California AB 1405, pending in Senate)
Use only enrolled independent auditors: Independence/confidentiality/whistleblower; plan timelines (enrollment by 2027). (California AB 1405, pending)
Records and Documentation
Developer duties if it passes: Permitted uses, pre-use performance/discrimination evals, deployer instructions, 10-year records, compliance owner, no over-claims. (California AB 1018, pending)
Deployer and Customer Support
None specific in pending beyond ADS packet above.
Inventories and Procurement
For education/procurement: Require AI risk disclosures/inventories. (Various states, pending)
Monitor agency “AI inventory” directives: Export in formats. (Beyond NY, pending)
Synthetic Media and Deepfakes
Add deepfake restrictions as new state laws are signed: Day-count windows before elections. (DE and other states, pending)
Tie provenance to policies: Assets carry tags end-to-end. (NCSL synthetic-media, pending expansions)
Consent and Prohibitions
Control/monitor access to model weights: Role-based, key-management, egress monitoring, kill-switch; keep logs. (New York RAISE Act, pending)
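If the RAISE Act's weight-access provisions land, the controls reduce to something auditable: an allow-list, a kill-switch, an egress threshold, and a log of every request. A minimal sketch, with all names and limits invented for illustration.

```python
import json
from datetime import datetime, timezone

KILL_SWITCH = False                                     # flipping this blocks all access
WEIGHT_READERS = {"training-service", "eval-runner"}    # role allow-list (illustrative)
MAX_EGRESS_BYTES = 10 * 1024 ** 3                       # bulk-copy threshold (placeholder)


def request_weight_access(principal: str, purpose: str, bytes_requested: int) -> bool:
    """Grant or deny weight access and append the decision to an audit log."""
    allowed = (not KILL_SWITCH
               and principal in WEIGHT_READERS
               and bytes_requested <= MAX_EGRESS_BYTES)
    entry = {"timestamp": datetime.now(timezone.utc).isoformat(),
             "principal": principal, "purpose": purpose,
             "bytes_requested": bytes_requested, "granted": allowed}
    with open("weight_access_log.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
    return allowed


print(request_weight_access("eval-runner", "pre-release evals", 2 * 1024 ** 3))   # True
print(request_weight_access("unknown-host", "bulk export", 50 * 1024 ** 3))       # False
```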
Sector-Specific
Design systems to give patient-specific reasons and allow human re-review for CT insurers. (CT proposed legislation; monitor)
Define ADS in docs: System outputting score/category/recommendation in domains (hiring/credit/healthcare/etc.). (California AB 1018, pending)