AI Legal Compliance Pt 3: U.S. Enforcement Reality—What Actually Gets Startups in Trouble

There’s no comprehensive federal AI law in the United States. No “AI Act” equivalent. No single agency with clear AI enforcement authority.

But that doesn’t mean U.S. startups are operating in a regulatory vacuum. Enforcement is happening—it’s just fragmented, unpredictable, and often comes through laws that predate AI by decades.

Here’s what the enforcement landscape actually looks like and what’s getting startups into trouble right now.

The Real U.S. Risk: Existing Laws, New Applications

The FTC, state attorneys general, and sector-specific regulators are using existing authority to go after AI-related harms. The statutes haven’t changed. The technology has. And regulators are adapting faster than most founders realize.

The laws being enforced include:

  • FTC Act Section 5: Prohibits unfair or deceptive acts or practices
  • State consumer protection acts: Give state attorneys general nearly identical authority at the state level
  • Fair Credit Reporting Act (FCRA): Governs the use of consumer reports for credit, employment, and insurance decisions
  • Equal Credit Opportunity Act (ECOA): Prohibits discrimination in lending
  • Title VII and state employment discrimination laws: Prohibit bias in hiring and employment
  • State privacy laws: California (CCPA/CPRA), Virginia, Colorado, Connecticut, and more

If your AI product touches any of these areas—and most do—you’re operating under regulatory scrutiny whether you realize it or not.

“AI Washing”: The New Deceptive Marketing Target

One of the fastest-growing enforcement areas is what regulators call “AI washing”—making unsubstantiated or misleading claims about AI capabilities.

Examples of claims that draw scrutiny:

  • “Our AI is 99% accurate”
  • “Bias-free algorithms”
  • “We don’t use your data for training”
  • “Our AI makes better decisions than humans”
  • “Fully automated, no human review needed”

The FTC has brought multiple actions in this space already, and state AGs are following suit. The theory is simple: if you make a claim about your AI’s performance, accuracy, or fairness, you need to be able to back it up with testing, documentation, and evidence.

Practical implication: Treat AI claims in your marketing the same way you’d treat security claims. If you wouldn’t say “bank-level security” without a SOC 2 report or penetration testing, don’t say “our AI eliminates bias” without documented fairness testing and ongoing monitoring.

What to review:

  • Website copy
  • Pitch decks
  • Sales collateral
  • Contract representations and warranties
  • Customer case studies

If you’re making claims about accuracy, bias, data retention, or model performance, ask: Can we substantiate this? Do we have testing data? Is this claim consistent across marketing, contracts, and product documentation?

California: The De Facto U.S. Standard

California’s privacy laws—the CCPA and its amendments (CPRA)—are increasingly focused on automated decision-making. As of January 1, 2026, California’s rules on automated decision-making technology (ADMT) require businesses to provide consumers with certain rights and transparency when AI is used to make or facilitate decisions that have legal or similarly significant effects.

The California Privacy Protection Agency (CPPA) has also signaled that risk assessments and audits may be required for certain high-risk automated decision-making systems.

What counts as ADMT?
Systems that process personal data and make or facilitate decisions about:

  • Employment (hiring, firing, promotion, compensation)
  • Credit or lending
  • Housing or insurance eligibility
  • Education or educational opportunities
  • Healthcare services
  • Criminal justice or public benefits

If your product fits one of these categories and processes California residents’ personal data, you likely have ADMT obligations.

Four Questions to Answer If You Do Automated Decision-Making

If your product uses AI for automated decision-making, you should be able to answer:

1. What inputs do we use?
What data, features, or signals feed into the decision? Where does that data come from? Is it personal data? Sensitive data?

2. What outcomes do we influence?
Are we making final decisions or providing recommendations? Which decisions are affected: hiring, lending, pricing, eligibility, access?

3. What consumer control exists?
Can consumers opt out? Request human review? Access the logic behind a decision? Correct inaccurate data?

4. What audit or risk assessment process exists?
Have we tested for accuracy and bias? Do we monitor outcomes over time? Is there a documented review process?
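
For the bias part of question 4, a common first pass is a disparate impact check using the four-fifths rule: compare each group's favorable-outcome rate against the highest group's rate and flag anything below 80%. Here's a minimal sketch in Python; the column names, groups, and data are hypothetical, and a real fairness audit needs larger samples, statistical rigor, legal review, and ongoing monitoring rather than a one-off script.

```python
# Minimal sketch of a disparate impact check using the four-fifths rule.
# Group labels, column names, and data are hypothetical; a real fairness
# audit needs larger samples, statistical testing, and legal review.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of favorable outcomes (1 = approved/selected) per group."""
    return df.groupby(group_col)[outcome_col].mean()

def four_fifths_flags(rates: pd.Series) -> pd.Series:
    """Flag groups whose rate falls below 80% of the best-performing group's rate."""
    return (rates / rates.max()) < 0.8

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

rates = selection_rates(decisions, "group", "approved")
print(rates)                     # selection rate per group
print(four_fifths_flags(rates))  # True = potential adverse impact; investigate further
```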

Even if you’re not directly covered by California law today, enterprise customers will ask these questions in procurement. And if you expand—geographically or by use case—you’ll need good answers.

Industry-Specific Hotspots

Certain industries are seeing enforcement and scrutiny much faster than others:

Fintech (credit, lending, underwriting):
Fair lending laws, ECOA, FCRA. If your AI influences credit decisions, you’re in a heavily regulated space. Expect questions about adverse action notices, disparate impact testing, and explainability.

HR Tech (recruiting, screening, performance management):
EEOC enforcement, state employment discrimination laws, California ADMT. If your product screens resumes, ranks candidates, or influences hiring decisions, you’re high-risk under both U.S. and EU frameworks.

Healthtech (diagnostics, treatment recommendations, patient triage):
FDA medical device regulation (if your AI makes clinical decisions), HIPAA (if you process protected health information), state telemedicine laws. The regulatory surface area is massive and highly specialized.

EdTech (admissions, grading, learning assessments):
FERPA (student privacy), accessibility requirements, state education regulations. If your product influences educational outcomes or access, expect scrutiny.

If you’re in one of these industries, you need industry-specific compliance advice. General AI compliance guidance won’t be enough.

Hypothetical Case Study: FTC Goes After “99% Accurate” Claims

Imagine a startup that sells an AI-powered fraud detection tool to e-commerce companies. Their website says: “Our AI catches 99% of fraud with zero false positives.”

The problem: they tested the model on a clean, labeled dataset in a lab. In production, accuracy is closer to 85%, and there are false positives that block legitimate customers.

The FTC investigates. They find:

  • The 99% claim was based on internal testing that didn’t reflect real-world conditions
  • Customers experienced material false positive rates
  • The startup didn’t update its marketing when production performance diverged from testing

The FTC brings a deceptive practices action. The startup settles, agrees to stop making unsubstantiated claims, and pays a fine.

The lesson: Test in conditions that mirror production. Update claims when performance changes. If you can’t defend the claim in an enforcement action, don’t make it.
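
One way to operationalize that lesson is to periodically recompute the metrics you advertise from labeled production outcomes and compare them against the published claim. The Python sketch below is illustrative only; the Outcome fields, thresholds, and sample data are hypothetical, and real monitoring would pull from your own logging pipeline.

```python
# Minimal sketch: recompute accuracy and false positive rate from labeled
# production outcomes and compare them with the advertised claim.
# Field names, thresholds, and sample data are hypothetical.
from dataclasses import dataclass

@dataclass
class Outcome:
    predicted_fraud: bool   # what the model said
    actual_fraud: bool      # what later investigation confirmed

def production_metrics(outcomes: list[Outcome]) -> tuple[float, float]:
    """Return (accuracy, false positive rate) over labeled production traffic."""
    accuracy = sum(o.predicted_fraud == o.actual_fraud for o in outcomes) / len(outcomes)
    legit = [o for o in outcomes if not o.actual_fraud]
    fpr = sum(o.predicted_fraud for o in legit) / len(legit) if legit else 0.0
    return accuracy, fpr

CLAIMED_ACCURACY = 0.99  # the "99% accurate" marketing claim
CLAIMED_FPR = 0.0        # "zero false positives"

sample = [Outcome(True, True), Outcome(True, False), Outcome(False, False),
          Outcome(False, True), Outcome(True, True), Outcome(False, False)]

accuracy, fpr = production_metrics(sample)
if accuracy < CLAIMED_ACCURACY or fpr > CLAIMED_FPR:
    print(f"Claim not substantiated in production: accuracy={accuracy:.0%}, FPR={fpr:.0%}")
```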

Action Item

Review your website, pitch deck, and sales collateral. Flag any claims about AI performance, accuracy, bias, or data handling. For each claim, ask:

  • Can we back this up with testing or documentation?
  • Is this claim consistent with our contracts?
  • If a regulator asked us to substantiate this, could we?

If the answer is no, soften the language or build the testing plan now.
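
One lightweight way to keep this review from being a one-time exercise is a claims register: every public claim about your AI, mapped to where it appears and the evidence behind it. The sketch below is a minimal illustration; the field names and example entries are hypothetical.

```python
# Minimal sketch of a claims register: each public AI claim mapped to the
# places it appears and the evidence that substantiates it.
# Field names and example entries are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                                          # the claim as published
    locations: list[str]                               # website, deck, contracts, docs
    evidence: list[str] = field(default_factory=list)  # test reports, audits, monitoring
    last_validated: str = ""                           # date the evidence was last refreshed

    def substantiated(self) -> bool:
        return bool(self.evidence) and bool(self.last_validated)

register = [
    Claim(
        text="Flags the large majority of fraudulent transactions in production",
        locations=["website /product", "sales deck v7", "MSA Exhibit B"],
        evidence=["2025-Q4 production evaluation report"],
        last_validated="2026-01-15",
    ),
    Claim(text="Bias-free algorithms", locations=["website /about"]),
]

for claim in register:
    if not claim.substantiated():
        print(f"Unsubstantiated claim; soften it or build the testing plan: {claim.text!r}")
```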