AI Legal Compliance Pt 2: The EU AI Act for U.S. Startups

You don’t sell into Europe. Your customers are U.S.-based. Your team is in San Francisco or New York or Austin. So why does the EU AI Act matter?

Because your customers operate globally. Because investors use EU-readiness as a diligence proxy. And because the Act’s framework—risk categories, transparency requirements, high-risk obligations—is becoming the global template for AI regulation.

Even if you never sign a contract with a European company, the EU AI Act is likely shaping your business already.

Why U.S.-Only Startups Care

Your customers operate globally.
If you sell to enterprises, they’re managing compliance obligations across jurisdictions. They will pass EU-readiness requirements downstream to you through procurement questionnaires, vendor audits, and contract terms. This is the same pattern we saw with GDPR. Companies that had no European operations suddenly found themselves answering detailed privacy questions because their Fortune 500 customers demanded it.

Investors ask “EU-ready?” as a diligence proxy.
Even U.S.-focused funds are starting to ask about AI compliance posture. “Are you EU-ready?” is shorthand for “Do you understand AI risk? Have you classified your systems? Do you have documentation?” A coherent answer signals operational maturity. A blank stare raises concerns.

The Act is becoming a global template.
The EU’s risk-based approach is influencing regulatory thinking in the U.K., Canada, Singapore, and beyond. If you build compliance muscle around these concepts now, you’re not just checking a European box—you’re future-proofing for wherever regulation heads next.

How to Read the Act: Five Questions

The EU AI Act is long, technical, and built for regulators and lawyers. For a startup audience, the most useful way to read it is as a set of questions your business must be able to answer.


Question 1: Is our system in a prohibited category?

Certain AI uses are banned outright in the EU:

  • Social scoring (the final Act covers private actors as well as public authorities)
  • Real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions)
  • Emotion recognition in workplaces or educational institutions
  • Exploitative or manipulative AI (e.g., systems that exploit vulnerabilities of children or people with disabilities)

Most startups won’t touch these categories, but it’s worth an explicit check. If your product uses emotion recognition or biometric identification, you need to understand the scope of the prohibition and whether any exceptions apply.
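A lightweight way to make that check explicit is to encode it as a short self-assessment your team revisits whenever the product changes. A minimal sketch in Python, with simplified category labels that are illustrative shorthand, not legal definitions:

    # Illustrative self-assessment for prohibited categories. The flags are
    # product questions, not legal conclusions; review the answers with counsel.
    PROHIBITED_PRACTICES = {
        "social_scoring": "Scores people based on social behavior or traits",
        "realtime_biometric_id": "Real-time remote biometric ID in public spaces",
        "emotion_recognition_work_edu": "Emotion recognition at work or school",
        "exploitative_manipulation": "Exploits vulnerabilities (age, disability)",
    }

    def prohibited_hits(product_flags: dict) -> list:
        """Return descriptions of any prohibited categories the product touches."""
        return [desc for key, desc in PROHIBITED_PRACTICES.items()
                if product_flags.get(key, False)]

    # Example: a resume-screening product that does none of these.
    flags = {key: False for key in PROHIBITED_PRACTICES}
    print(prohibited_hits(flags) or "No prohibited-category flags; continue to Question 2.")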


Question 2: Is our system “high-risk” by use case?

The Act defines high-risk AI based on where and how it’s used. If your system falls into one of these categories, you face significantly higher compliance obligations:

  • Biometrics: Identification, categorization, emotion recognition
  • Critical infrastructure: Systems that manage safety components of infrastructure
  • Education and vocational training: Tools that influence access, admissions, or assessment
  • Employment: Recruitment, screening, promotion, termination, task allocation, monitoring
  • Essential services: Credit scoring, creditworthiness assessment, emergency response dispatch
  • Law enforcement: Predictive policing, risk assessments, evidence evaluation
  • Migration and asylum: Visa processing, risk assessments for border control
  • Administration of justice: Systems that assist judicial decision-making

If your product fits one of these categories, you need:

  • A risk management system
  • Data governance and training data requirements
  • Technical documentation
  • Record-keeping and logging
  • Transparency to users and human oversight
  • Accuracy, robustness, and cybersecurity measures

Example: You’re building a recruiting screening tool that ranks candidates based on resumes. This is almost certainly high-risk under the “employment” category. You’ll need documented testing for bias, accuracy benchmarks, human review processes, and transparency to candidates about how the AI is used.
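If you want the category-to-obligation mapping in a form your team can query, a small lookup is enough. A sketch, again using simplified category labels of our own rather than the Act's exact wording:

    # Illustrative mapping from simplified Annex III-style categories to the
    # provider obligations listed above. Labels are ours, not the Act's text.
    HIGH_RISK_CATEGORIES = {
        "biometrics", "critical_infrastructure", "education", "employment",
        "essential_services", "law_enforcement", "migration_asylum", "justice",
    }

    HIGH_RISK_OBLIGATIONS = [
        "risk management system",
        "data governance and training data requirements",
        "technical documentation",
        "record-keeping and logging",
        "transparency to users and human oversight",
        "accuracy, robustness, and cybersecurity measures",
    ]

    def obligations_for(use_case: str) -> list:
        """Return the obligations triggered if the use case is high-risk."""
        return HIGH_RISK_OBLIGATIONS if use_case in HIGH_RISK_CATEGORIES else []

    print(obligations_for("employment"))      # the recruiting tool: all six apply
    print(obligations_for("marketing_copy"))  # [] - not a listed category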


Question 3: Are we a “provider,” a “deployer,” or both?

The Act distinguishes between:

  • Providers: Entities that develop or supply AI systems
  • Deployers: Entities that use AI systems under their own authority

If you build a recruitment screening tool and sell it to HR departments, you’re the provider. The HR department is the deployer. If the HR department customizes the tool heavily or makes final hiring decisions based on it, they have their own compliance obligations as a deployer.

Many startups will wear both hats depending on the use case. You might be a provider for your customer-facing product and a deployer for internal tools you use to run your business.

Understanding your role matters because obligations differ. Providers typically bear heavier documentation, testing, and conformity assessment burdens. Deployers have obligations around human oversight, monitoring, and transparency to affected individuals.
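One way to keep the roles straight as your product line grows is a simple system inventory that records which hat you wear for each system. A hypothetical sketch; the entries are examples, not a template for your actual products:

    # Hypothetical per-system inventory of Act roles.
    from dataclasses import dataclass

    @dataclass
    class AISystem:
        name: str
        is_provider: bool  # we develop or supply this system
        is_deployer: bool  # we use it under our own authority

    INVENTORY = [
        AISystem("candidate-ranking product", is_provider=True, is_deployer=False),
        AISystem("internal ticket-triage bot", is_provider=False, is_deployer=True),
    ]

    for system in INVENTORY:
        roles = [label for label, flag in [("provider", system.is_provider),
                                           ("deployer", system.is_deployer)] if flag]
        print(f"{system.name}: {' + '.join(roles)}")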


Question 4: Are we providing a general-purpose AI model (GPAI), or are we integrating one?

If you’re building foundation models (large language models, multimodal models), you have specific obligations as a GPAI provider—including technical documentation, transparency about training data, and risk assessments if your model poses systemic risk.

Most startups aren’t building foundation models. You’re integrating them. You’re using OpenAI, Anthropic, Google, or another provider’s model as part of your product.

Even as an integrator, you need to understand:

  • What obligations your upstream provider is handling (e.g., GPAI compliance)
  • What obligations flow downstream to you (e.g., transparency to end users, accuracy if you’re fine-tuning)
  • What’s in your vendor contract about liability, indemnification, and regulatory compliance

If you’re building on top of GPT-4 or Claude, your vendor is (in theory) handling GPAI compliance. But you’re still responsible for how you use the model, what data you feed it, and what decisions your system makes based on its outputs.
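Because responsibility for how you use the model stays with you, it's worth making record-keeping a default rather than a retrofit. A minimal sketch of a logging wrapper, where call_vendor_model is a placeholder for whatever SDK call your upstream provider actually exposes:

    # Minimal downstream record-keeping around a third-party model call.
    import json, time, uuid

    def call_vendor_model(prompt: str) -> str:
        raise NotImplementedError("Replace with your provider's SDK call.")

    def audited_completion(prompt: str, log_path: str = "ai_audit.jsonl") -> str:
        """Call the upstream model and keep a local record of the exchange."""
        output = call_vendor_model(prompt)
        record = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "prompt": prompt,   # consider redacting personal data before logging
            "output": output,
            "vendor": "upstream-provider",  # note who handles GPAI obligations
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return output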


Question 5: What transparency does the user deserve?

Even for systems that aren’t high-risk, the Act requires transparency in specific situations:

  • When users interact with AI systems (e.g., chatbots)
  • When content is AI-generated (deepfakes, synthetic media, automated content)
  • When emotion recognition or biometric categorization is used

Transparency isn’t optional. It’s a baseline requirement. If your customer support is powered by a chatbot, users need to know they’re talking to AI. If you’re generating marketing copy or images with AI, that should be disclosed in appropriate contexts.

The threshold isn’t “perfect transparency in every case.” It’s “reasonable transparency appropriate to the context and risk.”
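For chatbots, the simplest approach is to make disclosure a default in the response path rather than a per-feature decision. A sketch; the wording and placement of the notice are product choices, not the Act's prescribed text:

    # Illustrative first-turn AI disclosure for a chatbot.
    AI_DISCLOSURE = "You're chatting with an AI assistant."

    def wrap_reply(reply: str, first_turn: bool) -> str:
        """Prepend the disclosure on the first turn of each conversation."""
        return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply

    print(wrap_reply("Hi! How can I help with your order?", first_turn=True))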


What You Need: A Risk Classification Memo, Not a Compliance Department

Here’s the reality: early-stage startups don’t have the resources to build full-scale EU AI Act compliance programs from scratch. And you don’t need to.

What you do need is a 2–3 page risk classification memo that answers the five questions above for your product. This memo should:

  • Identify whether you’re in a prohibited or high-risk category
  • Clarify your role (provider, deployer, or both)
  • Describe your use of third-party models and what compliance your vendors are handling
  • Outline what transparency obligations apply
  • Flag any areas of uncertainty or risk

This memo becomes the document you show on customer calls, share with investors during diligence, and use internally to guide product and legal decisions. It doesn’t need to be perfect. It needs to be honest, evidence-based, and updateable as your product evolves.
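If it helps, the memo's skeleton can even live next to your code as structured data, so it stays versioned and easy to update. A hypothetical outline with placeholder answers; the substance comes from your own product review:

    # Hypothetical, version-controlled skeleton for the risk classification memo.
    # All answers below are placeholders, not real classifications.
    RISK_CLASSIFICATION_MEMO = {
        "product": "candidate-ranking tool",
        "last_reviewed": "2025-01-15",
        "q1_prohibited": {"answer": False, "notes": "No social scoring or biometrics."},
        "q2_high_risk": {"answer": True, "category": "employment"},
        "q3_role": {"provider": True, "deployer": False},
        "q4_gpai": {"builds_model": False, "vendor_handles": ["GPAI documentation"]},
        "q5_transparency": {"user_facing_ai": True, "disclosure_in_place": True},
        "open_questions": ["Confirm conformity assessment path with counsel."],
    }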

Action Item

Draft a first-pass version of the memo, even if it's only a page, answering the five questions for your core product. Share it with your co-founder, head of product, or lead investor. Ask: Does this make sense? Are we missing anything? Is this defensible?

If you can’t answer the questions, that’s valuable information too. It tells you where to focus your next conversation with counsel or your compliance owner.