August 9, 2025

How to Prepare Your AI Products for the EU AI Act: A Step-by-Step Plan

Step-by-step plan to prepare AI products for the EU AI Act—covering inventory, risk classification, QMS, Annex IV docs, transparency, oversight, and post-market monitoring.

EU AI Act, GPAI, AI Privacy, AI risk management

If you plan to sell or use AI in the EU, the EU AI Act is now the rulebook. It entered into force on August 1, 2024, with prohibited uses applying from February 2, 2025, general-purpose AI (GPAI) transparency obligations starting August 2, 2025, and the bulk of high-risk obligations landing August 2, 2026 (some product-safety-linked systems have until August 2, 2027). Knowing these dates lets you back-plan legal, technical, and documentation work instead of scrambling.

What’s your starting point? Inventory first.

Begin by building a single, living inventory of every AI system you develop or deploy that touches the EU—what it does, how it’s delivered (API, SaaS, embedded), what data it trains on and ingests, who uses it, and what decisions it influences. This is the foundation for classification, documentation, conformity assessment, and post-market monitoring later on. (The Act treats AI as product-safety regulation, so life-cycle traceability really matters.)
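
To make that concrete, here's a minimal sketch of what one inventory record could capture (the field names and example system are illustrative, not prescribed by the Act):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a living AI system inventory (illustrative fields only)."""
    name: str                      # internal product or feature name
    purpose: str                   # what it does / decisions it influences
    delivery: str                  # "API", "SaaS", "embedded", ...
    training_data: list[str] = field(default_factory=list)   # datasets used for training
    runtime_inputs: list[str] = field(default_factory=list)  # data ingested at inference time
    users: list[str] = field(default_factory=list)           # who interacts with the outputs
    eu_exposure: bool = False      # placed on the EU market or used in the EU?
    owner: str = ""                # accountable team or person

inventory = [
    AISystemRecord(
        name="cv-screening-assistant",
        purpose="Ranks job applications for recruiters",
        delivery="SaaS",
        training_data=["historical_applications_2019_2023"],
        runtime_inputs=["CV text", "job description"],
        users=["HR recruiters"],
        eu_exposure=True,
        owner="Talent Platform team",
    ),
]
```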

Map your role(s): provider, deployer, importer, distributor

Your obligations depend on your role for each system and use case. A provider develops an AI system and places it on the market; a deployer uses an AI system under its authority. You can be both—e.g., you sell a model (provider) and also use it internally (deployer). Getting this right determines which controls, notices, and filings you own.
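
A lightweight way to keep that straight is a per-system, per-context role map; the systems and contexts below are hypothetical:

```python
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

# The same system can carry different roles in different contexts.
role_map = {
    ("cv-screening-assistant", "sold to EU customers"): Role.PROVIDER,
    ("cv-screening-assistant", "used by our own HR team"): Role.DEPLOYER,
    ("third-party-chat-model", "resold under our brand"): Role.PROVIDER,  # rebranding can shift you into the provider role
}

for (system, context), role in role_map.items():
    print(f"{system} / {context}: {role.value}")
```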

Classify the risk—and flag any GPAI

Next, classify each system. Prohibited practices (certain manipulative uses, some biometric uses, social scoring) must stop. High-risk systems are either safety components under existing EU product laws or use cases listed in Annex III (e.g., employment, education, essential services, biometrics, law enforcement). Limited-risk systems mainly trigger transparency duties (e.g., telling people they’re interacting with AI or labelling synthetic media). If you provide a GPAI model, you inherit a separate set of transparency and (for “systemic risk” models) safety and cybersecurity obligations.
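
As a first-pass triage (never a substitute for legal review against the Act's definitions and Annex III), you can encode the decision order in something as simple as this sketch; the category keywords are illustrative:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified triage sets; real classification needs legal analysis.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
ANNEX_III_AREAS = {"employment", "education", "essential services",
                   "biometrics", "law enforcement"}
TRANSPARENCY_TRIGGERS = {"chatbot", "synthetic media", "emotion recognition"}

def triage(tags: set[str]) -> RiskTier:
    if tags & PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if tags & ANNEX_III_AREAS:
        return RiskTier.HIGH
    if tags & TRANSPARENCY_TRIGGERS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage({"employment"}))   # RiskTier.HIGH
```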

Run a gap analysis against the core requirements

For in-scope systems—especially high-risk—compare your current controls to the Act’s core requirements: risk management, data governance and data quality, technical documentation, logging, transparency/instructions for use, human oversight, and accuracy/robustness/cybersecurity. Treat this like a design control checklist you’ll keep updated through the product life cycle.
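
One way to track that checklist is a per-system gap record like the sketch below; the status values and evidence fields are conventions you'd define yourself, not anything mandated:

```python
# Core requirement areas for high-risk systems (Articles 9-15), tracked per system.
gap_analysis = {
    "risk_management":         {"status": "partial", "evidence": "risk register v0.3"},
    "data_governance":         {"status": "missing", "evidence": None},
    "technical_documentation": {"status": "partial", "evidence": "draft Annex IV file"},
    "logging":                 {"status": "done",    "evidence": "audit-log spec"},
    "transparency":            {"status": "partial", "evidence": "instructions for use, draft"},
    "human_oversight":         {"status": "missing", "evidence": None},
    "accuracy_robustness_cybersecurity": {"status": "partial", "evidence": "pen test 2025-Q2"},
}

open_items = [area for area, entry in gap_analysis.items() if entry["status"] != "done"]
print("Open gaps:", open_items)
```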

Stand up (or extend) your Quality Management System

High-risk providers must operate a Quality Management System (QMS) that documents policies and procedures for design controls, testing/validation, supplier management, change control, incident handling, and post-market monitoring. If you already follow ISO-style QMS for medical, automotive, or other regulated products, extend it to cover AI-specific requirements.

Assemble the Annex IV technical documentation

Create a technical file per system—think of it as your “audit-ready binder.” It should describe the system and intended purpose, data and evaluation methods, risk management and mitigations, human oversight design, cybersecurity posture, and update policy. The Commission will also provide a simplified form for SMEs; track its publication and adopt it as soon as it’s available.
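
A skeleton for that binder might look like the following; the section names paraphrase Annex IV themes and should be checked against the Act's actual wording:

```python
# Illustrative skeleton for an "audit-ready binder" per system.
annex_iv_file = {
    "system_description": {
        "intended_purpose": "...",
        "versions_and_interactions": "...",
    },
    "development_process": {
        "data_sources_and_preparation": "...",
        "evaluation_methods_and_metrics": "...",
    },
    "risk_management": {
        "identified_risks": "...",
        "mitigations": "...",
    },
    "human_oversight": {"design_and_intervention_points": "..."},
    "cybersecurity": {"threat_model_and_controls": "..."},
    "lifecycle": {"update_policy": "...", "post_market_monitoring_plan": "..."},
}
```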

Engineer for transparency and user information

Design the product so deployers and end-users actually understand capabilities, limits, data needs, and appropriate use. Separately, Article 50’s transparency obligations require telling people when they’re interacting with AI, labelling AI-generated or manipulated content (e.g., deepfakes), and disclosing use of emotion recognition/biometric categorisation where applicable. Build these disclosures into your UI, docs, and release processes now.
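
As a rough illustration of wiring disclosures into the product rather than just the docs, here's a hypothetical helper that attaches transparency metadata to an output; the wording, placement, and triggers belong in UX and legal review:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Disclosure:
    """User-facing transparency metadata attached to an AI output (illustrative)."""
    ai_interaction_notice: str
    synthetic_content_label: Optional[str] = None

def disclose(output_kind: str) -> Disclosure:
    # Hypothetical helper: final notice text and labelling rules need legal sign-off.
    notice = "You are interacting with an AI system."
    label = "AI-generated content" if output_kind in {"text", "image", "audio", "video"} else None
    return Disclosure(ai_interaction_notice=notice, synthetic_content_label=label)

print(disclose("image").synthetic_content_label)  # "AI-generated content"
```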

Make human oversight real (not performative)

Human oversight isn’t a checkbox. You need named roles, intervention points (pause/override), escalation paths, and training so people can meaningfully monitor operation and intervene on errors or misuse. Bake those controls into operations dashboards and customer guidance—not just policy PDFs.
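
Here's a hypothetical intervention point: a blocking human review step before a high-impact automated decision executes, with an escalation path when the reviewer overrides it:

```python
import logging

logger = logging.getLogger("oversight")

# Hypothetical intervention point: a human reviewer must approve before a
# high-impact automated decision is executed, and can pause or override it.
def execute_with_oversight(decision: dict, reviewer_approves) -> str:
    if decision.get("impact") == "high":
        approved = reviewer_approves(decision)   # blocking human-review step
        if not approved:
            logger.warning("Decision %s overridden by reviewer", decision["id"])
            return "escalated"                   # hand off to the escalation path
    return "executed"

result = execute_with_oversight(
    {"id": "D-1042", "impact": "high", "summary": "reject applicant"},
    reviewer_approves=lambda d: False,           # reviewer blocks the decision
)
print(result)  # "escalated"
```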

Prepare for conformity assessment, EU Declaration, and CE marking

Before placing a high-risk system on the EU market, complete the correct conformity assessment path (internal control vs. notified-body involvement depends on the product family and integration), issue the EU Declaration of Conformity, and apply CE marking. Plan time for testing evidence and tech-doc reviews; substantial modifications can trigger a fresh assessment.

Operate post-market monitoring—and know your incident clocks

Once in market, you must monitor real-world performance and feed what you learn back into risk management and updates. For serious incidents, providers generally must notify national market-surveillance authorities without undue delay once a causal link (or a reasonable likelihood of one) is established, and no later than 15 days after becoming aware of the incident; shorter windows apply for widespread or critical-infrastructure incidents and for deaths. Set up logging, triage, and legal/comms runbooks now so you can meet those deadlines.
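
A simple deadline tracker helps the triage runbook; the 15-day outer bound reflects the serious-incident rule described above, and the shorter windows should be confirmed with counsel for your specific incident categories:

```python
from datetime import date, timedelta

# Illustrative deadline tracker; confirm categories and windows with counsel.
REPORTING_WINDOWS_DAYS = {
    "serious_incident": 15,          # general outer bound
    "critical_infrastructure": 2,    # widespread / critical-infrastructure cases
    "death": 10,
}

def notification_deadline(aware_since: date, category: str) -> date:
    """Latest date to notify the market-surveillance authority."""
    return aware_since + timedelta(days=REPORTING_WINDOWS_DAYS[category])

print(notification_deadline(date(2026, 9, 1), "serious_incident"))  # 2026-09-16
```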

If you provide GPAI, hit the 2025–2027 checkpoints

From August 2, 2025, GPAI providers face transparency obligations (e.g., technical documentation, training-data summaries, and copyright compliance policies). The Commission and industry are rolling out a voluntary Code of Practice as the practical path; expect it to shape what “good” looks like even if you’re not a signatory. Models already on the market before that date face transitional arrangements into 2027.

Don’t silo privacy: align your AI Act work with GDPR (and CCPA)

The AI Act doesn’t replace privacy law. If your systems touch personal data, you still need GDPR-grade legal bases, DPIAs where required, vendor terms, data-subject rights tooling, and retention controls—and for California users, CCPA/CPRA obligations. Think of the AI Act as product-safety and governance rules that sit alongside privacy rules; teams and controls should be integrated.

A realistic internal timeline

Between now and August 2026, complete inventory and role mapping, lock risk classifications, stand up your QMS, and build out Annex IV documentation and transparency/oversight controls. If you ship high-risk AI, dry-run your conformity assessment early. If you provide GPAI, the August 2, 2025 obligations already apply: align to the Code of Practice and publish the required model documentation now. For embedded AI that’s also a safety component in sectoral regimes, plan through August 2027.

Where WALLD fits

WALLD automates the busywork: building your AI system inventory, mapping provider/deployer obligations, generating Annex IV-ready technical documentation, collecting evidence for conformity assessments, and running post-market monitoring workflows. That way, engineering and legal teams spend time on design decisions—not wrangling spreadsheets and versioned PDFs.

Disclaimer: This article is for general information and is not legal advice. Always consult qualified counsel for your specific situation.

Alex Makuch