
The EU AI Act: What Software Teams Need to Know Before August 2026

The EU AI Act takes full effect in August 2026. Here's what it requires, which AI systems are affected, and how to prepare your software.


Six Months. That’s What You’ve Got.

The EU AI Act reaches full enforcement on August 2, 2026. Every obligation for high-risk AI systems becomes binding on that date. And the penalties aren’t gentle: up to EUR 35 million or 7% of global annual revenue.

Most software teams haven’t started preparing. Over half of organizations lack even a basic inventory of AI systems running in their environment. You can’t classify risk for systems you don’t know exist.

This isn’t a future problem. Prohibited AI practices have been illegal since February 2025. If your system uses social scoring, real-time biometric surveillance in public spaces, or exploits vulnerabilities in specific groups, you’re already in violation.

Here’s what the regulation actually requires, who it applies to, and what to do about it.

The Timeline: What’s Already Active and What’s Coming

The AI Act rolls out in phases. Some are already live.

February 2, 2025: Prohibited AI practices became illegal. No social scoring, no untargeted facial recognition databases, no AI that manipulates behavior to cause harm.

August 2, 2025: Governance structures and codes of practice established. The EU AI Office began publishing guidelines and templates, and obligations for providers of general-purpose AI models started to apply.

August 2, 2026: The big one. Full application of all rules. High-risk AI systems must meet every requirement: conformity assessments, technical documentation, human oversight, transparency obligations, EU database registration.

August 2, 2027: Additional obligations for high-risk AI systems embedded in regulated products (medical devices, machinery, vehicles) under existing EU product safety legislation.

Miss August 2026 and you’re operating non-compliant systems in a live enforcement environment.

What Counts as an “AI System”? Broader Than You Think.

The Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers from its inputs how to generate outputs such as predictions, content, recommendations, or decisions.

That’s broad. It covers obvious things like large language models and computer vision. But it also captures traditional machine learning models, some rule-based systems, and advanced analytics that generate predictions or recommendations.

If your product uses any form of automated inference to produce outputs that influence decisions, it’s probably covered. Sound like a lot of modern software? It should.

The Four Risk Categories

Unacceptable risk: banned

Social scoring by governments or private companies. Real-time biometric identification in public spaces (with narrow exceptions for law enforcement). AI that exploits age, disability, or economic vulnerability. Untargeted scraping to build facial recognition databases.

These are illegal now. Not August 2026. Now.

High risk: heavy obligations

This is where most compliance work lives. High-risk AI systems include those used in:

  • Biometric identification and categorization
  • Critical infrastructure management (energy, transport, water)
  • Education and vocational training (admissions, assessments)
  • Employment (recruitment, performance evaluation, task allocation)
  • Access to essential services (credit scoring, insurance pricing)
  • Law enforcement and migration
  • Administration of justice and democratic processes

If your AI system falls into any of these categories, the obligations are heavy: quality management, risk management, technical documentation, data governance, logging, human oversight, conformity assessments, and EU database registration. That’s a substantial engineering effort. Start now.

Limited risk: transparency required

AI systems that interact with people, generate or manipulate content, or detect emotions. Chatbots must clearly disclose they’re AI. Deepfake content must carry machine-readable watermarks. Emotion recognition systems must notify users before use.

If you’re running a customer-facing chatbot, this applies to you. The fix is usually straightforward: add a clear disclosure at the start of every interaction.
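
For illustration, here’s a minimal sketch of that kind of disclosure in Python. The wording, constant, and function names are our own; the Act only requires that the disclosure be clear.

```python
# Illustrative only: the Act requires that users know they are talking
# to an AI, but does not prescribe wording. Adjust text and placement
# to your product; the names here are invented.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "You can ask to be connected to a human at any time."
)

def start_chat_session(session_id: str) -> list[dict]:
    """Open a chat session with the AI disclosure as the first message."""
    return [{
        "session_id": session_id,
        "role": "system_notice",
        "text": AI_DISCLOSURE,
    }]
```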

Minimal risk: no specific obligations

Most business software falls here: spam filters, inventory optimization, recommendation engines for non-critical applications. No specific AI Act obligations, though voluntary codes of practice are encouraged.

Provider vs. Deployer: Your Role Matters

The Act distinguishes between providers (those who develop an AI system, or have one developed, and place it on the market under their own name) and deployers (those who use AI systems under their authority in their operations).

Most SMBs are deployers. You’re using GPT-4, Claude, or an open-source model within your product. You didn’t build the foundation model.

Deployer obligations are lighter but real. You must:

  • Use the system according to the provider’s instructions
  • Ensure human oversight for high-risk systems
  • Monitor for risks during operation
  • Maintain transparency with users
  • Keep logs for the mandated retention period
  • Conduct a fundamental rights impact assessment for certain high-risk uses

Providers carry heavier obligations: conformity assessments, technical documentation, post-market monitoring, and incident reporting.

If you fine-tune a model or modify it substantially, you could be reclassified as a provider. That’s a significant liability shift. Know your role.

AI Literacy: Not Optional

Article 4 requires organizations to ensure their staff have “sufficient AI literacy” to operate AI systems competently. The Digital Omnibus proposal adjusts this requirement, but the core obligation remains.

What does “sufficient” mean? The Act doesn’t specify hours or certifications. It means your team understands how the AI systems they use work, what their limitations are, and how to maintain human oversight.

For development teams, this means understanding model behavior, bias detection, and failure modes. For business users, it means knowing when to trust AI outputs and when to override them.

Build this into your onboarding and ongoing training. Document it.

Practical Compliance Steps

Step 1: Inventory your AI. Every model, every automated decision system, every AI-powered feature. Map them. You can’t classify what you can’t see.
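
A sketch of what an inventory entry could look like if you keep it in code; the fields are a suggestion, not a format the Act prescribes.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record: the fields are a suggestion,
# not a format mandated by the AI Act.
@dataclass
class AISystemRecord:
    name: str                       # e.g. "support-chatbot"
    purpose: str                    # what the system is used for
    model: str                      # underlying model or vendor
    owner: str                      # accountable team or person
    customer_facing: bool = False
    data_sources: list[str] = field(default_factory=list)
    deployed_since: str = ""        # ISO date, if known

inventory = [
    AISystemRecord(
        name="support-chatbot",
        purpose="Answer customer questions on pricing and onboarding",
        model="GPT-4 via API",
        owner="customer-success",
        customer_facing=True,
        data_sources=["help-center articles"],
        deployed_since="2024-03-01",
    ),
]
```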

Step 2: Classify risk. For each AI system, determine the risk category. Most will fall into minimal or limited risk. The ones that don’t are the ones that need immediate attention.
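
As a starting point, a rough pre-classification helper based on the high-risk domains listed earlier in this article. It only flags candidates for proper legal review; it is not a legal determination.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified mapping of the high-risk domains named above. This only
# flags candidates for legal review; it is not a legal determination.
HIGH_RISK_DOMAINS = {
    "biometrics", "critical-infrastructure", "education", "employment",
    "essential-services", "law-enforcement", "migration", "justice",
}

def preliminary_tier(domain: str, interacts_with_people: bool) -> RiskTier:
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if interacts_with_people:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```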

Step 3: Determine your role. Are you the provider or deployer for each system? This determines your specific obligations.

Step 4: Gap analysis. For high-risk systems, compare current documentation, oversight mechanisms, and logging against the Act’s requirements. Identify gaps.

Step 5: Build the documentation. Technical documentation for high-risk systems must cover: intended purpose, design specifications, training data and methodology, performance metrics, known limitations, human oversight instructions, and risk mitigation measures.
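
One way to keep that documentation from drifting is to maintain it as a structured record versioned alongside the system it describes. A minimal skeleton mirroring the list above; the Act defines what must be covered, not this format.

```python
# Illustrative documentation skeleton mirroring the list above.
# The Act defines the required content; this structure is just one way
# to keep it in version control next to the system it describes.
technical_documentation = {
    "intended_purpose": "",
    "design_specifications": "",
    "training_data_and_methodology": "",
    "performance_metrics": {},
    "known_limitations": [],
    "human_oversight_instructions": "",
    "risk_mitigation_measures": [],
}
```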

Step 6: Implement oversight. Human-in-the-loop or human-on-the-loop for high-risk systems. Not a checkbox. A real mechanism where a human can understand, intervene, and override.
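
A sketch of one possible human-in-the-loop gate: low-confidence decisions are routed to a reviewer instead of being applied automatically. The threshold value and queue are illustrative assumptions, not requirements from the Act.

```python
# Sketch of a human-in-the-loop gate: low-confidence decisions are
# routed to a reviewer instead of being applied automatically.
# The threshold value and queue are illustrative assumptions.

REVIEW_THRESHOLD = 0.85
review_queue: list[dict] = []

def apply_decision(case_id: str, proposed: str, confidence: float) -> str:
    if confidence < REVIEW_THRESHOLD:
        review_queue.append({"case_id": case_id, "proposed": proposed})
        return "pending-human-review"
    return proposed
```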

Step 7: Set up logging. High-risk AI systems must maintain automatic logs. These logs must enable post-market monitoring and incident investigation. Retention period: appropriate to the system’s purpose, with a floor of at least six months under the Act, and longer where other EU or national law requires it.

Architecture Patterns for Compliance

Building AI Act compliance into your technical architecture doesn’t require a complete rewrite. But it does require intentional design.

Audit logging layer. Every AI inference should be logged: inputs, outputs, confidence scores, model version, timestamp. Immutable storage. This is your compliance evidence.
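
A minimal sketch of such a logging call; the field names are ours, and the local JSONL file is only a stand-in for genuinely immutable storage.

```python
import json
from datetime import datetime, timezone

# Sketch of an append-only inference log. A local JSONL file is only a
# stand-in for genuinely immutable storage (e.g. object-lock / WORM).
LOG_PATH = "inference_audit.jsonl"

def log_inference(model_version: str, inputs: dict, outputs: dict,
                  confidence: float) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "outputs": outputs,
        "confidence": confidence,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Because each record carries the model version, this also gives you most of the versioning pattern below for free.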

Explanation capabilities. For high-risk systems, you need to explain how a decision was reached. Not full interpretability for every neural network. But enough transparency that a human reviewer can understand and contest a decision.
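
The shape of this can be simple. A deliberately rule-based illustration: every decision carries the factors that drove it, so a reviewer has something concrete to contest. Feature names, weights, and reason strings are invented.

```python
# Deliberately simple, rule-based illustration: every decision carries
# the factors that drove it, so a reviewer can understand and contest it.
# Feature names, weights, and reason strings are invented.
def score_application(features: dict) -> dict:
    score, reasons = 0.5, []
    if features.get("years_experience", 0) >= 3:
        score += 0.2
        reasons.append("3+ years of relevant experience")
    if features.get("missing_documents"):
        score -= 0.3
        reasons.append("application documents incomplete")
    return {"score": round(score, 2), "reasons": reasons}
```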

Model versioning. Track which model version produced which outputs. When regulators ask questions about a specific decision, you need to reconstruct what happened.

Kill switches. Human override mechanisms for high-risk systems. The ability to shut down or bypass AI decision-making when needed.
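
A sketch of a simple kill switch: one flag routes every decision to a manual fallback. Where the flag lives (environment variable, feature-flag service, database) is an implementation choice; the function names here are hypothetical.

```python
import os

# Sketch of a kill switch: one flag routes every decision to a manual
# fallback. Where the flag lives (env var, feature-flag service,
# database) is an implementation choice; these functions are invented.
def decide(case: dict) -> str:
    if os.environ.get("AI_DECISIONS_DISABLED") == "1":
        return route_to_manual_review(case)
    return run_model(case)

def route_to_manual_review(case: dict) -> str:
    return "queued-for-human-review"

def run_model(case: dict) -> str:
    return "ai-decision"   # placeholder for the real inference call
```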

For the broader compliance context, see our pillar guide on EU compliance for software teams. If you’re building GDPR-compliant AI systems, our architecture guide covers the data protection layer.

What Happens If You Don’t Comply

The penalty structure is tiered.

Prohibited AI practices: up to EUR 35 million or 7% of global revenue. Violations of high-risk obligations: up to EUR 15 million or 3% of global revenue. Supplying incorrect information to authorities: up to EUR 7.5 million or 1% of global revenue.

For SMBs, the percentages are what matter. 7% of revenue for a company doing EUR 5 million in annual revenue is EUR 350,000. That’s not theoretical. The regulation is designed to hurt at every scale.

National market surveillance authorities will enforce the Act. In Germany, this will likely involve the BfDI (Federal Data Protection Commissioner) and sector-specific regulators. They’re building capacity now.

Don’t Wait for August

The companies that scramble in July 2026 will pay consultants premium rates for rush assessments and find themselves patching systems under pressure. The companies that start now will build compliance into their architecture, train their teams gradually, and face enforcement with confidence.

Start with the inventory. Everything else follows from knowing what AI you actually run.


Building with AI? Let’s make sure it’s compliant from the start. We design AI systems with EU AI Act requirements built into the architecture.
