
How to Achieve EU AI Act Compliance and Build Trustworthy AI

September 11, 2025
Author

Emily Bonnie

Senior Content Marketing Manager

Reviewer

Cavan Leung

Senior Compliance Manager

If your company develops or uses AI in any way, you may be wondering: Does the EU AI Act apply to me? And if it does, how much work will compliance actually take?

The short answer is yes: if your AI systems are placed on the EU market or used within the European Union, you will need to comply. The EU AI Act, also known as the Artificial Intelligence Act, is the first comprehensive AI law in the world. It introduces strict rules based on the level of risk your systems pose, with noncompliance penalties as high as €35 million or 7% of global revenue.

That makes EU AI Act compliance both unavoidable and urgent. But it doesn’t have to be overwhelming. In this guide, we’ll walk through what the AI Act requires, who it applies to, when deadlines kick in, and most importantly, the steps your organization can take to comply.

What is the EU AI Act and why does AI governance matter?

Over the past decade, the rapid rise of AI technologies has raised concerns about bias in automated decision-making, opaque systems that lack transparency, and the growing influence of generative AI and general-purpose AI models (GPAI models) in everyday life. The European Parliament debated how these technologies could affect privacy, discrimination, public services, and even safety components in critical infrastructure.

There was also increasing recognition that the use of AI carried both opportunity and high-impact risk. Issues like social scoring, surveillance, and systemic risk across sectors highlighted the need for clear rules. To respond, the European Commission proposed the Artificial Intelligence Act, and after years of negotiation and amendments, it was formally adopted. The regulation also introduced oversight mechanisms such as the creation of an AI Office within the European Commission to coordinate enforcement and help guide consistent application across the EU.

The Artificial Intelligence Act establishes a legal framework across all 27 EU member states. It is designed to ensure that AI systems placed on the EU market and used within the EU are safe, transparent, non-discriminatory, and environmentally sustainable, while also encouraging innovation and competitiveness.

The Act is guided by several objectives, each contributing to its overarching goal of trustworthy AI:

  • Ensuring safety and fundamental rights: Protecting privacy, dignity, and non-discrimination while minimizing risks to individuals. This includes preventing AI from causing harm, whether intentional or unintended, and addressing broad vulnerabilities AI systems can create.
  • Fostering innovation: The Act balances regulation with innovation. It includes a regulatory sandbox, a controlled environment where organizations can test AI systems in collaboration with regulators before bringing them to market. This allows providers and deployers to refine systems, address compliance issues early, and experiment with new applications without immediate legal risk. The Act also encourages the development of codes of practice, voluntary guidelines that can help organizations align with best practices on issues like bias reduction, transparency, and system resilience.
  • Establishing a single market for AI: Harmonized rules reduce fragmentation, create predictability, and make it easier for businesses to scale AI across the EU.
  • Building trust: Trust is essential to adoption. When AI systems are explainable and responsibly designed, people are more likely to use them in sensitive areas such as healthcare, cybersecurity, and financial services.

The EU AI Act also creates new regulatory bodies to oversee compliance. At the EU level, the AI Office within the European Commission will coordinate enforcement and provide guidance. Complementing this, the European Artificial Intelligence Board will bring together representatives from all 27 member states to ensure consistent application of the Act.

The Board will play a central role in market surveillance, information sharing, and developing guidelines to help organizations interpret their compliance obligations. For companies operating across multiple EU countries, this centralized oversight is meant to create a more consistent and predictable regulatory environment.


Who needs to comply with the EU AI Act?

The EU has been clear that the Artificial Intelligence Act is not just about protecting European citizens. It is also about shaping the global conversation on AI governance. By creating the first binding legal framework for AI, the EU is positioning itself as a global leader in setting standards for safe, ethical, and transparent AI practices.

This means that organizations outside Europe, including US-based companies, will increasingly need to adapt to EU-driven norms. Just as GDPR influenced global data privacy practices, the AI Act is likely to influence AI law and regulation far beyond Europe’s borders.

For example, a US-based company that develops an AI-powered recruitment platform but sells it to European employers must comply. A European healthcare provider using AI diagnostic tools must comply. Even a startup that sources AI from a third party but integrates it into a product offered in the EU falls under the Act’s obligations.

This extraterritorial reach makes one thing clear: if your AI system touches the EU market, EU AI Act compliance must be part of your roadmap.

The AI Act defines six specific categories of operators, each with its own obligations:

  • Provider: An organization or individual that develops an AI system or general-purpose AI model and makes it available on the market.
  • Deployer: Any individual or entity using an AI system for professional purposes.
  • Importer: An EU-based organization that places an AI system from a non-EU provider on the EU market.
  • Distributor: An entity in the supply chain (other than providers or importers) that makes AI systems available on the EU market.
  • Authorized representative: An EU-based party mandated to carry out a provider’s obligations under the Act.
  • Product manufacturer: An entity that integrates an AI system into a product and makes it available under their name or trademark.

Providers remain the most heavily regulated, but all operators should carefully review which category applies to them, since responsibilities and penalties vary by role.

EU AI Act compliance deadlines and penalties

The EU AI Act entered into force on August 1, 2024, following its publication in the Official Journal on July 12, 2024. Its provisions are rolling out in phases to give organizations a chance to prepare:

  • February 2, 2025: Prohibitions on unacceptable risk AI systems became effective, meaning certain AI practices such as social scoring, subliminal manipulation, or emotion inference in workplaces and educational settings are now banned. AI literacy requirements that mandate training for individuals involved with AI also took effect.
  • August 2, 2025: Rules for general-purpose AI models (GPAI models) went into effect for new models placed on the market. Governance provisions, notification obligations, and penalties provisions also became enforceable. Models already on the market before August 2, 2025 must comply by August 2, 2027.
  • August 2, 2026: The Act becomes generally applicable, including high-risk AI system obligations as defined in Annex III.
  • August 2, 2027: Additional high-risk AI systems, particularly those integrated as safety components in regulated products, must comply with the AI regulation.

These dates are firm legal milestones, not suggestions. As the European Commission recently reaffirmed, there will be no delays or grace periods. Missing a deadline is a violation that can result in significant regulatory fines.

Non-compliance penalty tiers

The EU AI Act establishes a tiered system of penalties based on the seriousness of the violation:

  • Unacceptable risk AI systems: Noncompliance can lead to fines of up to €35 million or 7% of global annual turnover, whichever is higher.
  • Other major requirements (particularly for high-risk systems): Fines of up to €15 million or 3% of global annual turnover, whichever is higher.
  • Supplying incomplete, false, or misleading information to authorities: Fines of up to €7.5 million or 1% of global annual turnover, whichever is higher.

For startups and SMEs, the lower of the two amounts applies, but penalties are still significant. In addition to fines, regulators can order the withdrawal of noncompliant systems from the market, resulting in both financial and reputational damage.
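
To make the tiering concrete, here is a small worked example of how the upper bound of a fine could be computed, assuming the higher of the two amounts applies to most companies and the lower applies to SMEs, as described above. The figures and function are hypothetical illustrations, not official guidance.

```python
def max_fine(fixed_cap_eur: float, pct_of_turnover: float,
             global_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of a fine for one penalty tier: the higher of the two amounts,
    or the lower of the two for startups and SMEs."""
    amounts = (fixed_cap_eur, pct_of_turnover * global_turnover_eur)
    return min(amounts) if is_sme else max(amounts)

# Prohibited-practice tier for a company with €1 billion global annual turnover
print(max_fine(35_000_000, 0.07, 1_000_000_000))               # 70000000.0
print(max_fine(35_000_000, 0.07, 1_000_000_000, is_sme=True))  # 35000000.0
```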

Recommended reading

Comparing AI Frameworks: How to Decide If You Need One and Which One to Choose

The EU AI Act’s risk-based approach

A cornerstone of the EU AI Act is its risk-based framework, which classifies AI systems based on their potential to cause harm. The higher the level of risk, the stricter the requirements.

Understanding where your AI system falls within these risk categories is critical because it directly determines your compliance obligations.

  • Unacceptable Risk AI Systems: These systems are considered a clear threat to fundamental rights and are banned outright. Examples include cognitive behavioral manipulation, social scoring by public authorities, and real-time remote biometric identification such as facial recognition in public spaces by law enforcement (with narrow exceptions). Emotion inference is prohibited in workplaces and educational settings. If your AI system falls into this category, it cannot be legally deployed in the EU.
  • High-Risk AI Systems: These include systems that affect people’s health, safety, or fundamental rights. Examples cover critical infrastructure, education, employment and worker management, essential services, law enforcement, migration and border control, and justice and democratic processes. High-risk AI systems must meet strict requirements for risk management, data governance, transparency, human oversight, and conformity assessments.
  • Limited Risk AI Systems: These systems pose transparency risks. The main requirement is that users must be informed when they are interacting with AI, allowing them to make informed decisions. Chatbots and deepfakes are common examples.
  • Minimal or No Risk AI Systems: Most AI applications fall into this category, where obligations are light. While not regulated under the Act, organizations are encouraged to align with other laws such as GDPR and adopt voluntary codes of conduct.

Correctly classifying your systems is the first and most important step of compliance. Misclassification can result in unnecessary compliance burdens or, worse, falling short of legal requirements.

Practical steps to achieve EU AI Act compliance

Now that you understand what the Artificial Intelligence Act is and why it matters, let’s get into the how. These are the practical steps you need to take to make sure your organization meets its obligations and avoids penalties.

Complete an AI system inventory and classification

Start by creating a comprehensive inventory of all AI systems in your organization. This should include legacy tools still in use, systems under development, and any acquired from vendors.

For each system, document its purpose, functionality, data inputs, and outputs. Then classify it under the EU AI Act’s risk categories: unacceptable risk, high risk, limited risk, or minimal/no risk. Pay special attention to Annex III, which lists specific high-risk use cases like recruitment, education, law enforcement, and critical infrastructure.

This classification will determine your compliance obligations under the EU AI Act, from minimal oversight for low-risk tools to more intensive requirements for high-risk AI.
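
As a rough illustration of what that inventory and first-pass triage could look like, here is a minimal Python sketch. All field names, the triage logic, and the banned-practice list are hypothetical simplifications for this article, not definitions taken from the Act, and the final risk classification of any real system should be confirmed with legal and compliance review.

```python
from dataclasses import dataclass
from enum import Enum

# Risk tiers defined by the EU AI Act
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in the internal AI system inventory."""
    name: str
    purpose: str                            # intended purpose of the system
    data_inputs: list[str]                  # categories of input data
    outputs: list[str]                      # what the system produces or decides
    user_facing: bool = False               # users interact with AI-generated output
    vendor: str | None = None               # set if sourced from a third party
    annex_iii_use_case: str | None = None   # e.g. "employment", if applicable
    risk_tier: RiskTier = RiskTier.MINIMAL

def triage(record: AISystemRecord, banned_practices: set[str]) -> RiskTier:
    """Very simplified first-pass triage: banned practices, then Annex III use cases."""
    if record.purpose in banned_practices:
        return RiskTier.UNACCEPTABLE
    if record.annex_iii_use_case is not None:
        return RiskTier.HIGH
    # Remaining systems mainly carry transparency obligations (e.g. chatbots)
    return RiskTier.LIMITED if record.user_facing else RiskTier.MINIMAL

# Example: a recruitment screening tool sourced from a vendor and used on EU candidates
screening = AISystemRecord(
    name="CV ranker",
    purpose="rank job applicants",
    data_inputs=["CVs", "application forms"],
    outputs=["shortlist scores"],
    vendor="ExampleVendor Inc.",
    annex_iii_use_case="employment",
)
screening.risk_tier = triage(screening, banned_practices={"social scoring"})
print(screening.name, screening.risk_tier)  # CV ranker RiskTier.HIGH
```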

Implement robust risk management for high-risk AI

If a system is classified as high risk, you must establish a structured, ongoing process for risk assessments. These assessments identify foreseeable risks to health, safety, and fundamental rights, including bias in data, unfair or opaque decision-making, and potential technical failures.

Each identified risk should be evaluated for both severity and likelihood, then reduced to an acceptable level through technical safeguards, design adjustments, or human oversight measures. Risk management is not a one-time task. The EU AI Act requires continuous monitoring and updates to ensure that risks are managed throughout the system’s lifecycle.
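
One lightweight way to operationalize this is a risk register that scores each risk on severity and likelihood, before and after mitigation. The sketch below assumes a simple 5x5 scoring matrix and an organization-defined acceptability threshold; the scales, threshold, and example risks are illustrative, not prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    severity: int            # 1 (negligible) to 5 (critical)
    likelihood: int          # 1 (rare) to 5 (almost certain)
    mitigation: str
    residual_severity: int   # score after mitigation
    residual_likelihood: int

def score(severity: int, likelihood: int) -> int:
    """Combine severity and likelihood on a simple 5x5 matrix."""
    return severity * likelihood

ACCEPTABLE_THRESHOLD = 6  # example threshold set by the organization

register = [
    Risk("Bias against a protected group in training data", 5, 3,
         "Rebalance dataset and add fairness tests to the release gate", 3, 2),
    Risk("Opaque scoring confuses end users", 3, 4,
         "Add per-decision explanations and user documentation", 2, 2),
]

for risk in register:
    residual = score(risk.residual_severity, risk.residual_likelihood)
    status = "acceptable" if residual <= ACCEPTABLE_THRESHOLD else "needs further mitigation"
    print(f"{risk.description}: initial={score(risk.severity, risk.likelihood)}, "
          f"residual={residual} -> {status}")
```

Because the Act expects risk management to continue across the lifecycle, a register like this would be revisited whenever the system, its data, or its context of use changes.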

Ensure data governance and quality

High-risk AI systems rely heavily on the quality of their data. Poor or biased datasets can create discriminatory outcomes, damage trust, and put you out of compliance.

Organizations should set clear policies for how data is collected, cleaned, labeled, and validated. Data must be relevant, representative, and as free from bias as possible. Testing and validation processes should be well-documented, with attention to vulnerabilities that could distort outputs.

All data practices must also comply with GDPR and other applicable privacy laws. This is where aligning AI governance with existing cybersecurity and data protection programs can provide efficiency.
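
For teams that want a starting point, the sketch below (which assumes pandas is available) shows the kind of basic dataset checks that can be documented before training or retraining: row counts, missing values, duplicates, and outcome rates per group. The column names and data are hypothetical, and a real bias assessment would use proper fairness metrics and domain expertise rather than a single group-rate comparison.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, label_col: str, protected_col: str) -> dict:
    """Basic, documentable checks to run before training or retraining a high-risk system."""
    return {
        "rows": len(df),
        "missing_values": df.isna().sum().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        # Outcome rate per protected group: a first, crude signal of imbalance
        "label_rate_by_group": df.groupby(protected_col)[label_col].mean().to_dict(),
    }

# Hypothetical recruitment dataset with a binary "shortlisted" label
df = pd.DataFrame({
    "years_experience": [2, 5, 7, 1, 4, 6],
    "gender": ["f", "m", "f", "m", "f", "m"],
    "shortlisted": [0, 1, 1, 0, 0, 1],
})
print(data_quality_report(df, label_col="shortlisted", protected_col="gender"))
```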

Prioritize transparency and human oversight

Transparency is a core requirement of the Artificial Intelligence Act. High-risk AI systems must be designed to give deployers and end-users clear information about what the system can and cannot do, including its known risks and limitations.

Where feasible, systems should also provide explainability so that users can understand how outputs are generated. This is especially important for generative AI and complex machine learning models where decision paths are often opaque.

Effective human oversight is another non-negotiable requirement. Personnel must have the authority and the AI literacy to intervene when AI-generated outputs could cause harm. This may involve training employees, setting thresholds for intervention, or designing fail-safes that allow humans to override automated outcomes.
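
A simple pattern for human oversight is a routing gate that sends uncertain or high-impact outputs to a reviewer instead of acting on them automatically. The sketch below is illustrative only; the confidence threshold, the notion of "high impact," and the routing labels are assumptions your oversight team would need to define for your own systems.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str        # e.g. "approve" / "deny"
    confidence: float   # model confidence in [0, 1]
    high_impact: bool   # e.g. affects access to employment or essential services

CONFIDENCE_FLOOR = 0.85  # example threshold agreed with the oversight team

def route(decision: Decision) -> str:
    """Send uncertain or high-impact decisions to a human reviewer instead of auto-acting."""
    if decision.high_impact or decision.confidence < CONFIDENCE_FLOOR:
        return "human_review"   # reviewer can confirm, amend, or override the output
    return "auto"

print(route(Decision("applicant-42", "deny", 0.72, high_impact=True)))  # human_review
```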

Maintain documentation and records

Documentation is at the heart of EU AI Act compliance. Regulators and market surveillance authorities rely on records to verify that your systems meet requirements.

For every high-risk AI system, providers must prepare and maintain:

  • Technical documentation describing system design, development, training data, and testing procedures.
  • Risk management records that document identified risks, mitigation steps, and updates.
  • Operational logs that capture events during system use to support traceability.

High-risk AI providers must also issue an EU Declaration of Conformity, which is a formal legal document in which the provider attests that the system meets all relevant requirements of the Artificial Intelligence Act. This declaration must be signed by an authorized representative and kept on file for regulators (typical retention period is 10 years).
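
Operational logging can be as simple as emitting one structured, timestamped record per notable event so that system behavior can be traced later. The following sketch uses Python's standard logging module; the event types and field names are illustrative assumptions, not a format mandated by the Act.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system_events")
logging.basicConfig(level=logging.INFO)

def log_event(system_id: str, event_type: str, details: dict) -> None:
    """Write one traceable, timestamped record per system action or notable condition."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,   # e.g. "inference", "override", "malfunction"
        "details": details,
    }
    logger.info(json.dumps(record))

# Example: record a human override of an automated decision
log_event("cv-ranker-v3", "override",
          {"decision_id": "applicant-42", "reviewer": "hr-team", "reason": "incomplete data"})
```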

Complete a conformity assessment

A conformity assessment is the process of proving that your high-risk AI system complies with the Act before it can be placed on the EU market.

There are two main pathways:

  • Internal control (self-assessment): Allowed for some high-risk systems where risks are more limited. The provider prepares all documentation, verifies compliance internally, and issues the Declaration of Conformity.
  • Third-party assessment: Required for most high-risk systems, particularly those that act as a safety component of another regulated product. In these cases, a notified body (an accredited independent organization) reviews your technical documentation, risk management, testing results, and governance processes.

If the system passes, the provider can affix the CE marking, which signals to regulators and stakeholders that the AI system complies with EU law. CE marking is mandatory before the system can be sold or deployed.

Monitor and report after deployment

Compliance does not end when the AI system goes live. The EU AI Act requires ongoing post-market monitoring.

Providers of high-risk AI must establish processes to continuously track system performance, detect malfunctions, and identify new or unforeseen risks. If a serious incident occurs (for example, the system produces harmful outcomes or fails in a way that affects public services), providers are legally required to notify authorities.

Corrective action must then be taken, which may involve retraining the model, updating documentation, issuing user warnings, or in extreme cases, withdrawing the system from the market. These measures ensure that AI practices remain safe, ethical, and compliant over time.
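
One way to support this in practice is a rolling check on production outcomes that flags when an error rate crosses an internally agreed threshold, prompting a human assessment of whether the event is a reportable serious incident. The sketch below simulates such a stream; the window size, threshold, and data are purely illustrative assumptions.

```python
from collections import deque
import random

class MalfunctionMonitor:
    """Track a rolling error rate and flag when it crosses an agreed incident threshold."""

    def __init__(self, window: int = 200, error_threshold: float = 0.02):
        self.window = deque(maxlen=window)
        self.error_threshold = error_threshold

    def record(self, is_error: bool) -> bool:
        """Return True when the rolling error rate suggests a potentially reportable incident."""
        self.window.append(is_error)
        if len(self.window) < self.window.maxlen:
            return False  # not enough data yet
        return sum(self.window) / len(self.window) > self.error_threshold

# Simulated stream of production outcomes (True = harmful or incorrect output)
random.seed(0)
monitor = MalfunctionMonitor()
for i in range(1000):
    if monitor.record(random.random() < 0.03):
        print(f"Potential incident at output {i}: escalate for assessment and, "
              f"if serious, notify the relevant authority")
        break
```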

EU AI Act Compliance Checklist

The EU AI Act sets strict requirements for high-risk AI, from risk assessments and data quality to transparency, human oversight, and conformity assessment. Our checklist turns the law into clear tasks, helping your team confirm classification, prepare documentation, plan CE marking, and prioritize remediation.

Overcoming common EU AI Act compliance challenges

EU AI Act compliance is a heavy lift. Organizations will undoubtedly face hurdles, but proactive strategies can transform these challenges into opportunities for responsible innovation.

Organizations often encounter several obstacles when striving for compliance:

  • Complexity of classification: Accurately classifying AI systems, especially those with multiple functionalities or nuanced applications, can be challenging. Ambiguity in use cases can lead to misclassification.
  • Data governance overhead: Ensuring high-quality, unbiased, and compliant data across the entire AI lifecycle requires substantial investment in infrastructure, processes, and personnel.
  • Technical explainability: Making complex AI models, particularly deep learning systems, fully explainable to human users and regulators remains a significant research and engineering challenge.
  • Integration with existing systems: Integrating new compliance frameworks and technical requirements into existing, often fragmented, IT infrastructures can be a daunting task.
  • Resource constraints: SMEs, in particular, may struggle with the financial and human resource demands required to implement comprehensive compliance measures.
  • Evolving regulatory landscape: The AI Act may undergo further refinements, and national implementations could introduce additional nuances, requiring continuous monitoring and adaptation.

The best way to overcome these challenges is to start early, build cross-functional teams that include legal, technical, and product leaders, and make compliance part of the design process from the beginning. Leveraging automation platforms can also ease the burden by streamlining evidence collection, risk tracking, and monitoring.

The future of trustworthy AI

The EU AI Act is only the first landmark piece of AI legislation; governments around the world are developing their own laws, policies, and frameworks for responsible AI. Companies that align with the EU AI Act will be better prepared for future regulations in other regions and more trusted by customers and partners.

To help organizations achieve and maintain compliance with AI frameworks, Secureframe now provides out-of-the-box support for the NIST AI RMF, ISO 42001, and the EU AI Act, including:

  • 200+ integrations with your existing tech stack to automate gap analysis, control monitoring, and evidence collection against EU AI Act requirements
  • Policy and procedure templates created by in-house experts and former auditors, tailored for AI governance
  • Continuous monitoring with real-time alerts when cloud tests fail, so systems remain compliant
  • Built-in risk management tools to identify and manage AI risks

With Secureframe, compliance becomes simpler and more scalable, helping your organization reduce risk while building AI systems customers can trust. To learn more about how Secureframe can help you achieve EU AI Act compliance, schedule a demo with one of our experts.


FAQs

Has the EU passed the AI Act? 

Yes. The Artificial Intelligence Act was formally adopted by the European Parliament and entered into force on August 1, 2024. It is the world’s first comprehensive AI law and applies across all 27 EU member states.

What is the EU AI Act?

The EU AI Act is a binding legal framework that regulates the use of AI within the European Union. It categorizes AI systems by their level of risk and sets out obligations for providers, deployers, importers, and distributors. The goal is to ensure AI technologies are safe, transparent, and aligned with fundamental rights.

Does the EU AI Act apply to the US?

Yes, U.S.-based or other non-EU companies must comply if they place AI systems on the EU market or make them available to EU users.

What is the timeline for the AI Act?

The requirements are phased in:

  • February 2, 2025: Prohibitions on “unacceptable risk” AI and AI literacy rules take effect.
  • August 2, 2025: Rules for general-purpose AI models (GPAI models) and governance provisions begin.
  • August 2, 2026: Obligations for high-risk systems listed in Annex III apply.
  • August 2, 2027: Additional rules for high-risk systems acting as a safety component in regulated products take effect.

Who needs to comply with the EU AI Act?

The Act applies broadly to providers (those who develop AI systems), deployers (those who use them), and importers and distributors making AI systems available in the EU. Any organization whose AI systems touch the EU market must comply.

What is prohibited under the EU AI Act?

The Act bans AI systems deemed to present an unacceptable risk. Prohibited practices include social scoring by governments, manipulative cognitive or behavioral AI, and most real-time biometric identification in public spaces by law enforcement. These systems cannot legally be placed on the EU market.

What are the four risks of the AI Act?

The Act introduces a risk-based approach with four categories:

  1. Unacceptable risk: Banned outright. Prohibited practices include social scoring, manipulative cognitive or behavioral AI, and most real-time biometric identification in public spaces by law enforcement.
  2. High risk: Subject to strict requirements, including risk assessments, documentation, and conformity assessments.
  3. Limited risk: Must comply with transparency obligations, such as disclosing when users interact with AI.
  4. Minimal or no risk: Few requirements, but organizations are encouraged to follow voluntary codes of practice and existing laws.