
The European Union’s AI Act marks a major regulatory milestone for any organisation building, deploying, or using AI systems that may affect people in the EU. After years of debate, the Act was formally adopted in 2024 and applies in phases from 2025 onwards.

What Is the EU AI Act? 

The EU AI Act is the world’s first comprehensive legal framework for Artificial Intelligence. 

Its aim is to ensure that AI systems in the EU are safe, transparent, and respectful of fundamental rights. It introduces a risk-based approach: the higher the potential risk an AI system poses to individuals or society, the stricter the regulatory requirements.


Risk Categories Under the AI Act

Risk Category | Examples | Regulatory Treatment
Unacceptable Risk | Social scoring by governments, manipulative AI | Prohibited
High Risk | AI in critical infrastructure, healthcare, credit scoring, policing | Strict requirements (testing, risk management, human oversight, documentation)
Limited Risk | Chatbots, biometric categorisation | Transparency obligations (users must be told they are interacting with AI)
Minimal/No Risk | Spam filters, video game AI | No specific requirements
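
To make the tiering concrete, the table’s logic can be expressed as a simple lookup. The Python sketch below is purely illustrative: the enum values and obligation strings paraphrase the table above and are not terms from the Act’s legal text.

    from enum import Enum

    class RiskCategory(Enum):
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    # Illustrative mapping from tier to regulatory treatment,
    # paraphrasing the table above (not the Act's legal wording).
    OBLIGATIONS = {
        RiskCategory.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
        RiskCategory.HIGH: [
            "conformity testing",
            "risk management system",
            "human oversight",
            "technical documentation",
        ],
        RiskCategory.LIMITED: ["transparency: tell users they are interacting with AI"],
        RiskCategory.MINIMAL: [],  # no specific requirements
    }

    def obligations_for(category: RiskCategory) -> list:
        """Return the illustrative obligation list for a risk tier."""
        return OBLIGATIONS[category]

    print(obligations_for(RiskCategory.HIGH))

In practice, the high-risk tier is where most compliance effort lands, since it carries the bulk of the documented requirements.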
 

Key Actors in the AI Act 

The Act applies across the AI ecosystem. Organisations may play one or more of these roles at once, in which case obligations accumulate (see the sketch after the list): 

  • Providers – Develop and place AI systems on the EU market.
  • Deployers – Use AI systems within their business operations.
  • Importers/Distributors – Bring AI systems into the EU or make them available on the EU market.
  • Users – Interact with AI in a regulated context. 
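
Because an organisation can hold several roles at once, its obligations are best thought of as the union of per-role duties. A minimal Python sketch, with hypothetical role and duty names that simplify the Act’s actual requirements:

    from enum import Enum

    class Role(Enum):
        PROVIDER = "provider"
        DEPLOYER = "deployer"
        IMPORTER = "importer"
        DISTRIBUTOR = "distributor"

    # Hypothetical, heavily simplified duty map, for illustration only.
    PRIMARY_DUTIES = {
        Role.PROVIDER: {"conformity assessment", "technical documentation"},
        Role.DEPLOYER: {"human oversight", "usage logging"},
        Role.IMPORTER: {"verify the provider's conformity assessment"},
        Role.DISTRIBUTOR: {"check required markings before resale"},
    }

    def duties(roles: set) -> set:
        """Obligations accumulate: a firm holding several roles
        owes the union of each role's duties."""
        return set().union(*(PRIMARY_DUTIES[r] for r in roles))

    # A firm that both builds and uses an AI system:
    print(duties({Role.PROVIDER, Role.DEPLOYER}))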
 

What Deployers Need to Know 

Most organisations will fall under the role of deployer. High-level requirements include the following (a record-keeping sketch follows the list): 

  • Risk Management – Assessing how AI impacts individuals and society.
  • Human Oversight – Ensuring meaningful human control, especially in high-risk contexts.
  • Data Governance – Ensuring the data fed into AI systems is high quality, relevant, and free of bias.
  • Documentation and Record Keeping – Maintaining logs, technical documentation, and risk assessments.
  • Transparency – Informing users when they are interacting with AI. 
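
The record-keeping and human-oversight points lend themselves to a concrete schema: one log entry per AI-assisted decision, noting who reviewed it and whether they intervened. The field names below are assumptions chosen for illustration, not terminology from the Act:

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class AIDecisionRecord:
        """One log entry per AI-assisted decision (illustrative schema)."""
        system_id: str                  # internal identifier of the AI system
        timestamp: datetime             # when the decision was made (UTC)
        input_summary: str              # what the system was asked
        output_summary: str             # what it returned
        human_reviewer: Optional[str]   # who exercised oversight, if anyone
        overridden: bool = False        # whether the reviewer changed the outcome

    def log_decision(system_id: str, inp: str, out: str,
                     reviewer: Optional[str] = None,
                     overridden: bool = False) -> AIDecisionRecord:
        """Create a record suitable for retention and later audit."""
        return AIDecisionRecord(system_id, datetime.now(timezone.utc),
                                inp, out, reviewer, overridden)

    record = log_decision("credit-scoring-v2", "applicant 1042", "declined",
                          reviewer="j.smith", overridden=False)

Retaining records in this shape supports both the documentation duty and audit readiness, since each entry shows that meaningful human control was available.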
 

Key Milestones and Compliance Roadmap 

2025 – Prohibitions on unacceptable-risk AI begin applying (February), followed by transparency and governance rules for general-purpose AI models (August).

2026 – Most high-risk AI requirements come into force (August).

2027 onwards – Remaining obligations, including those for high-risk AI embedded in regulated products, take effect, and enforcement is in full force across the EU. 

To prepare, organisations should follow a staged roadmap (an inventory sketch follows the steps): 

  1. Identify AI Systems in Use – Map all AI tools, both internal and third-party.
  2. Classify by Risk Category – Apply the Act’s risk-based framework.
  3. Establish Governance Structures – Assign accountability at board/senior management level.
  4. Update Contracts and Vendor Assessments – Extend third-party risk management (TPRM) to AI vendors.
  5. Embed Human Oversight and Documentation – Ensure ongoing compliance and audit readiness. 
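
Steps 1 and 2 are, at their core, an inventory exercise. A minimal Python sketch of such a register, with hypothetical field names and a made-up vendor:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AISystemEntry:
        """One row in an organisation-wide AI inventory (illustrative fields)."""
        name: str
        owner: str                 # accountable business function
        vendor: Optional[str]      # None for internally built systems
        purpose: str
        risk_category: str         # e.g. "high", "limited", "minimal"

    inventory = [
        AISystemEntry("CV screening model", "HR", "AcmeAI",
                      "shortlist applicants", "high"),
        AISystemEntry("Support chatbot", "Customer Service", None,
                      "answer FAQs", "limited"),
    ]

    # Step 2: classification determines where compliance effort goes first.
    high_risk = [s for s in inventory if s.risk_category == "high"]
    print([s.name for s in high_risk])

Keeping the register in a structured form also eases step 4, since vendor-supplied systems can be filtered out for TPRM review.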
 

Why This Matters 

AI adoption is accelerating: according to McKinsey’s State of AI survey, over 75% of organisations already use AI in at least one business function. As adoption grows, so does regulatory exposure. 

For organisations, this translates into three strategic priorities: 

  • Governance – Embedding strong oversight and accountability across AI systems.
  • Third-Party Risk – Managing external AI vendors with the same rigour as internal systems.
  • Trust and Transparency – Building stakeholder confidence through responsible and compliant AI practices. 
 

Summary 

The EU AI Act is not simply another compliance requirement: it sets the precedent for how AI will be regulated globally. Organisations that act early will not only reduce regulatory risk but also position themselves as trusted leaders in responsible AI adoption.

 
