
How the EU AI Act Affects Radiology AI — What IT Leaders Should Know

Efrain Salinas Hernandez

Introduction

A sweeping new regulation – the European Union’s AI Act – is set to shake up how AI is developed and deployed across industries. For healthcare, and radiology in particular, the Act arrives as a pivotal change. Approved in 2024, the EU AI Act is the world’s first comprehensive AI law. It introduces strict requirements on AI systems and takes effect in stages: bans on prohibited practices from February 2025, obligations for general-purpose AI from August 2025, most high-risk requirements from 2 August 2026, and requirements for AI embedded in regulated medical devices – the category covering most radiology AI – from 2 August 2027.

Why should radiology IT leaders care? Under the AI Act, most radiology AI tools – from diagnostic support algorithms to workflow assistants – qualify as “high‑risk AI.” While the primary compliance burden lies with the AI provider, deployers—hospitals and imaging centres—now have explicit responsibilities: procure only compliant solutions, maintain human oversight, log performance, and cooperate on post‑market monitoring. IT leaders, therefore, need to grasp these shared duties to choose vendors wisely and embed governance that keeps AI use safe, legal, and auditable. Below, we unpack the EU AI Act, compare it with existing frameworks such as the Medical Device Regulation (MDR) and the UK’s AI sandbox, and outline the steps IT leaders should take.

The EU AI Act at a Glance

The EU AI Act is a landmark EU-wide law designed to ensure AI systems are safe, transparent, and aligned with fundamental rights. Much like GDPR did for data privacy, the AI Act establishes uniform rules across all member states. Key points to know about the Act include:

  • Risk-Based Classification: The Act classifies AI systems by risk level. Radiology AI software falls under “high-risk AI”: as a medical device impacting patient outcomes, it is already required under the EU MDR to undergo a third-party conformity assessment, and that is precisely what triggers the high-risk designation (a sketch of this classification logic follows this list). High-risk is the most tightly regulated category that remains permitted; only outright prohibited practices, such as social scoring, sit above it.
  • Global Reach: If you’re deploying AI in an EU hospital or even using an AI service that affects EU patients, the law applies, regardless of where the provider is based. In other words, any radiology AI used in the EU must adhere to these standards, giving the Act a global influence.
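
To make the classification rule concrete, here is a minimal sketch of Article 6’s two-branch logic as it applies to an imaging tool. The class, field names, and simplified rule are illustrative assumptions of ours, not text from the Act – the real provision also covers a use-case list in Annex III and several exemptions omitted here.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    # Covered by EU harmonisation legislation listed in Annex I (e.g. the MDR)?
    is_regulated_product: bool
    # Does that legislation require a notified-body (third-party) assessment?
    needs_third_party_assessment: bool
    # Listed as an Annex III use case (e.g. biometrics, critical infrastructure)?
    in_annex_iii_use_case: bool

def is_high_risk(system: AISystem) -> bool:
    """Simplified reading of Article 6 – illustrative only, not legal advice."""
    # Branch 1: a product (or safety component) under Annex I legislation that
    # must pass third-party conformity assessment is high-risk.
    if system.is_regulated_product and system.needs_third_party_assessment:
        return True
    # Branch 2: a stand-alone AI system in an Annex III use case is high-risk
    # (subject to exemptions the Act defines, omitted here).
    return system.in_annex_iii_use_case

ct_triage = AISystem(
    name="CT stroke triage",
    is_regulated_product=True,          # a medical device under the MDR
    needs_third_party_assessment=True,  # notified-body review required
    in_annex_iii_use_case=False,
)
assert is_high_risk(ct_triage)  # most radiology AI lands here
```

The takeaway for procurement: if a tool is a CE-marked medical device that needed a notified body, assume it is high-risk under the AI Act.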

What does the “high-risk” designation mean? It triggers a cascade of obligations for the AI system’s developer (the provider) and, to some extent, the deployer (the hospital/clinic). In essence, it ensures that life-impacting AI, like an algorithm assisting diagnosis on a CT scan, is thoroughly vetted and monitored. Below are some of the strict requirements for high-risk AI systems that radiology IT teams should be aware of:

  • Rigorous Risk Management & Data Governance: Providers must implement a robust risk management system and ensure high-quality, bias-mitigated datasets for training and validation. This means your AI vendor should be practicing continuous risk assessment and carefully curating training data to avoid biases that could affect patient safety.
  • Transparency and Human Oversight: The Act mandates clear information to users about how the AI works and requires human oversight in clinical use. In practice, radiologists should always be in control – AI tools must be designed so that a human can understand their outputs and override decisions. (In fact, EU regulators insist that critical decisions cannot be fully left to AI alone.)
  • Technical Documentation & Conformity Assessment: Developers of radiology AI need to compile extensive technical documentation to prove compliance. Every high-risk AI will undergo a third-party conformity assessment (think of it like an AI-specific certification, analogous to CE marking under MDR). For IT leaders, this means any AI solution you deploy should come with proper certification and evidence of compliance.
  • Post-Market Monitoring and Reporting: Compliance isn’t one-and-done – it’s continuous. The Act requires ongoing monitoring of the AI’s performance and prompt reporting of any serious incidents or malfunctions. Hospitals using the AI may need to assist in this process by monitoring outcomes and cooperating with vendors on vigilance. Expect AI providers to offer update mechanisms, support, and guidance for post-market surveillance (similar to what medical device regulations already mandate).
  • Conforming Use by Deployers: Deployers must take technical and organisational measures to ensure the AI system is used strictly according to the provider’s instructions (e.g., user manuals).
  • Audit‑Ready Logging: As part of monitoring, deployers must ensure that automated logs of the AI system’s operation are kept, stored, and accessible in a format that allows oversight and auditing (see the sketch following this list).
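
To ground the last two bullets, here is a minimal sketch of what an audit-ready log entry with a human sign-off might look like. The field names, the JSON-lines file, and the decision values are assumptions chosen for illustration – the Act specifies the properties (automatic, retained, accessible) rather than a format, and a real deployment would integrate with your PACS/RIS and retention policies.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # append-only, one JSON record per line

def log_ai_event(study_id: str, model: str, model_version: str,
                 ai_finding: str, radiologist_id: str, decision: str) -> None:
    """Record one AI-assisted read: what the AI said, who reviewed it, outcome."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "study_id": study_id,
        "model": model,
        "model_version": model_version,   # ties the event to a certified release
        "ai_finding": ai_finding,
        "reviewed_by": radiologist_id,    # human oversight: a named reviewer
        "decision": decision,             # "accepted", "overridden", "escalated"
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: the radiologist overrides an AI suggestion, and the override is
# preserved verbatim so auditors can reconstruct the decision later.
log_ai_event(
    study_id="STUDY-2026-000123",
    model="chest-ct-nodule-detector",
    model_version="3.2.1",
    ai_finding="6 mm nodule, right upper lobe",
    radiologist_id="RAD-042",
    decision="overridden",
)
```

Whatever format you choose, the properties to preserve are the ones the Act names: logs generated automatically, retained for an appropriate period, and retrievable in a form auditors can actually use.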

EU AI Act vs. MDR vs. UK’s AI Sandbox

How does the new AI Act intersect with existing regulations in healthcare? Let’s briefly compare:

  • EU MDR (Medical Device Regulation): The MDR is the baseline medical device law in Europe, under which most radiology AI software is already classified as a medical device requiring CE marking for safety and effectiveness. The AI Act complements the MDR, focusing on AI-specific risks like algorithmic bias, transparency, and data quality. There is overlap – for example, both the MDR and the AI Act require a risk management system, robust quality assurance, and post-market surveillance. To reduce duplication, manufacturers will likely integrate AI Act compliance into their existing MDR processes (the Act even allows combining documentation to streamline efforts). Still, the AI Act adds new dimensions, such as assessing training data for bias and requiring user transparency, that go beyond the MDR’s scope. IT leaders should ensure vendors not only hold an MDR CE mark but are also preparing for these AI-specific compliance steps, and should plan to obtain AI Act certification once third-party conformity assessors begin offering it.
  • UK’s AI Sandbox Approach: Outside the EU, the United Kingdom has chosen guidance and a voluntary “AI Airlock” sandbox led by the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) and NHS partners. Experts test AI in real clinics while regulators watch. The sandbox is iterative and pro‑innovation, unlike the EU’s binding law. However, vendors must still obtain a UKCA mark—largely aligned with the former EU MDD—or a CE mark under the EU MDR framework. The UK continues to accept CE-marked devices for an extended transition period, including legacy MDD devices. As a result, the overall compliance burden remains broadly comparable in the near to medium term.

Key Takeaways for IT Leaders in Radiology

Regulation might sound daunting, but it ultimately promotes safer, more reliable AI – a positive for patients and providers alike. Here’s what IT leaders should do to navigate this new landscape:

  • Ensure Regulatory Clearance: Verify that any radiology AI solution you consider is MDR-cleared (CE marked) and, moving forward, ask vendors about their plans for EU AI Act compliance. Reputable vendors should be able to demonstrate their risk management processes, bias testing, and provide technical documentation on request.
  • Strengthen AI Governance: Work with your compliance officers, radiology department, and data privacy team to develop internal guidelines for AI use. This includes maintaining human-in-the-loop policies (e.g. radiologist review of AI outputs), procedures for monitoring AI performance (see the monitoring sketch after this list), and channels to report issues both internally and to the manufacturer/regulator if needed. Ensure AI systems are always used in line with the provider’s user manual, and that logs of AI operations are automatically recorded, stored securely, and made accessible for oversight. Proactively instituting such governance will keep you ahead of regulatory requirements and build trust with clinicians.
  • Leverage Trusted Platforms: Consider using integrated AI platforms or marketplaces that centralize compliance and monitoring. For example, deepc’s radiology AI platform, deepcOS®, takes a centralized approach, simplifying implementation and ensuring compliance across the board. All AI tools available through it are vetted for MDR clearance, and the platform embeds security, privacy, and compliance into every layer of integration. By adopting solutions that bake in regulatory compliance and oversight, IT leaders can save time and reduce risk when deploying multiple AI applications.
  • Stay Informed and Engaged: The AI regulatory environment will continue to evolve. Keep an eye on updates – whether it’s guidance on the EU Act’s implementation, new standards (e.g. harmonized standards for AI quality), or opportunities like sandbox programs and pilot studies. Engaging in industry forums or regulatory sandbox initiatives can provide clarity and even a voice in how rules take shape.
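
As a concrete example of the monitoring procedures mentioned under AI governance above, the sketch below tracks how often radiologists’ final reads agree with an AI tool’s findings and flags a sustained drop worth escalating to the vendor. The class name, window size, and threshold are illustrative assumptions – your vigilance process and the vendor’s post-market surveillance plan should set the real criteria.

```python
from collections import deque

class AgreementMonitor:
    """Rolling agreement rate between AI findings and radiologist final reads.

    A sustained drop can signal model drift or a population shift – exactly
    the kind of event worth reporting to the vendor under post-market
    surveillance. The thresholds here are placeholders, not recommendations.
    """

    def __init__(self, window: int = 200, alert_below: float = 0.85):
        self.results: deque[bool] = deque(maxlen=window)
        self.alert_below = alert_below

    def record(self, ai_positive: bool, radiologist_positive: bool) -> None:
        # True when the radiologist's final read matches the AI finding.
        self.results.append(ai_positive == radiologist_positive)

    @property
    def agreement_rate(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 1.0

    def needs_escalation(self) -> bool:
        # Only alert once the window holds enough reads to be meaningful.
        return (len(self.results) == self.results.maxlen
                and self.agreement_rate < self.alert_below)

monitor = AgreementMonitor(window=200, alert_below=0.85)
monitor.record(ai_positive=True, radiologist_positive=True)   # agreement
monitor.record(ai_positive=True, radiologist_positive=False)  # override
if monitor.needs_escalation():
    print("Agreement below threshold – notify vendor and review cases")
```

A rolling window is a deliberately simple choice here; the point is that post-market monitoring becomes a routine, automated signal rather than an annual review.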

Conclusion

The EU AI Act marks a new era for AI in radiology, bringing needed clarity and safeguards for AI-powered care. Yes, compliance will require effort from vendors and vigilance from healthcare IT teams. Yet this is also an opportunity for those who get it right – by championing trusted, transparent AI, you build credibility with both hospital leadership and patients. IT leaders who educate their organizations and partner with solution providers that prioritize compliance will not only stay ahead of regulations but also foster greater confidence in the AI tools that are transforming radiology. With the right preparation and the right partners (like deepc), you can navigate the EU AI Act and related frameworks with ease, and keep your focus on delivering innovative, quality care.

Want to Learn More?

Watch our "Mastering AI in Healthcare: Northwell Health’s Blueprint for a Robust Information Security Strategy" webinar