
Using AI in your healthcare organisation without creating GDPR exposure

Healthcare AI is moving fast. Administrative tools, clinical decision support, and diagnostic aids are all reaching procurement committees at hospitals, clinics, and care groups across the Netherlands and UK. The problem is that most of these tools process health data, and health data under GDPR sits in a category with specific rules that generic AI guidance ignores entirely.

Cyvra Team
Cyvra Consultancy
8 May 2026
7 min read
Key takeaways
  • Patient data is Article 9 special category data, the highest GDPR protection tier, with strict processing conditions
  • AI systems used in clinical decision-making are classified as high-risk under the EU AI Act
  • A DPIA is mandatory before deploying any AI tool that processes special category data at scale
  • If your DPIA identifies high residual risk, you must consult your national supervisory authority before deployment
  • US-based AI tools create data transfer obligations under GDPR Chapter V. Contracts alone may not be sufficient

The actual risk: health data and AI are a specific combination

Health data is special category data under GDPR Article 9. Processing it requires a lawful basis under Article 6 and a separate condition under Article 9(2). The conditions are narrower than most organisations assume. Legitimate interests, the most flexible basis for ordinary personal data, is not available for special category data. You need explicit consent, vital interests, or one of the specific conditions such as provision of health or social care (Art. 9(2)(h)) or public health (Art. 9(2)(i)).

This matters for AI because many AI tools process data in ways that are not obvious from the user interface. A clinical summarisation tool receives patient notes. A scheduling AI may process diagnosis codes to optimise appointment slots. An administrative chatbot might store conversation history that contains symptoms or medication information. Each of these constitutes processing of special category health data, and each requires a valid Article 9(2) condition and a documented legal basis before the tool goes live.

The risk is compounded by the third-party nature of most AI tools. When your organisation uses a SaaS AI product, the vendor processes data on your behalf as a data processor. That means you, as the controller, remain responsible for lawfulness of the processing. A vendor's privacy policy that states "we do not use your data to train our models" does not substitute for your own legal basis assessment.

  • Art. 9: under GDPR, health data requires a separate lawful basis beyond the standard Art. 6 legal basis
  • High risk: the EU AI Act classification for AI systems used in healthcare decision-making
  • DPIA: mandatory before deploying any AI that processes health data at scale

The EU AI Act classification for healthcare AI

The EU AI Act entered into force in August 2024, with most provisions applying from August 2026. Healthcare organisations deploying AI need to understand how the Act classifies their tools now, because the compliance steps required before deployment take time.

AI systems used in clinical settings are classified as high-risk under the Act: AI that is itself a medical device falls under the Annex I route, while Annex III captures other systems that assist in clinical diagnosis or influence treatment decisions. High-risk AI systems must undergo a conformity assessment before they are placed on the market or put into service, maintain detailed technical documentation, implement human oversight mechanisms, and be registered in the EU database for high-risk AI systems.

AI systems used purely for administrative tasks, such as scheduling, HR, or financial management within a healthcare organisation, may not fall into the high-risk category under the AI Act. The distinction matters because the compliance burden is substantially lower for non-high-risk systems. However, GDPR obligations apply regardless of AI Act classification. An administrative AI that processes health data still needs a valid legal basis and a DPIA.

Key distinction

A tool the vendor markets as "AI for healthcare administration" may or may not be high-risk under the AI Act, depending on whether it influences clinical decisions. The marketing description is not the legal test. The question is whether the output of the system is likely to be used to take or influence decisions about a patient's health. If yes, it is high-risk regardless of how the vendor positions it.

Five questions to ask before deploying any AI tool

Run these questions across every AI tool under evaluation. They apply to tools you procure from vendors and to AI features embedded in existing software platforms (which are increasingly common in EHR, scheduling, and communication systems).

1. What personal data does this tool process, and does any of it qualify as health data under GDPR?
   Map data flows before procurement, not after. Ask vendors for a data flow diagram that shows what the tool receives, where it is processed, and how long it is retained. Include subprocessors.

2. What is our legal basis for processing this data?
   Identify the specific Article 6 basis and the specific Article 9(2) condition. Document them. If you cannot identify both, the tool cannot go live until you can.

3. Does this system qualify as high-risk under the EU AI Act?
   If the tool might influence clinical decisions, treat it as high-risk until you have a clear legal opinion otherwise. Request the vendor's conformity assessment documentation.

4. What rights do patients have over the data this tool processes?
   GDPR Article 22 gives individuals the right not to be subject to solely automated decisions that significantly affect them. If the AI tool produces outputs that feed directly into clinical or administrative decisions about patients without human review, you have an Article 22 obligation to disclose this and offer a human review option.

5. Where does this tool's data go if we stop using it?
   Understand data deletion and portability terms before signing. Health data retained by a vendor after contract termination creates ongoing liability for your organisation as the controller.

When a DPIA is mandatory (and why most are being skipped)

A Data Protection Impact Assessment is mandatory under GDPR Article 35 before beginning processing that is "likely to result in a high risk" to individuals' rights and freedoms. The Article 29 Working Party guidelines (since endorsed by the EDPB) list nine criteria for identifying such processing, and meeting two of them is generally enough to trigger a mandatory DPIA. Processing health data at scale meets at least two: sensitive data and large-scale processing. Any healthcare AI tool deployed across a clinical operation is almost certainly inside mandatory DPIA territory.

The practical problem is that DPIAs take time: typically two to six weeks when done properly. Procurement timelines and the pressure to deploy AI tools quickly lead many organisations to skip the assessment or to conduct a superficial version that does not address the actual risks. Supervisory authorities are aware of this pattern. The Dutch AP (Autoriteit Persoonsgegevens) and the UK ICO have both issued guidance specifically on AI and data protection, and both include DPIA requirements as a primary compliance check.

A DPIA for a healthcare AI tool needs to cover: the processing purpose and necessity, data flows and retention, risks to patient rights (including discrimination risk if the AI produces biased outputs), security measures, and the vendor's role as data processor. It also needs to address the Article 22 automated decision-making question explicitly.

Important

If your DPIA identifies a high residual risk that cannot be mitigated, you are obligated to consult your supervisory authority before proceeding. Prior consultation is a formal process, not an informal conversation. Building adequate DPIA time into AI procurement decisions avoids the situation where a tool is already contracted before you discover prior consultation is required.

What safe deployment looks like in practice

Three structural choices determine whether your AI deployment creates risk or manages it.

Data minimisation. Audit what data the tool actually needs to function. Many AI tools are configured by default to receive more data than necessary. A clinical summarisation tool that requires full patient records to summarise a consultation note should be challenged: can it work with just the consultation content, stripped of other identifiers? The less health data flowing to a third-party system, the narrower your exposure.
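As a concrete sketch of that minimisation step: pass through only the fields the tool needs and mask obvious direct identifiers before anything leaves your infrastructure. The field names and regex patterns below are hypothetical, and regex masking alone is deliberately crude; a real deployment would use a proper clinical de-identification pipeline.

```python
import re

# Fields a summarisation tool plausibly needs, versus the full record it may
# request by default. These names are illustrative, not any vendor's schema.
ALLOWED_FIELDS = {"consultation_text", "consultation_date"}

def minimise(record: dict) -> dict:
    """Keep only the allowed fields, with obvious direct identifiers masked.
    Illustrates the principle only; not a complete de-identification tool."""
    slim = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    if "consultation_text" in slim:
        text = slim["consultation_text"]
        text = re.sub(r"\b\d{9}\b", "[BSN]", text)      # Dutch citizen number
        text = re.sub(r"\b\d{10}\b", "[NHS-NO]", text)  # NHS number
        text = re.sub(r"\S+@\S+\.\S+", "[EMAIL]", text)
        slim["consultation_text"] = text
    return slim
```

The pattern generalises: the allow-list is the documented answer to "what does this tool actually need", and anything not on it never reaches the third-party system.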

Local deployment versus cloud API. AI tools that process data locally within your infrastructure, rather than sending it to a cloud API, reduce third-party processor risk substantially. This is more relevant for administrative AI than for SaaS clinical tools, where local deployment is rarely an option. For internal AI tools being built or configured by your team, local model deployment should be the default consideration for anything touching health data.

Human oversight mechanisms. The EU AI Act requires high-risk AI systems to include features allowing human oversight, including the ability for a human to intervene, override, or stop the system. Build your deployment architecture around this requirement. An AI system that produces clinical summaries for review is compliant. The same system, configured to automatically update patient records without review, may not be.

Healthcare AI offers genuine operational value, and the goal here is not to block deployment but to structure it so your organisation is protected. Organisations that do the DPIA, document the legal basis, and build in oversight mechanisms are in a strong position with regulators. Those that deploy first and assess later are not.

Frequently asked questions

Is AI in healthcare classified as high-risk under the EU AI Act?

Yes, in most clinical contexts; purely administrative tools often are not. The EU AI Act treats AI that is itself a medical device as high-risk via its Annex I route, and Annex III captures further use cases: AI-assisted clinical diagnosis, treatment recommendation, and monitoring of patient outcomes all fall within the high-risk regime. High-risk classification requires technical documentation, conformity assessment, registration in the EU database, and human oversight mechanisms before deployment.

When is a DPIA mandatory for AI in a healthcare setting?

A Data Protection Impact Assessment (DPIA) is mandatory under GDPR Article 35 when processing is likely to result in a high risk to individuals. For healthcare AI, this threshold is almost always met. Processing Article 9 special category data (health data) at scale, automated decision-making with significant effects, and systematic monitoring of individuals are all triggers. The DPIA must be completed before the processing starts. Retrofitting is not compliant.

What happens if a DPIA finds high residual risk?

If a DPIA concludes that high residual risk remains after mitigation measures are applied, GDPR Article 36 requires prior consultation with the competent supervisory authority before processing begins. The authority has up to eight weeks to respond, extendable by a further six weeks in complex cases. Deploying without completing this step exposes the organisation to enforcement action and fines of up to €10 million or 2% of global annual turnover for the Article 36 breach itself; if the underlying processing also lacks a valid legal basis, the higher tier of €20 million or 4% applies.

What is the legal basis for processing health data with AI tools?

Health data (Article 9 special category data) requires both a lawful basis under GDPR Article 6 and an additional condition under Article 9(2). In healthcare, the most common conditions are explicit patient consent, processing necessary for the provision of healthcare services, or processing for reasons of public health. Legitimate interest, commonly used for general AI tools, does not satisfy Article 9. Each AI use case needs its own documented legal basis.

Can we use US-based AI tools to process patient data?

It is possible but requires careful structuring. Transferring personal data outside the EEA requires a transfer mechanism under GDPR Chapter V. Standard Contractual Clauses are the most common route for US transfers, but must be supplemented by a Transfer Impact Assessment. Some US providers participate in the EU-US Data Privacy Framework, which can simplify the legal basis. For high-sensitivity health data, many organisations are choosing EEA-hosted processing to reduce legal risk.

Talk to Cyvra

Evaluating an AI tool for your healthcare organisation?

We help healthcare organisations assess AI tools for GDPR compliance, EU AI Act readiness, and security, before they go live.