- Patient health data is special category data under GDPR Article 9, the highest protection tier, with strict processing conditions
- AI systems used in clinical decision-making are classified as high-risk under Annex III of the EU AI Act
- A DPIA is mandatory before deploying any AI tool that processes special category data at scale
- If your DPIA identifies high residual risk, you must consult your national supervisory authority before deployment
- US-based AI tools create data transfer obligations under GDPR Chapter V; contracts alone may not be sufficient
The actual risk: health data and AI are a specifically regulated combination
Health data is special category data under GDPR Article 9. Processing it requires a lawful basis under Article 6 and a separate condition under Article 9(2). The conditions are narrower than most organisations assume. Legitimate interests, the most flexible basis for ordinary personal data, is not available for special category data. You need explicit consent, vital interests, or one of the specific conditions such as the provision of health or social care under Article 9(2)(h).
This matters for AI because many AI tools process data in ways that are not obvious from the user interface. A clinical summarisation tool receives patient notes. A scheduling AI may process diagnosis codes to optimise appointment slots. An administrative chatbot might store conversation history that contains symptoms or medication information. Each of these constitutes processing of special category health data, and each requires a valid Article 9(2) condition and a documented legal basis before the tool goes live.
The risk is compounded by the third-party nature of most AI tools. When your organisation uses a SaaS AI product, the vendor processes data on your behalf as a data processor. That means you, as the controller, remain responsible for the lawfulness of the processing. A vendor's privacy policy that states "we do not use your data to train our models" does not substitute for your own legal basis assessment.
The EU AI Act classification for healthcare AI
The EU AI Act entered into force in August 2024, with most provisions applying from August 2026. Healthcare organisations deploying AI need to understand now how the Act classifies their tools, because the compliance steps required before deployment take time.
AI systems used in clinical settings are classified as high-risk under Annex III of the Act. This covers systems intended to be used as medical devices, systems that assist in clinical diagnosis, and systems that influence treatment decisions. High-risk AI systems must undergo a conformity assessment before they are placed on the market or put into service, maintain detailed technical documentation, implement human oversight mechanisms, and be registered in the EU database for high-risk AI systems.
AI systems used purely for administrative tasks, such as scheduling, HR, or financial management within a healthcare organisation, may not fall into the high-risk category under the AI Act. The distinction matters because the compliance burden is substantially lower for non-high-risk systems. However, GDPR obligations apply regardless of AI Act classification. An administrative AI that processes health data still needs a valid legal basis and a DPIA.
A tool the vendor markets as "AI for healthcare administration" may or may not be high-risk under the AI Act, depending on whether it influences clinical decisions. The marketing description is not the legal test. The question is whether the output of the system is likely to be used to take or influence decisions about a patient's health. If yes, it is high-risk regardless of how the vendor positions it.
Five questions to ask before deploying any AI tool
Run these questions across every AI tool under evaluation. They apply to tools you procure from vendors and to AI features embedded in existing software platforms (which are increasingly common in EHR, scheduling, and communication systems). One way to record the answers per tool is sketched after the list.

- What data does the tool actually receive, and does any of it constitute special category health data under Article 9?
- What is your documented Article 6 legal basis, and which Article 9(2) condition applies?
- Is the tool's output likely to be used to take or influence decisions about a patient's health, placing it in the AI Act's high-risk category?
- Where is the data processed and stored, and does any of it leave the EEA, triggering Chapter V transfer obligations?
- Has a DPIA been completed, and does the deployment include human review before outputs take effect?
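As a concrete way to keep the answers auditable, here is a minimal Python sketch of a per-tool assessment record. The field names and blocking rules are illustrative assumptions, not a prescribed compliance schema.

```python
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    """Pre-deployment record for one AI tool under evaluation (illustrative)."""
    tool_name: str
    data_received: list[str]                     # e.g. ["consultation notes", "diagnosis codes"]
    processes_health_data: bool                  # any Article 9 special category data?
    article_6_basis: str | None = None           # documented GDPR Article 6 basis
    article_9_condition: str | None = None       # documented Article 9(2) condition
    influences_clinical_decisions: bool = False  # likely high-risk under the AI Act if True
    data_leaves_eea: bool = False                # triggers Chapter V transfer analysis
    dpia_completed: bool = False
    human_review_before_action: bool = False

    def blockers(self) -> list[str]:
        """Return the open issues that should stop deployment."""
        issues = []
        if not self.article_6_basis:
            issues.append("no Article 6 legal basis documented")
        if self.processes_health_data and not self.article_9_condition:
            issues.append("no Article 9(2) condition documented")
        if self.processes_health_data and not self.dpia_completed:
            issues.append("DPIA not completed")
        if self.influences_clinical_decisions and not self.human_review_before_action:
            issues.append("no human review before outputs take effect")
        if self.data_leaves_eea:
            issues.append("Chapter V transfer safeguards need review")
        return issues
```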
When a DPIA is mandatory (and why most are being skipped)
A Data Protection Impact Assessment is mandatory under GDPR Article 35 before beginning processing that is "likely to result in a high risk" to individuals' rights and freedoms. The Article 29 Working Party guidelines (since endorsed by the EDPB) specify nine criteria for identifying such processing, and treat processing that meets two or more of them as requiring a DPIA in most cases. Processing health data at scale meets two straight away: sensitive data and large-scale processing. Any healthcare AI tool deployed across a clinical operation is almost certainly inside mandatory DPIA territory.
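To make the trigger logic concrete, the small sketch below paraphrases the nine criteria and applies the two-criteria rule of thumb from the guidelines. The criterion labels are shortened paraphrases, not the official wording.

```python
# Illustrative only: paraphrased WP29/EDPB criteria. The guidelines treat
# processing that meets two or more as requiring a DPIA in most cases.
WP29_CRITERIA = {
    "evaluation or scoring",
    "automated decision-making with legal or similar effect",
    "systematic monitoring",
    "sensitive or special category data",
    "large-scale processing",
    "matching or combining datasets",
    "data concerning vulnerable subjects",
    "innovative use of technology",
    "processing that prevents exercising a right or using a service",
}

def dpia_likely_mandatory(criteria_met: set[str]) -> bool:
    unknown = criteria_met - WP29_CRITERIA
    if unknown:
        raise ValueError(f"unrecognised criteria: {unknown}")
    return len(criteria_met) >= 2

# A clinical summarisation tool: special category data, processed at scale.
print(dpia_likely_mandatory({"sensitive or special category data",
                             "large-scale processing"}))  # True
```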
The practical problem is that DPIAs take time: typically two to six weeks when done properly. Procurement timelines and the pressure to deploy AI tools quickly lead many organisations to skip the assessment or to conduct a superficial version that does not address the actual risks. Supervisory authorities are aware of this pattern. The Dutch AP (Autoriteit Persoonsgegevens) and the UK ICO have both issued guidance specifically on AI and data protection, and both include DPIA requirements as a primary compliance check.
A DPIA for a healthcare AI tool needs to cover: the processing purpose and necessity, data flows and retention, risks to patient rights (including discrimination risk if the AI produces biased outputs), security measures, and the vendor's role as data processor. It also needs to address the Article 22 automated decision-making question explicitly.
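A DPIA is a prose document, but sketching its skeleton as a structure helps ensure no section is skipped. The section names below follow the paragraph above; the shape is one possible template, not a mandated format.

```python
# Skeleton for a healthcare AI DPIA, mirroring the sections listed above.
# Field names are assumptions for illustration, not a regulatory schema.
DPIA_TEMPLATE = {
    "processing_purpose_and_necessity": "",
    "data_flows_and_retention": "",
    "risks_to_patient_rights": {
        "discrimination_from_biased_outputs": "",
        "other_identified_risks": [],
    },
    "security_measures": "",
    "vendor_role_as_processor": "",        # processor duties, sub-processors, DPA terms
    "article_22_automated_decisions": "",  # any solely automated decision-making?
    "residual_risk": "",                   # if high and unmitigated: Article 36 consultation
}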
If your DPIA identifies a high residual risk that cannot be mitigated, Article 36 obliges you to consult your supervisory authority before proceeding. Prior consultation is a formal process, not an informal conversation. Building adequate DPIA time into AI procurement decisions avoids the situation where a tool is already contracted before you discover prior consultation is required.
What safe deployment looks like in practice
Three structural choices determine whether your AI deployment creates risk or manages it.
Data minimisation. Audit what data the tool actually needs to function. Many AI tools are configured by default to receive more data than necessary. A clinical summarisation tool that requires full patient records to summarise a consultation note should be challenged: can it work with just the consultation content, stripped of other identifiers? The less health data flowing to a third-party system, the narrower your exposure.
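A minimal sketch of the allow-list approach, assuming a hypothetical record shape: select the fields the tool needs in, rather than trying to filter identifiers out.

```python
def minimise_for_summarisation(patient_record: dict) -> dict:
    """Forward only what the summarisation tool needs.

    The record shape is a hypothetical example; the point is structural:
    an explicit allow-list, so nothing travels by default.
    """
    return {
        # Consultation text only: no name, patient ID, address, or
        # unrelated history leaves your system.
        "consultation_note": patient_record["consultation_note"],
        "note_date": patient_record["note_date"],
    }

record = {
    "patient_name": "…",
    "patient_id": "…",
    "address": "…",
    "full_history": ["…"],
    "consultation_note": "Presented with persistent cough, two weeks…",
    "note_date": "2025-03-14",
}
payload = minimise_for_summarisation(record)  # only two fields go to the vendor
```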
Local deployment versus cloud API. AI tools that process data locally within your infrastructure, rather than sending it to a cloud API, reduce third-party processor risk substantially. This is more relevant for administrative AI than for SaaS clinical tools, where local deployment is rarely an option. For internal AI tools being built or configured by your team, local model deployment should be the default consideration for anything touching health data.
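For illustration, a sketch of calling a model served inside your own infrastructure rather than a cloud API. It assumes a local Ollama instance on its default port; any locally hosted inference server follows the same pattern, and the model name is an example.

```python
import requests

def summarise_locally(consultation_note: str) -> str:
    """Send the note to a model running inside your own network boundary."""
    response = requests.post(
        "http://localhost:11434/api/generate",  # local Ollama HTTP API
        json={
            "model": "llama3.1",  # example model name
            "prompt": f"Summarise this consultation note:\n\n{consultation_note}",
            "stream": False,
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]
```

The structural point is the network boundary: the note never leaves infrastructure you control, so this step involves no third-party processor and no Chapter V transfer.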
Human oversight mechanisms. The EU AI Act requires high-risk AI systems to include features allowing human oversight, including the ability for a human to intervene, override, or stop the system. Build your deployment architecture around this requirement. An AI system that produces clinical summaries for a clinician to review before anything is committed satisfies it; the same system, configured to automatically update patient records without review, may not.
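A sketch of that review-gate pattern, with hypothetical names: AI output lands in a pending state, and only an explicit decision by a named reviewer releases or blocks it, with an audit trail covering the override path.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DraftSummary:
    """An AI-generated summary that cannot touch the record until reviewed."""
    patient_id: str
    ai_text: str
    status: str = "pending_review"
    audit_trail: list[str] = field(default_factory=list)

def _log(draft: DraftSummary, event: str) -> None:
    draft.audit_trail.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

def approve(draft: DraftSummary, reviewer: str, final_text: str) -> str:
    """A named clinician approves (and may edit) before anything is committed."""
    draft.status = "approved"
    _log(draft, f"approved by {reviewer}")
    return final_text  # only now is the text handed to the record system

def reject(draft: DraftSummary, reviewer: str, reason: str) -> None:
    """Override path: the human can stop the system, and the stop is logged."""
    draft.status = "rejected"
    _log(draft, f"rejected by {reviewer}: {reason}")
```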
Healthcare AI offers genuine operational value, and the goal here is not to block deployment but to structure it so your organisation is protected. Organisations that do the DPIA, document the legal basis, and build in oversight mechanisms are in a strong position with regulators. Those that deploy first and assess later are not.