Thanks to Elena Mandarà for collaborating on this article

On October 12, 2023, the Italian Data Protection Authority (“Garante per la protezione dei dati personali,” hereinafter the “Garante”) published a document setting forth ten basic principles to be followed by the national healthcare system when using artificial intelligence (the “AI Guidelines”).

In a nutshell, the AI Guidelines clarify how principles regulating the processing of personal data under GDPR[1] should be applied when artificial intelligence systems are implemented within the national healthcare system.

The Garante also cited the Proposal for an EU Regulation on Artificial Intelligence (“AI Act”), currently under discussion, according to which AI systems that have an impact on health should be classified as high-risk systems.

Overall, the use of AI systems in the healthcare sector must be governed by ethical principles, including healthcare professionals’ deontological duties, and should be accompanied by practical recommendations on how those principles can be followed.

Legal basis and role of the parties

The Garante states that the legal basis for the processing shall be identified on the basis of both Article 22 GDPR, which sets forth rules on automated decision-making, and Italian sector-specific data protection rules.

Indeed, Article 22 GDPR provides that automated decision-making, including profiling, cannot be based on the processing of health data unless data subjects have given their explicit consent or the processing is necessary for reasons of substantial public interest on the basis of EU or Member State law. Accordingly, Article 2-sexies of the Italian Data Protection Code (Legislative Decree No. 196/2003) allows the processing of health data when permitted by EU or national law (including regulations and administrative decisions, to the extent that they specify the types of data processed, the operations allowed, the relevant public interest pursued, and specific measures to protect the fundamental rights and freedoms of data subjects).

The controller should also ensure that any third party to whom the data is communicated relies on a proper legal basis as described above. For this purpose, the roles of the parties must be properly identified, paying particular attention both to legal duties and to the activities actually carried out by those involved in the processing. It must be kept in mind that (i) a national AI system in the healthcare sector could be accessed by many entities acting as autonomous data controllers for different purposes and (ii) a public entity acting as data controller may appoint data processors, including private entities.

Privacy-by-design and privacy-by-default

The principle of privacy-by-design is crucial when deploying AI systems in the healthcare sector: adequate technical and organizational measures must be implemented to ensure that the processing is proportionate to the public interest pursued.

Additionally, implementing adequate measures is necessary to ensure the integrity and confidentiality of data and to protect data from unauthorized or unlawful processing, accidental loss, destruction, or damage. Risks must be evaluated in light of the characteristics of databases and analytics models.
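
As a practical illustration of one such technical measure, the sketch below shows keyed pseudonymization of patient identifiers, a safeguard expressly mentioned in Article 32 GDPR. It is a minimal example, not taken from the AI Guidelines; the key handling and the sample identifier are illustrative assumptions.

```python
import hashlib
import hmac

# Assumption: in production the key would come from a managed secret store,
# not a constant in the source code.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Keyed hash (HMAC-SHA256) so records can be linked across datasets
    without storing or exposing the raw identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Illustrative, synthetic Italian fiscal code (not a real person's).
print(pseudonymize("RSSMRA85T10A562S"))
```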

Particular attention should be paid to potential biases arising from the use of machine learning techniques. In the AI Guidelines, the Garante provides useful insights on how to mitigate these risks, which are connected not only to the quality of the data used to train AI systems but also to the logic applied by algorithms: algorithms may identify associations and trends in human behavior that ultimately result in biased decisions. To address these risks, the Garante suggests taking them into account in a data protection impact assessment (“DPIA”) and describing the logic the algorithms apply to generate the data and services delivered through the AI systems implemented.
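
By way of illustration, the sketch below shows one simple way to surface potential bias in a model’s outputs so that it can be documented in a DPIA. It is a minimal example under stated assumptions, not a method prescribed by the Garante; the names (`y_pred`, `groups`) and the 80% threshold (the common “four-fifths” heuristic) are illustrative.

```python
from collections import defaultdict

def positive_rate_by_group(y_pred, groups):
    """Share of positive predictions per demographic group."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(y_pred, groups):
        counts[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / counts[g] for g in counts}

def disparate_impact_flags(y_pred, groups, threshold=0.8):
    """Flag groups whose positive rate falls below `threshold` times the
    highest group's rate (the 'four-fifths' heuristic)."""
    rates = positive_rate_by_group(y_pred, groups)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Synthetic example: predictions of eligibility for a follow-up screening.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]
print(disparate_impact_flags(y_pred, groups))  # {'A': False, 'B': True}
```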

Main principles on the use of algorithms

In line with Italian case law on algorithmic decisions, the Garante summarized three main principles that apply to the use of algorithms and AI in the performance of tasks in the public interest:

  • Transparency, meaning that data subjects have the right to be aware of the existence of automated decision-making tools and to be informed about the underlying logic. The AI Guidelines identify several measures to make AI systems transparent, such as:
    • ensuring that the legal basis of the processing is clear, foreseeable, and knowable to the data subjects;
    • consulting stakeholders and data subjects in carrying out a DPIA;
    • publishing, at least in part, the results of the DPIA;
    • providing data subjects with clear, straightforward, and concise information, pursuant to Articles 13 and 14 GDPR;
    • providing data subjects with additional information, such as the phase in which the processing is carried out (i.e., training or application), whether healthcare professionals are required to use AI-based healthcare systems, and the diagnostic and therapeutic benefits of such systems;
    • ensuring that data processing tools deployed for therapy purposes are used only at the request of a healthcare professional;
    • regulating a healthcare professional’s liability related to the choice to rely on AI systems to process their patients’ health data.
  • Non-exclusivity principle, meaning that humans must have control over automated decisions (known as having a “human in the loop”). Interestingly, the European Data Protection Board and the European Data Protection Supervisor in their Joint Opinion 05/2021 on the AI Act highlighted the importance of human control over AI systems to avoid risks. In the healthcare sector, the Garante considers it pivotal to ensure such control in the training of algorithms.
  • Principle of algorithmic non-discrimination, meaning that only reliable AI systems shall be used and their accuracy must be checked periodically to mitigate the risk of errors and discrimination. This principle is strictly related to the reliability of AI systems, which depends first of all on the quality of the data: data must be accurate and kept up to date, and measures must be implemented to promptly rectify or erase inaccurate data (a minimal monitoring sketch follows this list).
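
The sketch below illustrates what such a periodic check might look like in practice, combining an accuracy measurement against verified outcomes with a scan for outdated records. It is a minimal sketch under stated assumptions, not a procedure set out in the AI Guidelines; the 0.90 accuracy floor, the one-year freshness window, and the field names are all illustrative.

```python
from datetime import date, timedelta

ACCURACY_FLOOR = 0.90                  # assumed acceptance threshold
MAX_RECORD_AGE = timedelta(days=365)   # assumed freshness requirement

def accuracy(y_true, y_pred):
    """Share of predictions matching the verified outcome."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def stale_records(records, today):
    """Ids of records older than MAX_RECORD_AGE: candidates for
    rectification or erasure under the data quality principle."""
    return [r["id"] for r in records if today - r["updated"] > MAX_RECORD_AGE]

def periodic_check(y_true, y_pred, records, today=None):
    """One monitoring run; a real system would open a review ticket."""
    today = today or date.today()
    acc = accuracy(y_true, y_pred)
    stale = stale_records(records, today)
    if acc < ACCURACY_FLOOR or stale:
        print(f"Review needed: accuracy={acc:.2f}, stale records={stale}")
    return acc, stale

# Synthetic data for illustration only.
records = [{"id": 1, "updated": date(2023, 1, 10)},
           {"id": 2, "updated": date(2021, 5, 2)}]
periodic_check([1, 0, 1, 1], [1, 0, 0, 1], records, today=date(2023, 10, 12))
```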

Data Protection Impact Assessment (DPIA)

Finally, the Garante stressed that when an AI system is implemented in the healthcare sector, the data controller must conduct a DPIA. Notably, the DPIA is considered crucial both for assessing the level of risk to data subjects’ rights and freedoms and for identifying adequate measures to mitigate such risk. Specifically, a DPIA shall take into account the risks related to a database containing the health data of the entire national population (e.g., loss of data quality, withdrawal of consent, re-identification of data subjects), the risks arising from using mathematics-based algorithms to identify general trends in human behavior from the processed data, and the risks related to profiling activities carried out to make automated decisions affecting individuals’ health status.
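
On the re-identification risk a DPIA must weigh, the sketch below shows a k-anonymity check over quasi-identifiers, one common way to quantify how easily records could be linked back to individuals. It is a minimal illustration, not a metric mandated by the Garante; the column names and the threshold of k = 5 are assumptions.

```python
from collections import Counter

def smallest_equivalence_class(rows, quasi_identifiers):
    """Size of the smallest group of records sharing the same combination
    of quasi-identifier values (the dataset's k-anonymity)."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(groups.values())

# Synthetic records for illustration only.
rows = [
    {"age_band": "40-49", "zip3": "001", "diagnosis": "X"},
    {"age_band": "40-49", "zip3": "001", "diagnosis": "Y"},
    {"age_band": "70-79", "zip3": "002", "diagnosis": "Z"},
]
k = smallest_equivalence_class(rows, ["age_band", "zip3"])
print(f"k-anonymity = {k}")  # k below 5 would flag elevated re-identification risk
```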

[1] Regulation (EU) 2016/679 (General Data Protection Regulation)