On January 18, 2024, the WHO published its Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models, an almost 100-page document, including over 20 pages of footnotes, that takes a deep dive into the risks and challenges of Large Multi-Modal Models (LMMs) developed, provided, and/or deployed for healthcare purposes. It then sets out extensive guidance for developers, providers, and deployers, as well as governments, on how to address and mitigate these risks.
This article gives a brief overview of the risks the WHO surfaces in its Guidance and then focuses on the governance proposals that pertain to data protection. This approach makes clear that data protection is far from the only concern, but given the extensive scope of the WHO’s Guidance, we zero in on just one, giving it the space a thorough analysis requires.
Risks and Challenges Posed by LMMs in Health Care
Here are two lists of challenges the WHO draws attention to in the Executive Summary of the Guidance on Ethics and Governance of LMMs:
In addition to the risks and challenges listed above, the WHO points out that LMMs’ water and carbon footprints add to the health challenges of middle- and low-income communities in particular, due to the immense compute power required to train and run LMMs.
Privacy Risks in Detail – Uses
As mentioned, privacy protection is one among many risks with which the Guidance is concerned. Let’s take a look at the importance the Guidance places on privacy. Table 1 lists potential privacy risks in one of the LMM use cases, patient-guided use, and Table 2 includes erosion of trust as a result of privacy violations as a risk to the healthcare system overall. With foreseeable reliance on LMMs in the provision of healthcare services, the concern is that a lack of trust in LMMs’ ability to preserve privacy will translate into decreased trust in the healthcare system as a whole, as people may come to perceive healthcare as accessible only at the expense of their privacy.
The body of the Guidance makes it clear that privacy and bias are the two risks that must be addressed in all three phases of AI: development, provision, and deployment.
Privacy also features in the first of the six ethical principles on which the WHO reached consensus, following its established method for developing guidance.
Turning first to the risks for privacy associated with the use of LMMs in healthcare, the Guidance cites a study suggesting that “transitioning from a large language model that is used for answering medical questions to a tool that can be used by healthcare providers, administrators, and consumers will require considerable additional research to ensure the safety, reliability, efficacy and privacy of the technology.” In other words, the WHO cautions that the privacy posture of existing LMMs likely falls considerably short of what healthcare requires, making them ill-suited for such use at this time.
Here is why: First, the Guidance draws attention to the fact that the use of LMMs has, in the recent past, frequently led to the disclosure of personal, health, and other confidential information, indicating that sufficient guardrails to protect such information are not in place. The WHO notes further that once disclosed to the LMM developer, the data usually cannot be retrieved, as future iterations of the model may be trained on it despite a potentially lacking legal basis for that use. Another problematic avenue of disclosure in the context of LMM use is sharing sensitive information with other users of the LMM rather than (just) with the company developing it.
Second, on the topic of a legal basis for the use of personal data, it remains an unsolved issue whether consent by individuals is required for the training of LMMs and, if so, how it can be obtained effectively. The WHO suggests that, instead of extensive web scraping, developers may have to make do with smaller data sets as a consequence of these regulatory requirements. Using smaller data sets may, on the other hand, increase the risk of re-identification of the individuals whose data they contain.
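To make the re-identification point concrete, one common way to gauge this risk is k-anonymity: the size of the smallest group of records that share the same quasi-identifiers (age band, region, diagnosis code, and the like). The smaller the data set, the more likely it is that some records are unique. Below is a minimal sketch using pandas; the column names and records are entirely made up for illustration:

```python
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    # k is the size of the smallest group of records sharing the same
    # quasi-identifier values; a low k means someone is easy to single out.
    return int(df.groupby(quasi_identifiers).size().min())

# Hypothetical training extract: in a small sample, one record is unique.
records = pd.DataFrame({
    "age_band":  ["30-39", "30-39", "70-79", "30-39"],
    "region":    ["NY",    "NY",    "CA",    "NY"],
    "diagnosis": ["J45",   "J45",   "I10",   "J45"],
})
print(k_anonymity(records, ["age_band", "region"]))  # -> 1
```

A data set with k = 1 contains at least one individual who can be singled out by those attributes alone, which is why shrinking the training corpus can cut against privacy unless it is paired with generalization or aggregation.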
Here is the complete list of privacy issues the WHO sets out in its Guidance against the backdrop of the General Data Protection Regulation, from which it concludes that LMMs “may never be compliant with the General Data Protection Regulation or other data protection laws”:
- LMMs scraped and used the personal data of individuals from the internet without their consent (and without a ‘legitimate interest’ for collecting such data);
- LMMs that do not have an appropriate “age-gating” system to filter out users under the age of 13 and those aged 13–18 for whom parental consent has not been given;
- LMMs that cannot inform people that they are using their data or give them the right to correct mistakes, to have their data erased (the “right to be forgotten”) or to object to use of such data;
- LMMs that are not fully transparent in their use of sensitive data provided to a chatbot or other consumer interface, although, by law, a user must be able to delete chat log data;
- LMM developers may generally keep the data for longer than required, in violation of the data minimization principle;
- LMMs that cannot prevent breaches of personal information;
- LMMs that publish inaccurate personal information, due partly to hallucinations; and
- LMMs might violate the right to explanation when personal information is used for automated decisions.
As a result of companies’ inability to abide by existing laws, the WHO tells us, a major LMM developer indicated that it may not be possible to offer its LMM in Europe. The WHO is concerned that this trade-off may result in an erosion of privacy rights, with healthcare only accessible to individuals who forgo certain human rights in return. Furthermore, such statements raise serious concerns about forthcoming AI regulations and their effectiveness in reining in some key players in the industry.
Recommendations to Enhance Privacy Protection
Privacy protection is important at every stage of the AI lifecycle: development, provision, and deployment. Consequently, the WHO provides guidance on data protection practices at different stages, directed at different stakeholders.
In addition, it provides guidance to governments on what to consider when regulating AI. We will again focus only on the privacy-related recommendations.
Recommendations for the Development Phase
One concern the WHO raises is a possible lack of expertise among LMM developers in what is required to protect health information, though it concedes that this may well improve over time. Yet it is only the developers of LMMs who have control over which data are included in the training of the model and how they are obtained. Missing the opportunity to do this in a privacy-preserving manner cannot easily be mitigated later on. The Guidance cautions that the work required to properly prepare LMM training data is currently undervalued, given the lack of incentives to do it well. The WHO therefore recommends that developers be held accountable for design flaws at this stage, considering also that this is where the deep pockets are that can meaningfully compensate for downstream harms.
In line with that, the recommendation to governments entails ensuring that mechanisms more stringent than voluntary codes of conduct or ethical commitments are in place, i.e., strict enforcement of applicable privacy laws as well as their revision, considering that these laws were mostly enacted before the emergence of generative AI and thus may not have been intended to apply to the use cases at hand.
We already mentioned above that the WHO recommends considering the use of smaller data sets to enable developers to obtain meaningful consent from the individuals whose data are concerned. A further recommendation for LMM developers is to conduct Data Protection Impact Assessments (DPIAs), which can surface risks arising from the processing of personal information along with effective mitigation strategies.
Irrespective of the size of the training data set, the WHO calls upon developers to follow privacy laws and best practices, including the use of privacy-preserving technologies, when collecting the data. It advises against using third-party sources and, where that is not possible, recommends selecting such third parties carefully. In turn, governments should consider certifying third parties that provide data broker services to ensure that the data collection process is lawful.
To support downstream providers, deployers, and users, LMM developers must be transparent about the data they collect and use, something the WHO remarks is less and less the case in recent LMM releases. Without this knowledge, it is near impossible for users to exercise their privacy rights or for LMM providers to ensure compliance.
Recommendations for the Provision Phase
A provider of an LMM in the healthcare sector is someone who integrates an LMM into a specific healthcare-related application. During the last phase of the EU AI Act negotiations, we saw a push toward regulating only these kinds of uses of LMMs, which fortunately did not prevail. The WHO opines that placing the burden of compliance solely on providers, deployers, and users is misguided, as much of the risk mitigation relating to privacy and many other risks is most effectively, if not only, addressed in the development phase.
Hence, the WHO recommends a collaborative approach between developers and providers to preserve privacy. Similarly, liability for harm should be shared, with the burden of proving compliance falling on the developer and the provider, not the user.
If a developer finds that the use of the LMM in healthcare is not supported by its abilities or is too risky, including because the LMM should not handle health-related personal information, the developer can restrict such use through a licensing regime or by blocking healthcare-related queries made by users directly. Alternatively, a warning might be issued when output contains medical information, including guidance on how to reach a medical professional where necessary.
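What such a restriction could look like in practice is sketched below. This is a minimal illustration, not anything prescribed by the Guidance; the keyword check stands in for what would, in a real system, be a trained medical-topic classifier, and all names are made up:

```python
# Minimal sketch of a developer-side guardrail. The keyword heuristic is a
# placeholder for a proper medical-topic classifier.
MEDICAL_TERMS = {"diagnosis", "symptoms", "dosage", "prescription", "treatment"}

WARNING = ("This response may contain medical information. It is not a "
           "substitute for professional advice; please consult a qualified "
           "healthcare professional where necessary.")

def looks_medical(text: str) -> bool:
    # Crude stand-in: flag any text sharing a word with the medical term list.
    return not MEDICAL_TERMS.isdisjoint(text.lower().split())

def gated_respond(query: str, generate) -> str:
    # Block direct healthcare queries outright, and attach a warning when
    # the model's output nonetheless contains medical content.
    if looks_medical(query):
        return "This assistant is not licensed for medical use."
    output = generate(query)
    return f"{WARNING}\n\n{output}" if looks_medical(output) else output
```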
Governments are also called upon to determine which uses are permitted and which are considered high-risk, as we already see in the forthcoming EU AI Act, for example. The WHO, however, envisions healthcare-specific prohibitions and permissions: even if an AI system were safe to deploy for other purposes, it may not be suitable for healthcare.
On the status quo, the WHO says: “Newly proposed regulations on AI technologies for medical devices in the European Union and the USA will probably integrate ethical principles related to the use of AI in health, including “explainability”, control of bias and transparency. It is unlikely that current chatbots that include LMMs could meet such standards.” We would add that the strict privacy requirements often imposed by health-specific privacy laws and regulations, such as HIPAA, are unlikely to be met by LMMs today.
Recommendations for the Deployment Phase
The deployer of an LMM could be the same entity as the developer or the provider of an LMM or of an application built upon one. The guidance for this phase rests on the insight that LMMs can be used in unanticipated ways or generate responses that change over time. Some of these risks can, again, only be mitigated by the developer or provider, not the user. From a privacy perspective, the risks in this phase relate to the data entered into and output by an LMM.
The Guidance advocates for impact assessments to be conducted for any LMM, regardless of whether it is initially considered low-risk. It also recommends post-release auditing and impact assessments by independent third parties when an LMM is deployed at large scale. These assessments should ideally be published and include outcomes and impacts disaggregated by type of user, for example by age, race, or disability.
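Disaggregation here simply means breaking audit results down by user group rather than reporting a single aggregate figure. A minimal sketch, assuming a hypothetical audit log with one row per reviewed interaction:

```python
import pandas as pd

# Hypothetical audit log: one row per reviewed LMM interaction, labelled
# during a post-release audit. Race or disability would be handled analogously.
audit = pd.DataFrame({
    "age_band": ["18-39", "18-39", "40-64", "65+", "65+", "65+"],
    "harmful":  [False,   False,   True,    False, True,  True],
})

# Harmful-output rate per user group, rather than one overall rate.
print(audit.groupby("age_band")["harmful"].mean())
```

A single overall rate could mask the fact that, in this fabricated example, harmful outputs cluster in the older age bands.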
Healthcare providers using LMMs must be appropriately trained, including on privacy risks and the requirements of informed consent. The WHO calls out the risk associated with submitting protected health information to chatbots, which, once entrusted with it, may disclose this information more broadly than is obvious (see above).
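One mitigation a deployer can put in place is to redact identifiers before a prompt ever leaves the clinical environment. The sketch below uses a few regex patterns purely for illustration; a production deployment would rely on a dedicated PHI detection service rather than hand-rolled patterns, and the identifiers shown are fabricated:

```python
import re

# Illustrative patterns only; real PHI detection needs far broader coverage.
PHI_PATTERNS = {
    "MRN":   re.compile(r"\bMRN[- ]?\d{6,10}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    # Replace each detected identifier with a labelled placeholder.
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the history for MRN-00482913, contact jane@example.org"
print(redact(prompt))
# -> "Summarize the history for [MRN], contact [EMAIL]"
```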
Conclusion
The picture the WHO paints of the status quo, that is, of the current suitability of LMMs for use in healthcare applications, is rather bleak. One reason is the uncertainty around the ability of LMMs to protect health-related information. More likely than not, current LMMs violate data protection requirements such as consent, data minimization, retention, and disclosure restrictions.
The good news is that the WHO’s Guidance provides plenty of ideas on how to improve the status quo. As we know, regulation is usually slow to be enacted, and while voluntary standards are likely insufficient in the long term, LMM developers, providers, and deployers are called upon in the meantime to take steps to preserve individual privacy.
Private AI can help with that. Using the latest advancements in Machine Learning, our algorithms can accurately identify protected health information and other personal information in large, unstructured data sets and then replace these data points to facilitate privacy compliance. To see the tech in action, try our web demo, or request an API key to try it yourself on your own data.