World Health Organization: Guidance on Large Multi-Modal Models

We provided input to the World Health Organization's comprehensive guidance on the governance of large multi-modal models (LMMs), a subset of generative AI, for applications in healthcare. The guidance builds on WHO's 2021 framework on AI ethics and aims to help member states assess the implications of LMMs and define regulatory pathways for their ethical, safe, and equitable deployment.

Key Stats and Insights

LMMs have seen the fastest adoption of any consumer technology in history: ChatGPT reached 100 million users within two months of launch (by January 2023). These models can process and generate multiple data types—text, images, audio, video—and are being piloted across clinical care, education, clerical tasks, and research. Several LMMs have passed the United States Medical Licensing Examination (USMLE), though performance remains variable outside well-represented datasets.

Training datasets for medical applications are large and diverse: examples include models trained on 30,000+ medical case reports, 100,000+ chest X-rays, and millions of electronic health records. Performance evaluations show mixed outcomes; a 2023 U.S. study found that evaluators preferred ChatGPT's responses over physicians' responses in roughly 80% of the 195 cases drawn from an online medical forum.

Risk Overview

  • Accuracy: Studies report hallucination rates of 3–27%, depending on the model and prompt.

  • Bias: Training data often overrepresent high-income regions; e.g., genetic datasets skew toward people of European descent.

  • Automation Bias: Physicians may overly rely on AI outputs, risking clinical errors.

  • Environmental Impact: LMMs contribute significantly to carbon and water footprints, raising sustainability concerns.

Workforce implications cut in several directions. LMMs may displace certain health roles while simultaneously demanding new skills and retraining initiatives, and their development can depend on low-wage, psychologically taxing data-labeling work. Patient autonomy is also at risk through compromised informed-consent processes and epistemic injustice, with marginalized groups disproportionately vulnerable.

Governance Framework

WHO recommends structured action across the AI lifecycle: Development → Provision → Deployment.

Development Phase – Responsibilities

  • Developers

    • Conduct data protection impact assessments

    • Involve diverse stakeholders

    • Optimize energy efficiency

    • Provide transparency on training data

  • Governments

    • Enact strong data protection laws

    • Mandate pre-certification and outcome-based standards

    • Fund open-source LMMs

    • Enforce carbon and water accountability

Provision Phase: Governments should assign regulatory approval authority, require third-party audits and transparency regarding source code and data input, apply consumer protection laws, and prohibit non-trial experimental use without oversight.

Deployment Phase: Deployers (e.g., ministries, hospitals) must avoid LMM use in unfit settings, communicate risks openly, ensure equitable pricing and linguistic access, and train professionals on LMM limitations, cybersecurity, and ethical use.

The WHO framework is grounded in six core ethical principles: Protect Autonomy, Promote Well-being & Safety, Ensure Transparency & Explainability, Foster Accountability, Ensure Inclusiveness & Equity, and Promote Sustainability. To strengthen user protection, the guidance recommends that governments consider strict liability regimes, no-fault compensation mechanisms, and presumptions of causality to reduce barriers to redress. WHO further urges equitable international governance, involving not only high-income nations and corporations but also low- and middle-income countries, civil society, and multilateral institutions through a model of networked multilateralism.

• • •

References

¹ World Health Organization. "Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models." Geneva: WHO; 2024. ISBN: 978-92-4-008475-9.

² World Health Organization. "WHO releases AI ethics and governance guidance for large multi-modal models." January 18, 2024.

³ World Health Organization. "Ethics and governance of artificial intelligence for health: WHO guidance." Geneva: WHO; 2021. ISBN: 978-92-4-003419-8.

⁴ World Health Organization. "WHO issues first global report on Artificial Intelligence (AI) in health and six guiding principles for its design and use." June 28, 2021.

⁵ WHO IRIS (Institutional Repository for Information Sharing). "Ethics and governance of artificial intelligence for health." 2021.

⁶ World Health Organization, Chief Scientist and Science Division, Health Ethics & Governance Unit. "Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models." 2024.

⁷ National Center for Biotechnology Information, PMC. "Large multi-modal models – the present or future of artificial intelligence in medicine?" PMC10915764. March 2024.

⁸ Nature Digital Medicine. "Global Initiative on AI for Health (GI-AI4H): strategic priorities advancing governance across the United Nations." 2025.

⁹ National Center for Biotechnology Information, PMC. "The ethics of advancing artificial intelligence in healthcare: analyzing ethical considerations for Japan's innovative AI hospital system." PMC10390248. 2023.