What the AI report gets right—and what should alarm you

September 19, 2024 by Bilal Mateen, MBBS, MPH, PhD

PATH’s Chief AI Officer reflects on the United Nations’ vision for artificial intelligence (AI) and global equity.


The final report of the United Nations Secretary-General’s High-level Advisory Body on AI, authored by a panel of 39 experts and contributed to by a wider group of 100 experts, including PATH’s Chief AI Officer Dr. Bilal Mateen. Photo: United Nations.

This morning, the United Nations Secretary-General’s High-level Advisory Body on Artificial Intelligence (AI) released its final report, offering 99 pages of expert analysis on the current state of AI, its potential to advance the Sustainable Development Goals (SDGs), and the challenges ahead. The report also outlines seven ambitious recommendations for the global community.

  1. Establish an International Scientific Panel to provide impartial, reliable scientific knowledge about AI.
  2. Foster a policy dialogue on AI governance to promote regulatory interoperability.
  3. Facilitate an AI standards exchange to ensure technical interoperability of AI systems across borders.
  4. Create a global AI capacity development network, offering training, compute resources, and AI datasets to researchers and social entrepreneurs.
  5. Develop a global AI fund to address gaps in capacity and collaboration.
  6. Establish a global AI data framework to standardize data-related definitions, principles, and stewardship.
  7. Establish a small AI office within the UN Secretariat to support and catalyze the implementation of these proposals.

Though further work remains, the report marks a significant step forward in addressing the complex issues surrounding AI governance, a conversation initiated in 2020. The Office of the UN Secretary-General’s Envoy on Technology (UN OSET), led by Amandeep Singh Gill, and the 39 industry, government, and social impact leaders who contributed to this effort (referred to as the “expert group”) should be commended.

The authors’ humility will stand out to anyone who reads the report’s full text. Its pages offer glimpses into the difficulty of wrestling with such a vast and profound topic, and it is in that spirit that I share my own reflections on what I found most compelling and where I don’t think we’ve yet reached the “right” answer.

The urgent need to address equity in AI governance

If there were ever a moment to sound the alarm that AI could intensify global health inequities—this is that moment.

The report highlights this risk, emphasizing both the widespread exclusion of many nations and the pressing challenges that need to be addressed. For example, an examination of inter-regional AI governance initiatives[1] illustrates that while seven countries are signatories to all of them (Canada, France, Germany, Italy, Japan, UK, USA), 118 countries aren’t party to any of them.

Of those 118 excluded countries, 48 are African nations, 44 are from the Asia-Pacific region, and 25 are from Latin America and the Caribbean. As we collectively decide what the guardrails should look like for AI, most of the world doesn’t have a seat at the table, let alone a voice in the room.


Figure 15, along with additional data from the appendices, tells an even more troubling story about who is likely to benefit from AI in health. While 10 percent of the experts polled by the UN OSET (myself included) expected AI-driven transformation to have a positive impact on the SDGs in high-income countries (“H” in the figure below), not a single expert thought the same was true for lower-middle- or low-income countries (“L” in the figure below). Notably, more than twice as many experts expected no positive impact in lower-income countries (29 percent) as in higher-income countries (11 percent).

[Figure 15 from the report: experts’ expectations of AI’s impact on the SDGs, by country income group.]

Looking at AI’s impact today, the report shows that the majority of experts polled (61 percent) believe AI is having either no positive impact in lower-income countries or, at best, only a minor one.

Of course, the challenge is not only the scale of AI’s impact in lower-income countries, but also the time it will take to achieve any impact at all. Figure 14 illustrates the much longer time horizons over which experts expect AI to accelerate scientific discovery in lower-income countries compared with high-income ones. This is just one example of how the global community could leave entire continents and countries behind if we don’t actively work to ensure equitable access and impact.

[Figure 14 from the report: expected time horizons for AI to accelerate scientific discovery, by country income group.]

The data in this report serve as a sobering reminder that we have much to do to level the playing field. They confirm what the global community has instinctively understood for years: it is, by and large, the lack of communications infrastructure, inadequate access to compute, and scarce local pools of adequately trained talent that prevent lower-income countries from fully leveraging this transformative technology (see Figure 16).

[Figure 16 from the report: the barriers preventing lower-income countries from fully leveraging AI.]

Exploring the case for a global AI fund

There is growing recognition of how little capital is available to address the polycrisis that characterizes this particular moment in human history. Even in one of the most robustly funded areas of global development (human health), we are already struggling to figure out how we will reach the ambitious replenishment targets for critical mechanisms like the Global Fund, the International Development Association, and the World Health Organization.

So, is another fund (for AI) the right answer?

Some may argue that AI, as a foundational technology, requires dedicated funding. The report rightly points out that no existing fund specifically addresses the digital divide. But can we truly address the issues of governance and capacity at a foundational level, cutting across all the SDGs, without deeply understanding the specific contexts in which these issues arise?

In other words, is there a one-size-fits-all approach to training or safeguards? Or are the needs of each community of practice different, and thus better addressed by those who understand them most deeply? Moreover, with limited development-focused capital available, we will inevitably need to make hard choices about what to invest in and where. How would a new entity facilitate those difficult conversations about opportunity costs and trade-offs?

One alternative would be to strengthen existing mechanisms, such as the Global Fund to Fight AIDS, Tuberculosis and Malaria, to better manage AI-focused investments. At PATH, we’ve debated whether AI for TB diagnosis is an effective use of our limited development capital, especially in light of results suggesting it isn’t cost-effective. In the same vein, many of the current challenges aren’t new to the health community (the novelty of GenAI aside); the community has long grappled with questions of how to regulate AI (or, more broadly, software) in the medical device sector.

In my work at the Wellcome Trust, I saw firsthand the challenges of building a health-focused talent pipeline in Africa. There was often a misconception that the foundational skills of an informatician, software engineer, or technologist were enough to apply AI in a clinical setting. To suggest that these questions of appropriate governance and effective safeguards are novel in every domain overlooks the investments already made in regulatory science and research within the health sector.

On the other hand, maybe it’s not an either-or scenario. Perhaps the answer isn’t simply a new fund but rather a financing mechanism that complements existing efforts without duplicating them. For example, we don’t need subject-matter-specific compute infrastructure, but we may well need domain-specific strengthening of governance capacity.

In short, I support the argument that we need dedicated investment in AI, but whether a new (standalone) fund is the right mechanism for distributing those resources requires deeper exploration. At PATH, we are excited about the opportunity to support the UN and its partners in designing a solution that allows us to do the most with what we already have.

Foundational vs. functional AI knowledge

Throughout the report, the expert group highlights the need for greater capacity and foundational knowledge, across a range of stakeholders from developers and innovators to regulators and policymakers, of what AI is, its risks, and how to use it appropriately.

This is an incredibly important point, and one that many in the digital health field will recognize: we have increasingly observed failures of good governance and best practice that arise from a merely functional understanding of digital technology rather than a truly foundational one. The difference is exemplified by community health workers who are trained to use smartphone- or tablet-based electronic community health information systems without being adequately trained to understand the risks of dis- and misinformation and predatory lending, or their obligations as stewards of personal information.

Jumping straight to education around AI without first addressing the lack of foundational digital literacy risks failing to fully enfranchise users. And many of these users will be the first and last line of defense against the familiar and novel challenges that will arise as these tools are rolled out in health systems.

Thus, in this regard, the report falls short of fully addressing the need for foundational knowledge. A functional understanding of AI alone is insufficient without a deeper grasp of digital technology and data. Much remains to be done to build that broader foundation of digital literacy, which can then support a fundamental understanding of what AI is and what it can do to help advance the SDGs in all countries, not only high-income ones. I would have loved to see the authors use the impetus of AI to revitalize the campaign to address foundational digital literacy needs.

The role of regulatory cooperation and harmonization

Finally, at the G20 this year, I had the privilege of giving a keynote during the third meeting of Brazil’s Health Working Group, where I made a plea for greater global solidarity on regulatory science as it concerns AI4Health.

Without it, I argued, we risk regulation and good governance becoming a competitive advantage that countries hoard, limiting who benefits from AI. To hear the UN Secretary-General talk about harmonization of standards and cooperation between standard-setting bodies (and to see it reflected in the expert panel’s report) is phenomenal: a truly game-changing initiative if it can be realized. Its importance cannot be overstated.

Conclusion

Often, you find that someone else has perfectly articulated your thoughts on a topic, and I find myself in that position today. I’m sure those of you who got as far as line 211 of the report will agree that it strikes the right balance of hope and realism about the current state of AI, in health and further afield.

“[…] we remain optimistic about the future of AI and its potential for good. That optimism depends, however, on realism about the risks and inadequacy of structures and incentives currently in place. We also need to be realistic about international suspicions that could get in the way of the global collective action needed for effective and equitable governance. The technology is too important, and the stakes are too high, to rely only on market forces and a fragmented patchwork of national and multilateral action.”

As I stated in my introduction, the recommendations for how to address the challenges identified might not be perfect, and—even where they are—the devil will be in the details of operationalizing those calls to action. Regardless, I’m excited to see a meaningful step in the right direction and proud that PATH is so committed to being part of the solution. Alongside all our local and global partners, we will continue working to ensure AI drives positive, equitable change for all. The stakes are too high to do anything else.

[1] Sample: OECD AI Principles (2019), G20 AI Principles (2019), Council of Europe AI Convention drafting group (2022–2024), GPAI Ministerial Declaration (2022), G7 Ministers’ Statement (2023), Bletchley Declaration (2023), and Seoul Ministerial Declaration (2024).