Briefing: AI Governance and the Role of Compute Providers

As AI regulation gains momentum, policymakers are increasingly looking to physical computational resources, such as data centers and semiconductors, as crucial levers for governing AI. The sub-field of AI governance concerned with regulating the physical resources needed to develop AI systems is known as compute governance (compute being short for “computing power”).

To help Mission representatives and civil society actors in Geneva better understand the field, the Simon Institute for Longterm Governance (SI) hosted a briefing titled “AI Governance and the Role of Compute Providers” on May 31, 2024, in collaboration with the Permanent Mission of Costa Rica to the United Nations. The briefing featured two experts: Robert Trager, Co-Director of the Oxford Martin AI Governance Initiative, and Lennart Heim, Associate Information Scientist at RAND.

Lennart opened the briefing by introducing the concept of compute governance, emphasizing its role in the AI triad (compute, data, and algorithms), outlining why compute is particularly well suited as a target for regulation, and discussing strategies for governing compute. He went on to give examples of existing compute governance practices, including export restrictions on semiconductors. Key takeaways from Lennart’s presentation include:

  • Compute has three key properties that make it a good target for regulation: 

    • Excludable: Because compute is a physical, rival resource (hardware can only be used by one actor at a time), access to it can be restricted or denied.
    • Quantifiable: The amount of compute an actor is using can be measured (a toy sketch follows this list).
    • Necessary: Compute is an essential input for developing and running AI applications.
  • There are three key strategies for governing compute: 

    • Monitor Usage: Leverage the quantifiable nature of compute to track usage and identify high-risk systems.
    • Restrict Usage: Leverage the excludable nature of compute to deny certain actors access to resources, for example through export restrictions on AI chips.
    • Promote Usage: Provide subsidized access to resources to enable responsible use and safety research.
  • While compute governance has the potential to be highly effective, it should not be seen as a standalone solution, but as one tool within a broader set of AI governance regimes targeting the other components of the AI triad, i.e. data and algorithms.
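
To make the quantifiability point concrete, here is a minimal sketch of how total training compute might be estimated from hardware specifications; this is the kind of figure a reporting regime could ask developers to disclose. The function, the hardware numbers, and the threshold below are illustrative assumptions, not figures from the briefing.

```python
# Minimal illustrative sketch: estimating total training compute (in FLOP)
# from hardware specifications. All numbers here are hypothetical.

def training_compute_flop(num_chips: int,
                          peak_flop_per_s: float,
                          utilization: float,
                          training_days: float) -> float:
    """Total FLOP = chips x peak FLOP/s x average utilization x seconds."""
    seconds = training_days * 24 * 60 * 60
    return num_chips * peak_flop_per_s * utilization * seconds

# Hypothetical run: 10,000 accelerators at 1e15 FLOP/s peak,
# 40% average utilization, trained for 90 days.
total = training_compute_flop(10_000, 1e15, 0.40, 90)

THRESHOLD_FLOP = 1e26  # illustrative reporting threshold, not a legal figure
print(f"Estimated training compute: {total:.2e} FLOP")
print("Above threshold" if total >= THRESHOLD_FLOP else "Below threshold")
```

Thresholds of this kind are how quantifiability is already operationalized in practice; recent US and EU rules, for example, have keyed certain obligations to the amount of compute used in training.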

Following Lennart’s introduction, Robert discussed the potential for an international reporting and monitoring regime for AI developers and compute providers as a way to ensure accountability for the most advanced AI systems. Under such a regime, AI companies and compute providers would be required to report details on the computing power used by their most advanced AI models to an international body. Such a regime could:

  • Enhance transparency by revealing which companies are using the most computing resources and running the largest training runs;
  • Serve as an enforcement mechanism, allowing compute providers to deny access to their hardware if AI companies fail to meet specified reporting criteria (a toy sketch of this check follows the list);
  • Prevent regulatory arbitrage by establishing international standards on computing limits, stopping companies from relocating to “compute havens” with lax rules, akin to tax havens.
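
As an illustration of the enforcement point above, the sketch below shows a provider-side check that gates hardware access on whether a developer has filed a report. The ComputeReport fields, the names, and the grant_access logic are all hypothetical assumptions, not a design discussed at the briefing.

```python
# Hypothetical sketch: a compute provider gating hardware access on whether
# a customer has filed a usage report with an international body.
from dataclasses import dataclass

@dataclass
class ComputeReport:
    developer: str         # organization training the model
    model_name: str        # identifier for the training run
    training_flop: float   # estimated total training compute, in FLOP

# Reports the (hypothetical) international body has received so far.
filed_reports = {"ExampleLab": ComputeReport("ExampleLab", "run-01", 3.1e25)}

def grant_access(developer: str) -> bool:
    """Provider-side check: deny hardware access to developers that
    have not met the reporting requirement."""
    return developer in filed_reports

for customer in ("ExampleLab", "UnreportedLab"):
    status = "granted" if grant_access(customer) else "denied"
    print(f"Access for {customer}: {status}")
```

In practice the criteria would be richer than a simple lookup, but the design point it illustrates is the one made above: compute providers sit at a natural chokepoint where reporting requirements can be enforced.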

During the Q&A, audience members asked about regulatory oversight across borders, how compute governance might contribute to benefit-sharing, and how compute can be used to effectively monitor AI labs, especially as labs increasingly become their own compute providers. Many questions also centered on the linkages between compute governance and data governance, as well as the advantages large tech companies currently enjoy due to the lack of international standards on data usage.

To close the session, SI’s Director of Policy, Belinda Cleeland, brought the discussion back to the UN context. She noted that many have argued against creating new institutions before the science is fully developed, and asked Robert and Lennart for their advice to international actors on where to start.

Lennart argued that, even amid uncertainty, it is useful for the international system to discuss AI and compute governance, as such discussions raise baseline understanding. Other useful actions might include establishing an international agreement on acceptable risk thresholds and creating a forum to harmonize national-level reporting efforts. Robert suggested that an ideal approach for the international system would be a path for engaging with the various AI safety institutes emerging around the globe, perhaps through an AI office at the UN to aid capacity building. Whatever the path forward, it is crucial that the UN aligns its efforts with other work happening globally.

Here at the Simon Institute, we’re closely monitoring the Global Digital Compact, responding to drafts as they arise, supporting Member States in understanding AI governance, and bridging actors who are thinking about institutional proposals for international AI governance. You can learn more about our work here.