The Executive Office of the Secretary-General (EOSG) of the United Nations is publishing a series of policy briefs to inform the Our Common Agenda processes. The Simon Institute for Longterm Governance teamed up with Riesgos Catastróficos Globales to review the policy briefs and provide substantive input that contributes to the impact of these efforts.
Summary of the policy brief
- On May 25th, 2023, the EOSG published its fifth policy brief in this series, on the Global Digital Compact.
- The Global Digital Compact is an intergovernmental process led by two Member State co-facilitators, currently Sweden and Rwanda, with the aim of “shaping a shared vision on digital cooperation by providing an inclusive global framework.” (Policy brief, p. 2) The process builds on the Secretary-General’s Roadmap for Digital Cooperation and the ‘Our Common Agenda’ report, and is to be included in a broader Pact for the Future, to be adopted at the Summit of the Future in September 2024.
- This policy brief presents objectives, principles and recommendations to be included in the Global Digital Compact. It focuses on:
- Closing the digital divide and advancing the Sustainable Development Goals
- Ensuring an open and safe online environment for all
- Governing artificial intelligence for the benefit of humanity
- Some of the principles proposed in this brief include:
- Closing the digital divide and empowering people through capacity-building
- Making targeted investments in digital public infrastructure and services
- Making human rights the foundation of a digital future
- Ensuring that data are governed for the benefit of all and in ways that avoid harming people and communities
- Ensuring the transparent, safe and reliable design of artificial intelligence (AI)
Our response
We appreciate the comprehensive and detailed proposals presented in this policy brief. Leveraging our expertise in the field of Artificial Intelligence (AI), we focus our response on how the Global Digital Compact can most effectively frame AI governance issues to help States achieve the best possible outcomes.
We emphasize the importance of addressing both current harms and future risks related to AI, paying particular attention to the significant risks posed by unregulated AI. We further explain why international efforts must prioritize the governance of AI foundation models. Finally, we highlight the importance of ensuring that AI governance discussions are inclusive, with major AI developers, their host states, and low- and middle-income countries all included in key dialogues.
It is important to note that while our primary focus lies within the realm of AI governance, we recognize the value and importance of the broader spectrum of proposals to ensure a comprehensive and inclusive Global Digital Compact.
AI’s promise demands tackling current harms and future risks
AI holds enormous potential to benefit society and could unlock extraordinary opportunities across all sectors, significantly advancing the Sustainable Development Goals. If well governed, AI could increase economic efficiency, support disaster response efforts, and even help mitigate the impacts of climate change. However, as highlighted in the policy brief, AI also has the potential to be deeply disruptive, creating an urgent need to better understand its harms and effective mitigation strategies. This need is being tackled by the growing field of AI Ethics, which seeks to create fair and accountable AI systems that address present and near-term harms such as:
- Threats to privacy and other human rights stemming from the use of AI in large-scale systems to monitor, track, and surveil citizens, especially by authoritarian states.
- Economic disruption and job losses arising as AI automation replaces millions of jobs and disrupts global labor markets.
- The rise of misinformation and the breakdown of civil public discourse as AI is used to exploit social media algorithms, perpetuate false narratives, and create deepfakes.
- Entrenched discrimination resulting from the continual training of AI systems on datasets that reflect and perpetuate existing societal biases.
- Widespread destruction or international conflict resulting from the misuse or failure of AI-powered autonomous weapons systems.
With the rapid development of AI, and given the generative, opaque nature of the technology, it is not enough to focus on present and near-term harms. We must also acknowledge the less visible potential risks of AI, especially those that could lead to the loss of human control and pose an existential risk to humanity. The growing field of AI Safety focuses on mitigating such risks by researching how to create safe and secure AI systems aligned with human interests and values.
To reduce harms and risks, we need to focus on foundation models
The most powerful AI systems, known as ‘foundation models’, can be used in and adapted to a wide range of applications for which they were not specifically designed. They form the basis of many applications, including OpenAI’s ChatGPT, Microsoft’s Bing, many website chatbots and financial service bots, as well as image generation tools such as Midjourney or DALL-E. Because foundation models underpin an increasingly broad range of applications, any errors or issues at the foundation-model level propagate to all applications built on top of that model. These intricacies make their regulation both challenging and essential.
The generative capabilities of foundation models bring with them a new set of important risks and exacerbate the harms discussed above. The risks specific to foundation models stem mainly from unintended consequences. With foundation models, we are creating machines that operate as black boxes, with complex internal mechanisms that escape human understanding and control. This limits society’s ability to adapt and respond effectively to the consequences triggered by AI, as AI’s outcomes may end up misaligned with core human ethics and values. To correct for biases and prevent catastrophic outcomes, we therefore need to ensure that AI systems are transparent and interpretable.
AI ethics and AI safety are fundamentally interlinked, not opposed
There is a striking parallel between the AI governance dilemma we face today and the climate governance dilemma we have been dealing with for decades. Governments have long focused on responding to and mitigating climate-related crises such as droughts, floods, hurricanes and wildfires. Yet it took a long time for the international community to accept that reducing CO2 emissions, the root cause of these crises, deserved equal, if not more urgent, attention. In a similar vein, AI governance needs to address both the present harms AI is causing, such as privacy violations, discrimination, and labor market disruptions (AI ethics), and the root cause of these harms and of the existential risk facing us all (AI safety).
Focusing solely on the downstream harms caused by AI without addressing the root issue of foundation models would resemble attempting to manage climate change through adaptation measures alone, without curbing CO2 emissions. While the focus should be on the underlying cause, this does not diminish the significance of addressing and remedying the immediate harms.
Although the policy brief mentions many of the major harms caused by AI, it treats the technology in broad terms, without acknowledging that not all AI systems carry the same level of harm and risk.
We therefore suggest that the Global Digital Compact acknowledge the importance of foundation models for both AI ethics and AI safety, and their interwoven nature. It should, crucially, recognize the scale of harm and risk we must confront – from compounding inequalities to human extinction – and single out foundation models as a priority focus for AI governance efforts and the most effective pathway to AI regulation.
AI governance must be inclusive to be effective
It is essential to recognize that cutting-edge AI technologies, such as foundation models, are now developed almost exclusively by a handful of private companies in high-income countries, especially the United States. Currently, only a few laboratories in the world have the resources to develop such AI models, and their regulation will therefore primarily be implemented by the States in which those companies operate. As such, the success of AI governance depends on how much these States cooperate and participate in relevant processes.
Nevertheless, we cannot forget that the deployment of AI will have significant consequences, both positive and negative, all over the world. For this reason, all States must pay attention to the AI governance debate, and the multilateral system must ensure that all voices can be heard in the development of AI regulation. For two reasons in particular, it is especially important for low- and middle-income countries (LMICs) to actively participate in the development of AI governance.
Firstly, there is the issue of the digital divide, which the policy brief acknowledges is ‘still a gulf’. The existence of this divide heightens the risk that a significant part of the global population will be left behind, as LMICs with lower levels of connectivity could miss out on many of the benefits AI has to offer. This has the potential to both amplify existing inequalities and create new ones.
Secondly, there is the issue of societal and cultural values. If AI systems are created with biases and assumptions that reflect the values and experiences of their developers (predominantly from high-income countries), they will spread these values and experiences, perpetuating existing inequalities and failing to address the unique challenges LMICs face.
AI governance at the multilateral level should therefore support simultaneous progress in bringing together the main AI-developer States and fostering LMIC participation. These two goals must not be seen as mutually exclusive but as fundamentally interlinked.
The policy brief notes the need for a global and multidisciplinary conversation about AI governance but does not mention the particular importance of involving LMICs.
The Global Digital Compact should recognize that AI will have global consequences but that its benefits, harms and risks are not evenly distributed. Meaningful representation of LMICs in shaping AI governance is therefore needed to mitigate adverse effects and promote equitable outcomes for all.
The next steps for AI governance
By convening experts, policymakers, and industry leaders, the UN can create an inclusive and collaborative environment to shape the future of AI governance. That vision is already clear in the policy brief, which crucially recognizes that the UN’s role should not lie in dictating strict rules but rather in promoting a multilateral regime that encourages the responsible development and deployment of AI. We have argued that the plan of action proposed in the policy brief would be greatly strengthened by: acknowledging both current harms and the potential existential risks posed by AI; focusing on the governance of foundation models as a priority; and ensuring that the development of AI governance is inclusive, with the active participation of LMICs in particular.
In addition, we would like to highlight several important proposals made by the Secretary-General in the Global Digital Compact Policy Brief and suggest how these could be complemented or refined for greater impact while preserving agility:
- The appointment of a high-level advisory board on AI to ensure alignment of AI development with human rights and values, and to provide practical guidance for researchers and innovators in creating responsible and trustworthy AI.
→ We welcome the recent announcement of this high-level advisory board for artificial intelligence. We recommend that it focus on foundation model governance, concentrating on the analysis of catastrophic risks that could irreversibly shape the course of humanity and highlighting the gaps where multi-stakeholder action is required.
- The development of a framework for the agile governance of AI combining international and national technical standards and norms.
→ This framework needs to be developed with the active contribution of diverse stakeholders. This includes the States that can directly regulate the private labs developing foundation models, as well as academic experts and civil society. The framework should provide measures to monitor training runs, audit AI systems, and ensure LMICs benefit fully from AI developments.
- A Digital Cooperation Forum to regularly assess the implementation of the Global Digital Compact and keep pace with technological developments.
→ This forum should promote the participatory governance of AI development and include the necessary capacity-building support for low- and middle-income countries to develop national expertise on the topic, as well as train talent to develop AI applications that help reduce inequalities. The input gathered by this forum would then guide the development of foundation models and deployment of applications, ensuring they align with human aspirations and values and accurately represent the world’s invaluable diversity.
By following this path forward, and in conjunction with the other objectives and actions to be included in the Global Digital Compact, the multilateral system can lead the way towards a multilateral regime complex for AI.