February 13, 2026

Appointment of the Independent International Scientific Panel on AI

  • Commentary
  • AI Governance
  • Multilateralism
  • Institutional Design

On February 3, the UN Secretary-General proposed a list of 40 experts, drawn from a pool of about 2,700 nominations, for the Independent International Scientific Panel on AI. On February 12, the UN General Assembly confirmed their appointment by a vote of 117 to 2. This appointment marks another milestone in setting up the first global scientific assessment of AI opportunities, risks, and impacts for policymakers.

While we note that the selection process could have been more transparent, we do welcome the Panel’s confirmation. The Panel includes some of the world’s leading AI experts, including Prof. Yoshua Bengio (#1 most cited AI scientist in the world) and Prof. Bernhard Schölkopf (#18). We congratulate all Panel members on their selection and wish them the best in this important endeavor.

At the Simon Institute, we have persistently engaged in the negotiations to set up the Independent International Scientific Panel on AI. As Panel Members begin implementing the mandate, we want to highlight a few key success factors and next steps based on the Panel’s Mandate.

Key success factors for the Panel

a) Scientific independence

As the Mandate explicitly states, all members of the Panel have been selected in their personal capacity. The credibility of the Panel depends on its ability to follow the scientific evidence wherever it leads without political interference. The general guiding principles of the Panel are defined as “independence, scientific credibility and rigour, multidisciplinarity and inclusive participation.” 

The modalities resolution offers some guidance on the organizational elements expected of the Panel. Where it is silent, however, it is up to the Panel members to self-organize in whichever way is most conducive to fulfilling their core task: the timely synthesis of large volumes of AI-related evidence for policymakers.

b) Policy-relevance

The modalities resolution mandates that the Panel’s annual report should be policy-relevant. That means it should provide evidence on topics and questions that are relevant to policymakers. This most notably includes the topic areas 4a to 4g, which Member States have designated as particularly relevant for the Global Dialogue on AI Governance, and which should be informed by the findings of the Panel.

Policy-relevance can go as far as highlighting different technical and policy options. However, the Panel’s report must stay away from any policy prescriptions. It is the job of the scientists to provide a state-of-the-art summary of expected risks, opportunities, and impacts. It is the job of the policymakers to decide on how to weigh trade-offs and respond to uncertainties.

c) Leveraging existing synthesis

The Panel’s Mandate is “synthesizing and analysing existing research”, not producing its own original research. This focus also means that the Panel should fully leverage existing synthesis work where available. The time window and financial resources for report production are limited and, as stakeholders have repeatedly expressed during the negotiation of the mandate, the Panel should leverage synergies rather than duplicate existing efforts.

On risks, the Panel should build on the International AI Safety Report, written by over 100 independent experts with an advisory panel nominated by over 30 countries. On opportunities and impacts, it could draw from materials such as the ITU’s AI for Good Impact Report. On policy options, it could expand on the work of the OECD AI Policy Observatory. Graphs of trends and projections may be found at places like EpochAI and the Stanford AI Index.

The Panel should not try to reinvent the wheel. Its unique value lies in bringing these existing efforts together into a single, coherent, globally legitimate assessment, and in tailoring that synthesis to the topics that Member States have identified for the Global Dialogue.

d) Leveraging external experts

The Panel’s 40 Members are trusted by policymakers, they are all experts in their fields, and they are expected to be substantially involved in report writing. At the same time, it would be unrealistic to expect every expert to be an expert on every aspect of AI. There are more than 200,000 scientific publications on AI per year. Similarly, the International AI Safety Report, which has a narrower mandate, already required over 100 independent experts, including writers who were compensated for their time.

To ensure that the report produced by the Panel is state-of-the-art, the Panel should involve external experts as needed. The Mandate’s guidance that the Panel “may also consult informally with external experts” explicitly allows for this function. However, given the need for speed, this should be organized “informally”. That is, there is no expectation to run a formal (and often time-consuming) call for authors, as is the case for the IPCC and the IPBES.

First Steps for Panel Members

As the selected Panel Members prepare for their first meeting, here are some of the concrete next steps they are expected to take as part of their self-organization.

a) Elect two Co-Chairs

At its first meeting, the Panel should “elect two Co-Chairs from among its members, one from a developed country and one from a developing country”. The Co-Chairs have to participate in an interactive dialogue with the UN General Assembly twice a year. Our recommendation would be to choose Co-Chairs with strong scientific credentials and a strong public profile. The reports of the Independent International Scientific Panel on AI will carry more weight if the Panel is led by figures who are widely recognized within the scientific community and by the public at large.

b) Create working groups

The Panel has the broad open-ended mandate to “issue evidence-based scientific assessments synthesizing and analysing existing research related to the opportunities, risks and impacts of artificial intelligence”. In practice, all scientific synthesis panels with a similarly broad scope divide their tasks into working groups. The modalities resolution also explicitly foresees that the Panel will “establish working groups as needed”.

In our original recommendations report a year ago, we suggested “capabilities & risks”, “opportunities for the SDGs”, and “macroeconomics”, as well as cross-cutting “foresight & forecasting” and a “Global AI Policy Observatory”, as a structure for working groups. Given the composition of the Panel, we no longer think that separate working groups on macroeconomics or foresight & forecasting make sense. Instead, we suggest three working groups broadly covering a) frontier AI capabilities, safety & trustworthiness, b) socioeconomic impacts & AI for sustainable development, and c) ethics & governance. This could cover the Dialogue topics 4a–4g while matching groups to the expertise actually present on the Panel. Foresight elements can be integrated across all groups.

c) Elect up to three Vice-Chairs

The Panel is also invited to self-select from its members “up to three Vice-Chairs, taking into account geographical and gender balance”. In scientific assessments, a Vice-Chair would typically head a working group. So, it might make sense to create three working groups with a Vice-Chair each. If needed, this number could go up to five, with Co-Chairs also covering working groups. Working groups can further subdivide their work as needed. Typically, in scientific assessment reports, it is the Co-Chairs who work with the Vice-Chairs to develop a first draft of the Report Outline for each subsection.

Five Months to Deliver

Again, we welcome the Panel’s confirmation. The Panel is expected to present its first report at the Global Dialogue on AI Governance in Geneva this July. Getting the operational structure right quickly is important to ensure that this deadline is credible. We wish all Panel Members the best in this important and urgent endeavor.

Kevin Kohler
