Today, all stakeholders are invited to informally share their views on the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance. We are working to finalize a full recommendations report that builds on our interim report. In the meantime, we want to take this opportunity to briefly highlight three key considerations regarding the panel:
1. We need a scientific panel with the capacity to deliver
A scientific panel can serve different functions. A high-level committee of 30-50 senior experts that meets a few times per year may provide strategic direction and make some recommendations to policymakers. Such bodies make particular sense in a national context, where they advise a specific policymaking authority, or when they focus on narrow, well-understood topics.
However, the Independent International Scientific Panel on AI has a much broader mandate: to help policymakers across a wide range of contexts make sense of the complex risks, opportunities, and impacts of AI. The panel therefore needs the capacity to synthesize large amounts of research and to write assessment reports.
A traditional committee structure is not well-suited for producing such reports, especially if senior experts have limited time to dedicate to the panel’s work. This creates a risk that the panel’s outputs will be perceived as “ghostwritten” by a secretariat, potentially undermining credibility and impact.
Our suggestion
- Establish a 10-12 member scientific steering committee that guides the scientific work, drafts report outlines, and selects authors.
- Create dedicated working groups whose authors have the time and specialized expertise to conduct in-depth research.
- Consider technical support units, hosted at relevant institutions, that can provide additional capacity to specific working groups.
2. We need a panel with both political legitimacy and scientific independence
This is the UN, not Nature magazine. A panel with only scientific independence may produce high-quality findings, but policymakers may not trust the reports, and there is no forcing function for them to engage with the topic. Fully independent scientific efforts, such as the Villach Report or the International Panel on Chemical Pollution, show that good work alone does not guarantee the attention and trust needed to inform policy.
Conversely, a panel with only political legitimacy may be broadly representative but is unlikely to produce state-of-the-art insights. The reality is that cutting-edge AI research, AI safety institutes, and third-party safety testers are unevenly distributed. Ignoring this for political reasons will lead to frustration and disengagement among the best experts. As a result, both AI specialists and policymakers will de facto rely on other sources of information, diminishing the panel's relevance.
Our suggestion
- Create a political expert consultative body that is open to universal membership among states. This body can request focus areas to be covered, nominate authors and reviewers, and approve annual global AI assessments. It could potentially also convene on the sidelines of the Global Dialogue on AI Governance.
- Ensure that the scientific steering committee is independent. Its members should be chosen in their personal capacity, based on clearly defined eligibility criteria (academic impact, real-world experience, peer recognition, conflict-of-interest checks), rather than as representatives of specific groups. A peer expert committee, such as the High-Level Advisory Body on AI, could help pre-screen CVs, with the final selection made by the UN Secretary-General.
3. We need to face the pacing problem
AI technology evolves extremely quickly, whereas formal review and editorial processes can be slow. Consequently, an annual global assessment report risks being partially outdated before it is even published. For example, the International Scientific Report on the Safety of Advanced AI finalized its editorial process in December but was not released until early February. In that interval, several notable developments emerged, including OpenAI's o3, DeepSeek's R1, and OpenAI's Deep Research.
Our suggestion
- Allow interim updates on significant AI developments between global AI assessment reports, without requiring political approval.
- Consider operating a "live dashboard" that tracks the capabilities and resource requirements of AI systems.
- Include forward-looking analysis in the panel's work, such as expert forecasts and if-this-then-that scenario analysis.
Finding efficient solutions
For the Independent International Scientific Panel on AI to be effective, it should strive to be politically universal, scientifically independent, and agile. These goals can be in tension, but there are reasonable solutions.
An independent scientific steering committee, combined with dedicated working groups, can provide the research-synthesis and writing capacity the panel needs. A state-inclusive advisory body can offer an avenue for addressing political requests and concerns. Finally, the panel can supplement annual reports with interim updates and explore real-time monitoring to address the pacing problem.
Feel free to get in touch with kevin@simoninstitute.ch for any questions or to discuss these recommendations further.