{"id":825,"date":"2025-07-08T11:23:43","date_gmt":"2025-07-08T11:23:43","guid":{"rendered":"https:\/\/simoninstitute.ch\/?p=825"},"modified":"2025-12-08T09:08:09","modified_gmt":"2025-12-08T09:08:09","slug":"cern-for-ai-one-analogy-many-visions","status":"publish","type":"post","link":"https:\/\/simoninstitute.ch\/blog\/post\/cern-for-ai-one-analogy-many-visions","title":{"rendered":"CERN for AI: One Analogy, Many Visions"},"content":{"rendered":"\n<p>The idea of a \u201cCERN for AI\u201d was first proposed by cognitive scientist Gary Marcus at the AI for Good Summit <a href=\"https:\/\/www.nytimes.com\/2017\/07\/29\/opinion\/sunday\/artificial-intelligence-is-stuck-heres-how-to-move-it-forward.html\">in 2017<\/a>. He invoked CERN, the European Organization for Nuclear Research, as a model of international, publicly funded scientific collaboration that could be replicated for AI.<\/p>\n\n\n\n<p>Since then, the idea of a \u201cCERN for AI\u201d has gained momentum in AI governance. The idea has received support from actors as diverse as Turing Award winner Yoshua Bengio, researchers at leading U.S. AI companies, French and German academic coalitions, think tanks in the U.K., and India, to Switzerland\u2019s Strategy for Digital Foreign Policy and the United Nations Conference on Trade and Development (UNCTAD). Indeed, \u201cCERN for AI\u201d is no longer just an idea, it has entered the policy agendas of key decision makers. At the 2025 Paris AI Summit, European Commission President Ursula von der Leyen launched a \u20ac20 billion CERN-inspired <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/policies\/ai-factories\">\u201cAI Gigafactories\u201d<\/a> initiative to accelerate public compute infrastructure across Europe:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cWe want to replicate the success story of the CERN laboratory in Geneva. CERN hosts the largest particle accelerator in the world. 
And it allows the best and the brightest minds in the world to work together. We want the same to happen in our AI Gigafactories. We provide the infrastructure for large computational power. Researchers, entrepreneurs and investors will be able to join forces.\u201d <\/p>\n\n\n\n<p>&#8211; <a href=\"https:\/\/ec.europa.eu\/commission\/presscorner\/detail\/en\/SPEECH_25_471\">Ursula von der Leyen, 2025<\/a><\/p>\n<\/blockquote>\n\n\n\n<p>At the same time, the \u201cCERN for AI\u201d label does not always refer to the same underlying ideas. Some proponents imagine a publicly funded counterweight to commercial AI labs, focused on open science and AI applications for socially beneficial purposes. Others imagine an International AI Safety Institute that tests frontier AI models and advises regulators. Yet others envision it as a way to ensure European competitiveness in compute infrastructure. Still others imagine it as a joint international project to develop frontier AI models.<\/p>\n\n\n\n<p>This blog post is intended as an input to the <a href=\"https:\/\/www.gcsp.ch\/events\/cern-ai-models-international-technical-cooperation-ai-geneva-security-debate\">Geneva Security Debate<\/a> hosted on July 10, 2025, at the Geneva Centre for Security Policy on the margins of the AI for Good Summit.&nbsp;<\/p>\n\n\n\n<p>The first section offers context on the structure and historical motivations for CERN. The second section compares 14 prominent proposals and 3 existing projects that have used the \u201cCERN for AI\u201d label. Finally, a few key discussion questions are highlighted.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1. How CERN works<\/h2>\n\n\n\n<p>CERN is the world\u2019s leading institute for particle physics and operates the world\u2019s biggest particle accelerator, the Large Hadron Collider. In 2012, the accelerator confirmed the existence of the Higgs boson, a finding for which the Nobel Prize in Physics was awarded in 2013. 
CERN describes <a href=\"https:\/\/home.cern\/about\/who-we-are\/our-mission\">its mission<\/a> on its website as follows:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>perform world-class research in fundamental physics.<\/li>\n\n\n\n<li>provide a unique range of particle accelerator facilities that enable research at the forefront of human knowledge, in an environmentally responsible and sustainable way.<\/li>\n\n\n\n<li>unite people from all over the world to push the frontiers of science and technology, for the benefit of all.<\/li>\n\n\n\n<li>train new generations of physicists, engineers and technicians, and engage all citizens in research and in the values of science.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">a) Governance structure<\/h3>\n\n\n\n<p>CERN is governed by a <a href=\"https:\/\/home.cern\/about\/who-we-are\/our-governance\">Council<\/a> composed of representatives from its <a href=\"https:\/\/home.cern\/about\/who-we-are\/our-governance\/member-states\">25 member states<\/a>. Each member state has two representatives: one from the government and one from the scientific community. Each country has one vote, and all decisions aim for consensus. The organization operates a central laboratory in Geneva, Switzerland, with about 2\u2019500 staff, while the research community is highly decentralized. A significant share of CERN\u2019s 12\u2019000 scientific users come from non-member states.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">b) Motivations for CERN<\/h3>\n\n\n\n<p><strong>Cost-sharing for \u201cbig science\u201d:<\/strong> Particle accelerators are expensive, indivisible, and central to fundamental discoveries. After World War II, European countries lacked the capacity to fund such infrastructure alone. CERN\u2019s cost-sharing model allowed Europe to pool resources and maintain scientific leadership in high-energy physics. 
Its annual budget is around <a href=\"https:\/\/cds.cern.ch\/record\/2888205\/files\/English.pdf#page=18\">1.4 billion CHF,<\/a> with member state contributions roughly proportional to GDP.<\/p>\n\n\n\n<p><strong>European Integration:<\/strong> Even though CERN is based in Geneva, it is not a UN organization. The United States, China, and Russia are not member states of CERN. Instead, CERN\u2019s founding in 1954 reflected broader efforts to rebuild trust and cooperation in post-war Europe. Scientists from former adversary nations worked side by side on unclassified, civilian research, helping build a European <a href=\"https:\/\/www.tandfonline.com\/doi\/abs\/10.1080\/00963402.1970.11457764\">scientific identity<\/a>.<\/p>\n\n\n\n<p><strong>Civilian and open by design:<\/strong> Unlike nuclear research centers in the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Brookhaven_National_Laboratory\">US<\/a> or <a href=\"https:\/\/en.wikipedia.org\/wiki\/Atomic_Energy_Research_Establishment\">UK<\/a>, CERN conducts only fundamental, non-military research. This restriction, which is <a href=\"https:\/\/council.web.cern.ch\/en\/content\/convention-establishment-european-organization-nuclear-research#2\">enshrined<\/a> in its founding convention, helped enable full openness. Research results are published, data and software are open, and all work is civilian. This decision is directly tied to the geography of its membership. Having \u201cthe Germans in\u201d meant having \u201cthe reactors out\u201d, as Germany had <a href=\"https:\/\/tile.loc.gov\/storage-services\/service\/ll\/llmlp\/61035888_Volume-III\/61035888_Volume-III.pdf#page=120\">strict postwar restrictions<\/a> on dual-use nuclear research.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">c) CERN vs. AI research<\/h3>\n\n\n\n<p>The CERN model can serve as an important inspiration and discussion starter for AI governance, but any AI-focused proposal should ultimately be adapted to its own context. 
Selected differences between particle physics and AI include:&nbsp;<\/p>\n\n\n\n<p><strong>Commercial interest in research:<\/strong> Particle physics relies entirely on public funding due to high costs and uncertain returns. By contrast, AI is commercially lucrative, with private labs like DeepMind operating at CERN-scale budgets. Even fundamental AI research often finds private support, challenging the need for a purely public counterpart.<\/p>\n\n\n\n<p><strong>Dual use and misuse risks of research:<\/strong> Nuclear research has always raised concerns about military use and dual-use. However, CERN was intentionally limited in its focus on fundamental physics with minimal relevance to both nuclear weapons and nuclear reactors. This in turn has enabled a strong <a href=\"https:\/\/cds.cern.ch\/record\/2835057\/files\/CERN-OPEN-2022-013.pdf\">open-science policy<\/a>. If we compare fundamental particle physics with the entire field of AI, it\u2019s clear that the latter has a much larger risk of dual use and misuse. Indeed, historically, the least open part of CERN\u2019s own history was not its physics research, but its <a href=\"https:\/\/www.degruyterbrill.com\/document\/doi\/10.7208\/chicago\/9780226820378-005\/html?lang=en&amp;srsltid=AfmBOoqPMluNJMhiD7lzih9AphTH5WmITG-cjkKeyMDQk4_-_zHQiGv5\">high-performance computing infrastructure<\/a>. So, a \u201cCERN for AI\u201d would likely either need clear restrictions on the types of AI research conducted or a tiered access policy.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">2. \u201cCERN for AI\u201d proposals<\/h2>\n\n\n\n<p>The following is a high-level overview of proposals that have been framed as a \u201cCERN for AI\u201d. The entries are in chronological order. The first column reflects the proposing authors or their institutional affiliation. 
Dark grey backgrounds indicate areas where a proposal has substantial similarity to CERN.&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1020\" height=\"2560\" src=\"https:\/\/simoninstitute.ch\/wp-content\/uploads\/2025\/07\/CERNforAIPicture-scaled.jpg\" alt=\"\" class=\"wp-image-832\" srcset=\"https:\/\/simoninstitute.ch\/wp-content\/uploads\/2025\/07\/CERNforAIPicture-scaled.jpg 1020w, https:\/\/simoninstitute.ch\/wp-content\/uploads\/2025\/07\/CERNforAIPicture-350x879.jpg 350w, https:\/\/simoninstitute.ch\/wp-content\/uploads\/2025\/07\/CERNforAIPicture-700x1757.jpg 700w, https:\/\/simoninstitute.ch\/wp-content\/uploads\/2025\/07\/CERNforAIPicture-768x1928.jpg 768w, https:\/\/simoninstitute.ch\/wp-content\/uploads\/2025\/07\/CERNforAIPicture-612x1536.jpg 612w, https:\/\/simoninstitute.ch\/wp-content\/uploads\/2025\/07\/CERNforAIPicture-816x2048.jpg 816w, https:\/\/simoninstitute.ch\/wp-content\/uploads\/2025\/07\/CERNforAIPicture-1200x3012.jpg 1200w, https:\/\/simoninstitute.ch\/wp-content\/uploads\/2025\/07\/CERNforAIPicture-150x377.jpg 150w\" sizes=\"auto, (max-width: 1020px) 100vw, 1020px\" \/><\/figure>\n\n\n\n<p>To manage the complexity of these divergent \u201cCERN for AI\u201d proposals, it helps to think of them in four overlapping clusters:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">a) CERN for AI Academics<\/h3>\n\n\n\n<p><strong>Vision:<\/strong> Publicly funded AI research infrastructure to rebalance power between academia and industry.<\/p>\n\n\n\n<p><strong>Rationale:<\/strong> The academic sector is increasingly outpaced by commercial labs in compute and AI talent. This cluster argues for \u201cbig science\u201d in AI, expecting large-scale public investments to enable open, fundamental research. 
Proponents also argue this would address private sector neglect of AI for Good and AI safety.<\/p>\n\n\n\n<p><strong>Examples:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong><a href=\"https:\/\/cairne.eu\/wp-content\/uploads\/2019\/10\/CLAIRE-vision.pdf\">CLAIRE<\/a> (EU):<\/strong> European academic network that calls for large-scale public AI funding and infrastructure to match the ambitions of private tech companies and maintain European sovereignty.<\/li>\n\n\n\n<li><strong><a href=\"https:\/\/bigscience.huggingface.co\/\">BigScience<\/a> (France):<\/strong> Project to create an open-source large language model involving academic and volunteer communities, supported by public computing power.<\/li>\n\n\n\n<li><strong><a href=\"https:\/\/op.europa.eu\/en\/publication-detail\/-\/publication\/d6d8ed54-32a8-11ef-a61b-01aa75ed71a1\/language-en\">Group of Chief Scientific Advisers<\/a> (EU):<\/strong> Recommended increased support for AI in science via public compute infrastructure.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">b) CERN for AI for Good<\/h3>\n\n\n\n<p><strong>Vision:<\/strong> A global or regional public institution focused on socially beneficial AI.<\/p>\n\n\n\n<p><strong>Rationale:<\/strong> Market incentives may overlook socially beneficial AI applications in domains like climate, healthcare, and education. 
This cluster promotes AI aligned with the Sustainable Development Goals (SDGs).<\/p>\n\n\n\n<p><strong>Examples:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong><a href=\"https:\/\/indiaai.gov.in\/documents\/pdf\/NationalStrategy-for-AI-Discussion-Paper.pdf#page=61\">NITI Aayog<\/a> (India):<\/strong> Called for more research to ensure that AI is inclusive and beneficial, including explainability, privacy, and AI for development.<\/li>\n\n\n\n<li><strong><a href=\"https:\/\/icain.ch\/index.html\">International Computation &amp; AI Network<\/a> (CH):<\/strong> Co-led by academic networks, this initiative aims to provide publicly supported AI compute for research on the SDGs in low- and middle-income countries.<\/li>\n\n\n\n<li><strong><a href=\"https:\/\/www.brookings.edu\/wp-content\/uploads\/2022\/11\/FCAI-October-2022.pdf\">Forum for Cooperation on AI<\/a> (US):<\/strong> Called for research focusing on privacy-enhancing technologies, AI as a tool for climate change monitoring and management, and AI that supports democratic values.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">c) CERN for AI Safety<\/h3>\n\n\n\n<p><strong>Vision:<\/strong> A publicly backed hub to accelerate safety research, risk assessment, and testing for frontier AI models.<\/p>\n\n\n\n<p><strong>Rationale:<\/strong> Private labs may underinvest in safety and transparency. 
A CERN-like institution could serve as a coordination point for red-teaming, evaluation, and safety tooling.<\/p>\n\n\n\n<p><strong>Examples:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong><a href=\"https:\/\/garymarcus.substack.com\/p\/a-cern-for-ai-and-the-global-governance\">Gary Marcus<\/a> (2023):<\/strong> Proposed a global, neutral, non-profit International Agency for AI that develops technical solutions to promote safe, secure and peaceful AI technologies.<\/li>\n\n\n\n<li><strong><a href=\"https:\/\/institute.global\/insights\/politics-and-governance\/new-national-purpose-ai-promises-world-leading-future-of-britain\">Tony Blair Institute<\/a> (UK):<\/strong> Recommended a public laboratory focused on researching and testing safe AI that also acts as a regulatory sandbox (becoming the \u201cbrain\u201d for regulators).<\/li>\n\n\n\n<li><strong><a href=\"https:\/\/cfg.eu\/cern-for-ai-eu-report\/\">Centre for Future Generations<\/a> (EU):<\/strong> Has called for a \u201cCERN for Trustworthy AI\u201d, making solving the scientific problem of trustworthy AI its core mission, and tackling it through multiple, targeted research bets.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">d) CERN for AGI&nbsp;<\/h3>\n\n\n\n<p><strong>Vision:<\/strong> An international lab jointly developing <a href=\"https:\/\/www.simoninstitute.ch\/blog\/post\/what-is-artificial-general-intelligence-agi-an-explainer-for-policymakers\/\">artificial general intelligence<\/a>, in some proposals with monopoly power.<\/p>\n\n\n\n<p><strong>Rationale:<\/strong> Reduce risks of competitive races and dual-use misuse by pooling efforts into a single high-capacity global lab.<\/p>\n\n\n\n<p><strong>Examples:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong><a href=\"https:\/\/arxiv.org\/pdf\/2310.09217\">Hausenloy et al.<\/a>:<\/strong> Proposed creating an international public monopoly on AGI development.<\/li>\n\n\n\n<li><strong><a 
href=\"https:\/\/arxiv.org\/pdf\/2402.08797\">Sastry et al.<\/a> (one option):<\/strong> Proposed a \u201cCERN for Frontier AI\u201d with shared model access.<\/li>\n\n\n\n<li><strong><a href=\"https:\/\/milesbrundage.substack.com\/p\/my-recent-lecture-at-berkeley-and\">Brundage<\/a>:<\/strong> Recommended a 5-4-5 plan: First achieve RAND <a href=\"https:\/\/www.rand.org\/pubs\/research_reports\/RRA2849-1.html\">security level 5<\/a> to protect AI model weights. Then figure out how to achieve Anthropic\u2019s <a href=\"https:\/\/www-cdn.anthropic.com\/872c653b2d0501d6ab44cf87f43e1dc4853e4d37.pdf\">AI Safety Level 4<\/a>. Then build and distribute the benefits of <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2024-07-11\/openai-sets-levels-to-track-progress-toward-superintelligent-ai\">level 5 AGI<\/a> capabilities defined by OpenAI as AI systems resembling the behavior and intelligence of a human-run organization.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">3. Open questions<\/h2>\n\n\n\n<p>The following are some relevant discussion questions with regards to \u201cCERN for AI\u201d models.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">a) What types of AI research are insufficiently provided by markets?<\/h3>\n\n\n\n<p>Fundamental research has often been framed as a global public good, work that benefits many but is difficult to monetize, and therefore underfunded by private actors. Many companies are doing serious work in fundamental AI. 
However, several areas have been suggested as neglected by different parties:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>AI for Good:<\/strong> Applications of AI for public benefit, such as healthcare, agriculture, and education, often with a particular focus on developing countries.<\/li>\n\n\n\n<li><strong>Neurosymbolic AI:<\/strong> This approach combines neural networks with symbolic reasoning to improve how models think, explain themselves, and reason through problems.&nbsp;<\/li>\n\n\n\n<li><strong>AI safety &amp; security:<\/strong> As models become more powerful, they also become harder to predict and harder to control. There&#8217;s increasing recognition that we need better tools to test and verify what these systems can and can\u2019t do.<\/li>\n<\/ul>\n\n\n\n<p>Should a CERN for AI prioritize one or more of these areas?&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">b) Should Europe\u2019s AI Gigafactories be compared to US tech giants or to national AI research resources?<\/h3>\n\n\n\n<p>AI Gigafactories in Europe are sometimes benchmarked against the hyperscale data centers of U.S. tech giants. However, the infrastructure of companies like Google or Microsoft is supported by products, cash flows, and massive user bases. The EU has no comparable business model attached to Gigafactories.<\/p>\n\n\n\n<p>Instead, the Gigafactories seem to follow the model of the European High Performance Computing Joint Undertaking (EuroHPC): academic researchers apply for free access, while companies can use the resources for a fee. As such, they would be functionally closer to the US National AI Research Resource (NAIRR) or the UK AI Research Resource (AIRR). 
At the same time, with over $7 billion\/year in public spending, Gigafactories are more than ten times larger than their US and UK equivalents.&nbsp;<\/p>\n\n\n\n<p>This raises questions about sustainability, usage, and strategic intent: is the goal enabling academic science, or establishing geopolitical parity?<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">c) How to handle the trade-off between open-source and misuse risks?<\/h3>\n\n\n\n<p>Open-source AI promotes transparency, collaboration, and diffusion, but it also carries risks when powerful models with dangerous capabilities are made widely accessible. This tension is especially sharp for general-purpose foundation models. Some proposals suggest tiered levels of openness, remote access to AI models, or license-based release of models. What governance frameworks might balance these competing priorities?<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">d) Do we need a single centralized hub vs. a decentralized network?<\/h3>\n\n\n\n<p>The original CERN model combines centralized infrastructure with decentralized research and analysis, and shared governance between member states. In AI, both centralized and decentralized designs are being proposed with regard to:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Human talent:<\/strong> \u201cherding nerds\u201d in a central hub vs. a talent network<\/li>\n\n\n\n<li><strong>Computing power:<\/strong> one very large datacenter vs. multiple, geographically spread datacenters<\/li>\n\n\n\n<li><strong>Governance:<\/strong> who decides on research or model releases<\/li>\n<\/ul>\n\n\n\n<p>The advantages and disadvantages of these designs depend on a range of factors, including the goals of the institution. For example, centralized compute infrastructure might, as some have argued, make it easier to enforce very high security standards, while spreading AI chips across multiple sites may be easier to reconcile with energy availability and political buy-in from multiple countries. 
Which combination is most fitting for which \u201cCERN for AI\u201d proposal?<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">e) How does \u201cCERN for AI\u201d compare to other models for international technical cooperation on AI and\/or European competitiveness in AI?&nbsp;<\/h3>\n\n\n\n<p>It is worth considering a broader range of analogies and models for the goals that \u201cCERN for AI\u201d aims to achieve. For example, Europe has top academic talent, robust regulatory frameworks, and a strong tradition of public investment. However, it lags behind the US and China in its commercial AI ecosystem. What mix of public-private coordination might be needed to foster both competitiveness and trustworthiness? What other models of international technical cooperation should be considered?<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Join the discussion<\/h2>\n\n\n\n<p>To explore these questions and more, join us on Thursday, July 10, 18:30-20:00 for the Geneva Security Debate \u201cCERN for AI: Models for International Technical Cooperation in AI\u201d hosted at the Geneva Centre for Security Policy. Please <a href=\"https:\/\/www.gcsp.ch\/events\/cern-ai-models-international-technical-cooperation-ai-geneva-security-debate\">register here<\/a>.<\/p>\n\n\n<div class=\"acf-hidden-block\" style=\"display:none;\">Kevin Kohler <\/div>","protected":false},"excerpt":{"rendered":"The idea of a \u201cCERN for AI\u201d was first proposed by cognitive scientist Gary Marcus at the AI for Good Summit in 2017. He invoked CERN, the European Organization for Nuclear Research, as a model of international, publicly funded scientific collaboration that could be replicated for AI. 
Since then, the idea of a \u201cCERN for [&hellip;]","protected":false},"author":4,"featured_media":828,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[37,39,6],"tags":[54,53],"post_formats":[21],"post_types":[23],"class_list":["post-825","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-governance","category-multilateralism","category-institutional-design","tag-ai","tag-cern","post_formats-research","post_types-explainer"],"acf":[],"yoast_head":"<title>CERN for AI: One Analogy, Many Visions &#8211; Simon Institute for Longterm Governance<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/simoninstitute.ch\/blog\/post\/cern-for-ai-one-analogy-many-visions\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"CERN for AI: One Analogy, Many Visions &#8211; Simon Institute for Longterm Governance\" \/>\n<meta property=\"og:description\" content=\"The idea of a \u201cCERN for AI\u201d was first proposed by cognitive scientist Gary Marcus at the AI for Good Summit in 2017. He invoked CERN, the European Organization for Nuclear Research, as a model of international, publicly funded scientific collaboration that could be replicated for AI. 
Since then, the idea of a \u201cCERN for [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/simoninstitute.ch\/blog\/post\/cern-for-ai-one-analogy-many-visions\" \/>\n<meta property=\"og:site_name\" content=\"Simon Institute for Longterm Governance\" \/>\n<meta property=\"article:published_time\" content=\"2025-07-08T11:23:43+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-12-08T09:08:09+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/simoninstitute.ch\/wp-content\/uploads\/2025\/07\/20250708_CERNforAI-700x392.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"700\" \/>\n\t<meta property=\"og:image:height\" content=\"392\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/simoninstitute.ch\\\/blog\\\/post\\\/cern-for-ai-one-analogy-many-visions#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/simoninstitute.ch\\\/blog\\\/post\\\/cern-for-ai-one-analogy-many-visions\"},\"author\":{\"name\":\"sofia mikton\",\"@id\":\"https:\\\/\\\/simoninstitute.ch\\\/#\\\/schema\\\/person\\\/ea8369a078364dda9f9ee8732a521b4e\"},\"headline\":\"CERN for AI: One Analogy, Many Visions\",\"datePublished\":\"2025-07-08T11:23:43+00:00\",\"dateModified\":\"2025-12-08T09:08:09+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/simoninstitute.ch\\\/blog\\\/post\\\/cern-for-ai-one-analogy-many-visions\"},\"wordCount\":2278,\"commentCount\":0,\"image\":{\"@id\":\"https:\\\/\\\/simoninstitute.ch\\\/blog\\\/post\\\/cern-for-ai-one-analogy-many-visions#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/simoninstitute.ch\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/20250708_CERNforAI-scaled.jpg\",\"keywords\":[\"ai\",\"cern\"],\"articleSection\":[\"AI Governance\",\"Multilateralism\",\"Institutional 
Design\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/simoninstitute.ch\\\/blog\\\/post\\\/cern-for-ai-one-analogy-many-visions#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/simoninstitute.ch\\\/blog\\\/post\\\/cern-for-ai-one-analogy-many-visions\",\"url\":\"https:\\\/\\\/simoninstitute.ch\\\/blog\\\/post\\\/cern-for-ai-one-analogy-many-visions\",\"name\":\"CERN for AI: One Analogy, Many Visions &#8211; Simon Institute for Longterm Governance\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/simoninstitute.ch\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/simoninstitute.ch\\\/blog\\\/post\\\/cern-for-ai-one-analogy-many-visions#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/simoninstitute.ch\\\/blog\\\/post\\\/cern-for-ai-one-analogy-many-visions#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/simoninstitute.ch\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/20250708_CERNforAI-scaled.jpg\",\"datePublished\":\"2025-07-08T11:23:43+00:00\",\"dateModified\":\"2025-12-08T09:08:09+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/simoninstitute.ch\\\/#\\\/schema\\\/person\\\/ea8369a078364dda9f9ee8732a521b4e\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/simoninstitute.ch\\\/blog\\\/post\\\/cern-for-ai-one-analogy-many-visions#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/simoninstitute.ch\\\/blog\\\/post\\\/cern-for-ai-one-analogy-many-visions\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/simoninstitute.ch\\\/blog\\\/post\\\/cern-for-ai-one-analogy-many-visions#primaryimage\",\"url\":\"https:\\\/\\\/simoninstitute.ch\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/20250708_CERNforAI-scaled.jpg\",\"contentUrl\":\"https:\\\/\\\/simoninstitute.ch\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/20250708_CERNforAI-scaled.jpg\",\"width\":2560,\"height\":1435},{\"@type\":\"BreadcrumbList\",\"@id\":\