𝗘𝗺𝗯𝗿𝗮𝗰𝗲 𝘁𝗵𝗲 “𝗛𝗔𝗜𝗣”: 𝗛𝗼𝘄 𝘁𝗵𝗲 𝗔𝗜 𝘁𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝗰𝘆 𝗿𝗲𝗽𝗼𝗿𝘁𝗶𝗻𝗴 𝗳𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸 𝗰𝗮𝗻 𝗱𝗲𝗳𝗶𝗻𝗲 𝗰𝗼𝗺𝗺𝗼𝗻 𝗲𝘅𝗽𝗲𝗰𝘁𝗮𝘁𝗶𝗼𝗻𝘀 𝗮𝗻𝗱 𝘀𝗶𝗺𝗽𝗹𝗶𝗳𝘆 𝗔𝗜 𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗳𝗼𝗿 𝗯𝘂𝘀𝗶𝗻𝗲𝘀𝘀

There’s plenty of hype around AI transparency, but behind the scenes, the #HAIP Reporting Framework is turning principles into practice. Developed under the #OECD at the request of the G7, it gives organisations developing advanced AI systems a voluntary, practical way to report how they manage risks and ensure responsible innovation.

Visit our latest newsletter to watch videos, read blog posts, and learn more about how HAIP adds value to business and AI governance.

#AIPolicy #TrustworthyAI #ArtificialIntelligence #G7 #HiroshimaProcess
OECD.AI
OECD.AI is a platform to share and shape trustworthy AI. Sign up below for email alerts and visit our blog OECD.AI/WONK/
About us
Visit our blog, the AI Wonk: https://oecd.ai/wonk/

The OECD AI Policy Observatory is a tool governments and businesses can use to implement the first intergovernmental standard on AI: the OECD AI Principles. The Principles focus on how governments and other actors can shape a human-centric approach to trustworthy AI. The Observatory includes a blog where its group of international AI experts (ONE AI) discusses issues related to defining AI and implementing the OECD Principles. OECD countries, along with a range of partner economies, adopted the standards in May 2019. The OECD AI Principles provided the basis for the G20 AI Principles endorsed by Leaders in June 2019.

OECD.AI combines resources from across the OECD, its partners and all stakeholder groups. It facilitates dialogue between stakeholders while providing multidisciplinary, evidence-based policy analysis in the areas where AI has the most impact.

As an inclusive platform for public policy on AI, the OECD AI Policy Observatory is oriented around three core attributes:

Multidisciplinarity: The Observatory works with policy communities across and beyond the OECD, from the digital economy and science and technology policy to employment, health, consumer protection, education and transport policy, to consider the opportunities and challenges posed by current and future AI developments in a coherent, holistic manner.

Evidence-based analysis: The Observatory provides a centre for the collection and sharing of evidence on AI, leveraging the OECD’s reputation for measurement methodologies and evidence-based analysis.

Global multi-stakeholder partnerships: The Observatory engages governments and a wide spectrum of stakeholders, including partners from the technical community, the private sector, academia, civil society and other international organisations, and provides a hub for dialogue and collaboration.
- Website: https://oecd.ai/
- Industry: International Affairs
- Company size: 11-50 employees
- Headquarters: Paris, Île-de-France
- Type: Government Agency
- Founded: 2020
Locations
- Primary: 2 rue André Pascal, Paris, Île-de-France 75016, FR
Updates
-
It is clear that access to AI compute is becoming a key factor in national competitiveness, innovation capacity and AI policy. Yet until now, there has been little visibility into where public cloud AI compute is physically located.

𝗔 𝗻𝗲𝘄 𝗽𝗼𝘀𝘁 𝗼𝗻 𝘁𝗵𝗲 𝗔𝗜 𝗪𝗼𝗻𝗸, 𝗧𝗵𝗲 𝗚𝗲𝗼𝗴𝗿𝗮𝗽𝗵𝘆 𝗼𝗳 𝗔𝗜 𝗖𝗼𝗺𝗽𝘂𝘁𝗲: 𝗠𝗮𝗽𝗽𝗶𝗻𝗴 𝘄𝗵𝗮𝘁 𝗶𝘀 𝗮𝘃𝗮𝗶𝗹𝗮𝗯𝗹𝗲 𝗮𝗻𝗱 𝘄𝗵𝗲𝗿𝗲, covers a new OECD working paper that introduces the first methodology to map the global geography of AI compute. The authors: Vili Lehdonvirta, Bóxī Wú, Zoe Jay Hawkins, Celine Caira and Lucia Russo.

𝗪𝗵𝘆 𝘁𝗵𝗶𝘀 𝗺𝗮𝘁𝘁𝗲𝗿𝘀
🔹 AI compute is a strategic input for AI development
🔹 Public cloud compute underpins today’s AI ecosystem, including generative AI
🔹 Physical location still matters for access, resilience, security and economic development

The methodology maps AI compute by counting cloud regions from nine major public cloud providers and identifying where different types of AI accelerators are available. It provides a scalable and transparent way to compare compute access by geography and helps close current evidence gaps in AI policy.

The pilot study, conducted in October 2023, identified 187 cloud regions across 39 economies, 101 of which are in OECD countries. However, 𝗼𝗻𝗹𝘆 𝟭𝟯 𝗢𝗘𝗖𝗗 𝗰𝗼𝘂𝗻𝘁𝗿𝗶𝗲𝘀 𝗰𝘂𝗿𝗿𝗲𝗻𝘁𝗹𝘆 𝗵𝗼𝘀𝘁 𝗔𝗜 𝗰𝗼𝗺𝗽𝘂𝘁𝗲 𝗶𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝗰𝗮𝗽𝗮𝗯𝗹𝗲 𝗼𝗳 𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗮𝗱𝘃𝗮𝗻𝗰𝗲𝗱 𝗔𝗜 𝘀𝘆𝘀𝘁𝗲𝗺𝘀, showing that global access is still highly concentrated.

These findings highlight that AI compute access is emerging as a new dimension of digital inequality. Countries without sufficient local or trusted access to compute risk falling behind in AI innovation and adoption. This work provides evidence to support national compute strategies, international cooperation and responsible AI governance.

🔗 𝗥𝗲𝗮𝗱 𝘁𝗵𝗲 𝗳𝘂𝗹𝗹 𝗯𝗹𝗼𝗴 𝗮𝗻𝗱 𝗮𝗰𝗰𝗲𝘀𝘀 𝘁𝗵𝗲 𝘄𝗼𝗿𝗸𝗶𝗻𝗴 𝗽𝗮𝗽𝗲𝗿 — 𝗹𝗶𝗻𝗸 𝗶𝗻 𝘁𝗵𝗲 𝗰𝗼𝗺𝗺𝗲𝗻𝘁𝘀 𝗯𝗲𝗹𝗼𝘄. 👇

#AICompute #AIInfrastructure #CloudComputing #OECD #AIWonk #Innovation #AIPolicy #OECDAI
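To make the counting approach concrete, here is a minimal sketch of how region tallies of this kind could be computed. Everything in it is an illustrative assumption: the providers, regions and accelerator sets in REGIONS, and the FRONTIER_ACCELERATORS classification, are made up and are not the authors’ dataset or criteria.

```python
from collections import defaultdict

# Hypothetical sample of cloud regions. The real study catalogued 187 regions
# from nine providers; each entry here is (provider, country, accelerators).
REGIONS = [
    ("ProviderA", "USA", {"H100", "A100"}),
    ("ProviderA", "IRL", {"A100"}),
    ("ProviderB", "JPN", {"H100"}),
    ("ProviderB", "BRA", set()),   # a region offering no AI accelerators
    ("ProviderC", "USA", {"TPUv4"}),
]

# Assumed set of accelerators treated as capable of training advanced AI systems.
FRONTIER_ACCELERATORS = {"H100", "TPUv4"}

regions_by_country = defaultdict(int)
frontier_countries = set()

for provider, country, accelerators in REGIONS:
    regions_by_country[country] += 1          # tally cloud regions per economy
    if accelerators & FRONTIER_ACCELERATORS:  # any frontier-capable hardware?
        frontier_countries.add(country)

print("Cloud regions per country:", dict(regions_by_country))
print("Frontier-capable countries:", sorted(frontier_countries))
```

Run over real provider catalogues instead of placeholder rows, the same tallies would yield per-economy counts of the kind the pilot study reports.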
-
𝗙𝗶𝗻𝗱 𝗻𝗮𝘁𝗶𝗼𝗻𝗮𝗹 𝗽𝗼𝗹𝗶𝗰𝗶𝗲𝘀 𝗮𝗻𝗱 𝗶𝗻𝗶𝘁𝗶𝗮𝘁𝗶𝘃𝗲𝘀 𝗿𝗲𝗹𝗮𝘁𝗲𝗱 𝘁𝗼 𝗔𝗜

In an era where AI is reshaping societies, economies, and governance, understanding and comparing national and international AI policies has become more critical than ever. The 𝗢𝗘𝗖𝗗.𝗔𝗜 𝗣𝗼𝗹𝗶𝗰𝘆 𝗡𝗮𝘃𝗶𝗴𝗮𝘁𝗼𝗿 is a unique resource for policymakers to research and benchmark public AI policies and initiatives worldwide.

𝗙𝗲𝗮𝘁𝘂𝗿𝗲𝘀 𝗶𝗻𝗰𝗹𝘂𝗱𝗲:
• Policy initiatives and sources: The Policy Navigator now covers over 1,300 initiatives from more than 80 jurisdictions and international organisations.
• Timestamped updates: Users can view the last update time for each entry.
• Intuitive interface: Filters and enhanced accessibility give policymakers and researchers fast access to relevant AI policies and initiatives.

𝗩𝗶𝘀𝗶𝘁 𝘁𝗵𝗲 𝗢𝗘𝗖𝗗.𝗔𝗜 𝗣𝗼𝗹𝗶𝗰𝘆 𝗡𝗮𝘃𝗶𝗴𝗮𝘁𝗼𝗿 𝗮𝗻𝗱 𝗶𝘁𝘀 𝗻𝗮𝘁𝗶𝗼𝗻𝗮𝗹 𝗱𝗮𝘀𝗵𝗯𝗼𝗮𝗿𝗱𝘀 𝗮𝘁 🔗 𝘁𝗵𝗲 𝗹𝗶𝗻𝗸 𝗶𝗻 𝗰𝗼𝗺𝗺𝗲𝗻𝘁𝘀 𝗯𝗲𝗹𝗼𝘄.

#AIPolicy #ArtificialIntelligence #NationalPolicies #TrustworthyAI #OECDAIPrinciples
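As a rough illustration of the kind of filtered lookup the Navigator’s interface supports, here is a toy sketch; the records, field names and the filter_initiatives helper are hypothetical stand-ins, not the Navigator’s data model or API.

```python
# Placeholder records standing in for the Navigator's 1,300+ initiatives.
initiatives = [
    {"title": "National AI Strategy", "jurisdiction": "France", "year": 2018},
    {"title": "AI Act", "jurisdiction": "European Union", "year": 2024},
    {"title": "AI Safety Institute", "jurisdiction": "United Kingdom", "year": 2023},
]

def filter_initiatives(records, jurisdiction=None, since=None):
    """Return records matching an optional jurisdiction and minimum year."""
    return [
        r for r in records
        if (jurisdiction is None or r["jurisdiction"] == jurisdiction)
        and (since is None or r["year"] >= since)
    ]

print(filter_initiatives(initiatives, since=2023))
```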
-
𝗥𝗲𝗯𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗱𝗶𝗴𝗶𝘁𝗮𝗹 𝘁𝗿𝘂𝘀𝘁 𝗶𝗻 𝘁𝗵𝗲 𝗮𝗴𝗲 𝗼𝗳 𝗔𝗜 𝗶𝘀 𝗻𝗼𝘄 𝘂𝗿𝗴𝗲𝗻𝘁.

Deepfakes, synthetic media and automated misinformation are eroding shared reality. But they are also pushing us to ask a critical question:
➡️ How can we protect trust, truth and accountability in the digital age while enabling innovation?

A new 𝗔𝗜 𝗪𝗼𝗻𝗸 𝗯𝗹𝗼𝗴 𝗽𝗼𝘀𝘁 by Sebastian Hallensleben and Melisa Basol, PhD explores this challenge and introduces the Digital Trust Convention 2025, taking place 6–7 November 2025 at Mila - Quebec Artificial Intelligence Institute (Montréal, Canada).

𝗧𝗵𝗲 𝗲𝘃𝗲𝗻𝘁 𝘄𝗶𝗹𝗹 𝗯𝗿𝗶𝗻𝗴 𝘁𝗼𝗴𝗲𝘁𝗵𝗲𝗿 𝗹𝗲𝗮𝗱𝗶𝗻𝗴 𝗲𝘅𝗽𝗲𝗿𝘁𝘀 shaping the future of digital governance, AI safety and online integrity, including participation from: CEE Digital Democracy Watch • European University Institute • CyberPeace Institute • Technische Universität Darmstadt • Sorbonne Université • Partnership on AI • University of California, Berkeley • High-Performance Computing Center Stuttgart (HLRS) • Broadcom • University of Massachusetts Boston Honors College • Geneva Academy of International Humanitarian Law and Human Rights • The Centre for Media, Technology and Democracy • Cognizant • World Privacy Forum • Avaaz Foundation • WITNESS • Swisscom • ITU SG17 • The Brookings Institution • Barcelona Supercomputing Center • University of Cambridge

𝗞𝗲𝘆 𝘁𝗵𝗲𝗺𝗲𝘀 𝗮𝘁 𝘁𝗵𝗲 𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗧𝗿𝘂𝘀𝘁 𝗖𝗼𝗻𝘃𝗲𝗻𝘁𝗶𝗼𝗻 𝟮𝟬𝟮𝟱:
🔹 Synthetic media, deepfakes and authenticity
🔹 AI transparency, safety and governance
🔹 Trusted information ecosystems
🔹 Cybersecurity and AI misuse
🔹 Human rights and AI accountability
🔹 Democratic resilience and public trust
🔹 Digital identity and provenance
🔹 Responsible AI innovation

🔗 𝗟𝗶𝗻𝗸𝘀 𝘁𝗼 𝘁𝗵𝗲 𝗔𝗜 𝗪𝗼𝗻𝗸 𝗮𝗿𝘁𝗶𝗰𝗹𝗲 𝗮𝗻𝗱 𝗲𝘃𝗲𝗻𝘁 𝗿𝗲𝗴𝗶𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻 𝗮𝗿𝗲 𝗶𝗻 𝘁𝗵𝗲 𝗰𝗼𝗺𝗺𝗲𝗻𝘁𝘀 𝗯𝗲𝗹𝗼𝘄. Join this conversation about trustworthy AI, online safety, and societal resilience.

#DigitalTrust #TrustworthyAI #OECD #OECDAI
-
As artificial intelligence capabilities advance, transparency has become a foundational requirement for trustworthy and accountable AI governance.

A new blog post on the AI Wonk by Amanda Craig Deckard, Nicolas Miailhe, Hector de Rivoire and Robert Trager offers an operational perspective on how transparency can move from principle to practice through the HAIP RF, the G7 Hiroshima AI Process Reporting Framework.

The authors argue that, as AI systems grow in complexity and potential impact, the long-standing “black box challenge” underscores the urgent need for comparable, structured, and evidence-based transparency mechanisms. The HAIP RF represents one of the most pragmatic tools currently available for operationalising AI transparency across jurisdictions and actors in the AI value chain.

The authors identify three key recommendations for strengthening the future role of HAIP:
✅ Expand scope across the AI value chain to include model developers, deployers, system integrators, compute providers and hardware manufacturers.
✅ Adapt reporting components to address emerging AI capabilities and risks, including agentic systems, model security threats and frontier evaluations.
✅ Synchronise with international initiatives such as the NIST AI RMF, the International AI Safety Report, and ongoing OECD and GPAI processes to promote interoperability and reduce governance fragmentation.

By aligning voluntary transparency reporting with international governance efforts, the HAIP RF has the potential to establish a shared global baseline for credible AI transparency.

🔗 𝗪𝗮𝘁𝗰𝗵 𝗮 𝗾𝘂𝗶𝗰𝗸 𝘃𝗶𝗱𝗲𝗼 𝗼𝗳 𝗔𝗺𝗮𝗻𝗱𝗮 𝗖𝗿𝗮𝗶𝗴 𝗳𝗿𝗼𝗺 𝗠𝗶𝗰𝗿𝗼𝘀𝗼𝗳𝘁 𝗮𝗻𝗱 𝗿𝗲𝗮𝗱 𝘁𝗵𝗲 𝗳𝘂𝗹𝗹 𝗮𝗻𝗮𝗹𝘆𝘀𝗶𝘀 𝗼𝗻 𝗔𝗜 𝗪𝗼𝗻𝗸 𝗮𝘁 𝘁𝗵𝗲 𝗹𝗶𝗻𝗸 𝗶𝗻 𝘁𝗵𝗲 𝗰𝗼𝗺𝗺𝗲𝗻𝘁𝘀 𝗯𝗲𝗹𝗼𝘄. 👇

#AIPolicy #TrustworthyAI #HAIPFramework #G7 #OECD #OECDAI
-
🗣️🇪🇺 𝗛𝗮𝘃𝗲 𝘆𝗼𝘂𝗿 𝘀𝗮𝘆: 𝗛𝗲𝗹𝗽 𝘀𝗵𝗮𝗽𝗲 𝗵𝗼𝘄 𝗔𝗜 𝗶𝗻𝗰𝗶𝗱𝗲𝗻𝘁𝘀 𝗮𝗿𝗲 𝗿𝗲𝗽𝗼𝗿𝘁𝗲𝗱 𝗶𝗻 𝗘𝘂𝗿𝗼𝗽𝗲

The European Commission has published draft guidance and a reporting template for “serious AI incidents” under the EU AI Act.

📄 𝗢𝗽𝗲𝗻 𝗰𝗼𝗻𝘀𝘂𝗹𝘁𝗮𝘁𝗶𝗼𝗻 → https://lnkd.in/eXTrHpGc

The draft seeks alignment with international initiatives, including the OECD AI Incidents Monitor and Common Reporting Framework, to promote coherent, global standards for AI incident reporting and learning.

🗓️ 𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝗱𝗲𝗮𝗱𝗹𝗶𝗻𝗲: 𝟳 𝗡𝗼𝘃𝗲𝗺𝗯𝗲𝗿 𝟮𝟬𝟮𝟱
Stakeholders are invited to share views and help shape future implementation.

💬 𝗖𝗼𝗺𝗺𝗲𝗻𝘁 𝗯𝗲𝗹𝗼𝘄 𝗶𝗳 𝘆𝗼𝘂 𝗽𝗹𝗮𝗻 𝘁𝗼 𝗰𝗼𝗻𝘁𝗿𝗶𝗯𝘂𝘁𝗲 — 𝗮𝗻𝗱 𝗹𝗲𝘁 𝘂𝘀 𝗸𝗻𝗼𝘄 𝘄𝗵𝗶𝗰𝗵 𝗮𝘀𝗽𝗲𝗰𝘁𝘀 𝘆𝗼𝘂 𝗳𝗶𝗻𝗱 𝗺𝗼𝘀𝘁 𝗶𝗺𝗽𝗼𝗿𝘁𝗮𝗻𝘁.

#EUAIAct #AIGovernance #OECD #AITransparency #AIIncidents
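For teams thinking about internal tooling ahead of the consultation’s outcome, here is a hypothetical sketch of what a structured incident record might look like. Every field name below is an illustrative assumption, not the Commission’s draft template; the actual reporting fields are defined in the consultation documents linked above.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical structure for a "serious AI incident" record. Field names are
# illustrative assumptions only, not the Commission's draft template.
@dataclass
class SeriousAIIncidentReport:
    provider: str                  # entity responsible for the AI system
    system_name: str
    incident_date: date
    description: str               # what happened and who was affected
    harm_category: str             # e.g. "health", "fundamental rights"
    corrective_actions: list[str] = field(default_factory=list)

report = SeriousAIIncidentReport(
    provider="ExampleCorp",
    system_name="triage-assistant-v2",
    incident_date=date(2025, 10, 1),
    description="Model misclassified urgent cases, delaying care.",
    harm_category="health",
    corrective_actions=["rollback to v1", "retraining with audited data"],
)
print(report)
```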
-
𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗧𝗿𝘂𝘀𝘁 𝗖𝗼𝗻𝘃𝗲𝗻𝘁𝗶𝗼𝗻 𝟮𝟬𝟮𝟱
𝗡𝗼𝘃 𝟲–𝟳, 𝟮𝟬𝟮𝟱 • 𝗠𝗼𝗻𝘁𝗿𝗲𝗮𝗹, 𝗖𝗮𝗻𝗮𝗱𝗮 𝗵𝗼𝘀𝘁𝗲𝗱 𝗯𝘆 𝗠𝗶𝗹𝗮

In an age of generative AI, deepfakes, and algorithmic amplifiers, how do we build trust in our digital space? Following a successful inaugural edition at the OECD, this edition will bring together experts across policy, research, civil society, and industry to confront the urgent challenges of digital trust.

𝗔𝘁𝘁𝗲𝗻𝗱𝗲𝗲𝘀 𝗮𝘁 𝘁𝗵𝗶𝘀 𝘆𝗲𝗮𝗿’𝘀 𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗧𝗿𝘂𝘀𝘁 𝗖𝗼𝗻𝘃𝗲𝗻𝘁𝗶𝗼𝗻 𝘄𝗶𝗹𝗹 𝗱𝗶𝘀𝗰𝘂𝘀𝘀 𝗶𝘀𝘀𝘂𝗲𝘀 𝘀𝘂𝗰𝗵 𝗮𝘀:
• How can we (re-)establish trust in people and information?
• What approaches, instruments and measures might be effective and sustainable?
• How can we reliably recognise real human beings in the digital space without compromising privacy?
• How can we ensure that every citizen/consumer has a fair chance to be heard and have an impact?
• What incentive structures and commercial models are needed to favour constructive, compassionate and consensus-building discourse?

There will be keynote panels, interactive breakout sessions, and deep-dive discussions to foster collaborative outputs and practical commitments.

𝗦𝗽𝗲𝗮𝗸𝗲𝗿𝘀 𝗶𝗻𝗰𝗹𝘂𝗱𝗲:
Jakub Szymik, Founder, CEE Digital Democracy Watch
Andrew Wyckoff, Part-time Professor, European University Institute School of Transnational Governance
Francesca Bosco, Chief Strategy Officer, CyberPeace Institute
Andreas Kaminski, Institute for Philosophy, TU Darmstadt; High-Performance Computing Center of the Universität Stuttgart (HLRS)
Katherine Evans, Assistant Professor, Sorbonne Université, Paris
Rebecca Finlay, Chief Executive Officer, Partnership on AI
David Evan Harris, Chancellor’s Public Scholar, University of California, Berkeley
Sebastian Bücker, Scientific Researcher, Institute of Philosophy, TU Darmstadt
Arnaud Taddei, Global Security Strategist, Broadcom Enterprise Security Group; ITU-T SG17 Chair
Romilla Syed, Associate Professor, Information Technology and Analytics, University of Massachusetts Boston
Steven James Barela, Senior Research Fellow, Geneva Academy of International Humanitarian Law and Human Rights (University of Geneva)

#TrustworthyAI #OECD #Mila #Trust #ArtificialIntelligence
-
As artificial intelligence expands, so does its demand for energy, data, and compute power. How can countries ensure these resources are used efficiently and fairly? At the AI Impact Summit 2026 in New Delhi, world leaders will examine how to manage the global resource footprint of AI. In his new AI Wonk article, Dr Jon Truby outlines how this summit could lay the groundwork for international governance of AI’s resource use.

Key insights:
• The summit could lead to an agreement on AI resource governance, addressing the energy, data, and compute needed for AI systems.
• Countries could adopt reporting and transparency standards for AI resource consumption.
• Governments can use procurement and innovation policies to promote more efficient AI infrastructure.
• International coordination, through the UN, OECD, and G20, can help balance innovation with responsible use of shared global resources.

India’s theme of People, Planet, Progress captures this balance: ensuring AI benefits society without exhausting the systems that sustain it.

Read the full AI Wonk blog post by Dr Jon Truby to learn how global collaboration can make AI growth more resource-efficient and equitable.

🔗 Link in comments 👇

#AI #Resources #Governance #Policy #Innovation #AIWonk #OECD #India
-
𝗛𝗼𝘄 𝗺𝘂𝗰𝗵 𝗮𝗿𝗲 𝗴𝗼𝘃𝗲𝗿𝗻𝗺𝗲𝗻𝘁𝘀 𝗮𝗻𝗱 𝗰𝗼𝗺𝗽𝗮𝗻𝗶𝗲𝘀 𝗿𝗲𝗮𝗹𝗹𝘆 𝗶𝗻𝘃𝗲𝘀𝘁𝗶𝗻𝗴 𝗶𝗻 𝗮𝗿𝘁𝗶𝗳𝗶𝗰𝗶𝗮𝗹 𝗶𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝗰𝗲?

Despite frequent announcements, consistent and verifiable data on AI investment have long been lacking. A new report from the OECD and the European Commission helps to close that gap. In this AI Wonk blog post, Lucia Russo, Sara Marchi, Jeff Mollins and Antoine-Alexandre André present a new report and harmonised framework to quantify AI spending across the EU.

𝗞𝗲𝘆 𝗳𝗶𝗻𝗱𝗶𝗻𝗴𝘀:
* Total AI investment in the EU27 reached EUR 257 billion in 2023.
* Private investment dominates, accounting for 73% of total spending.
* Skills development is the largest category, followed by data and infrastructure.
* Germany, France and the Netherlands lead overall, while Ireland tops per capita investment.

This new methodology provides policymakers and researchers with a common baseline to answer vital questions: Are we investing enough in AI? In the right areas? How do we compare globally?

Read the full AI Wonk blog post to explore how this framework helps bring clarity and accountability to one of the most important areas of digital transformation.

🔗 𝗟𝗶𝗻𝗸 𝗶𝗻 𝗰𝗼𝗺𝗺𝗲𝗻𝘁𝘀 👇

#OECD #AI #Investment #Data #Europe #Policy #AIWonk #EuropeanCommission
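A harmonised framework is what makes simple cross-country arithmetic, such as per capita comparisons, meaningful. A minimal sketch of that kind of comparison, using made-up placeholder figures rather than the report’s data:

```python
# country -> (total AI investment, EUR millions; population, millions).
# All figures are illustrative placeholders, not the OECD/EC report's numbers.
investment = {
    "Germany":     (60_000, 84.4),
    "France":      (45_000, 68.1),
    "Netherlands": (20_000, 17.9),
    "Ireland":     (9_000, 5.3),
}

# EUR millions / millions of people = EUR per person.
per_capita = {c: total / pop for c, (total, pop) in investment.items()}

for country, eur_per_person in sorted(per_capita.items(), key=lambda kv: -kv[1]):
    print(f"{country}: EUR {eur_per_person:,.0f} per capita")
```

With shared category definitions, the same arithmetic can be repeated per spending category (skills, data, infrastructure) or for public versus private shares.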
-
𝗧𝗵𝗲𝗿𝗲 𝗮𝗿𝗲 𝗺𝗮𝗻𝘆 𝘄𝗮𝘆𝘀 𝘁𝗼 𝗺𝗶𝗻𝗶𝗺𝗶𝘀𝗲 𝘄𝗮𝘁𝗲𝗿 𝘂𝘀𝗲 𝗿𝗲𝗹𝗮𝘁𝗲𝗱 𝘁𝗼 𝗔𝗜 𝗼𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝘀, 𝗯𝘂𝘁 𝘁𝗵𝗲𝘆 𝗺𝗮𝘆 𝗻𝗼𝘁 𝗯𝗲 𝘄𝗵𝗮𝘁 𝘆𝗼𝘂 𝘁𝗵𝗶𝗻𝗸

AI’s water consumption is not just a technical issue; it is a resource management challenge that requires innovation, coordination, and place-based strategies. Following Shaolei Ren’s 2023 AI Wonk post on AI’s water footprint, this new article, co-authored with Amy Luers, PhD (Microsoft), explores how both data centres and the electricity systems behind them drive rising water demand, and what can be done to manage consumption.

They show that:
• AI workloads are the fastest-growing driver of data-centre electricity and cooling needs.
• Direct water use from cooling is only part of the picture: up to 80% of total water impact often comes from electricity generation.
• Solutions depend on innovation, collaboration, and local context, from alternative cooling systems and reclaimed water use to shifting power sources toward low-water renewables.

AI’s water challenge is not only about technology but about the broader systems that power it. Understanding both is key to building a more sustainable digital future.

🔗 𝗥𝗲𝗮𝗱 𝘁𝗵𝗲 𝗳𝘂𝗹𝗹 𝗽𝗼𝘀𝘁 𝗼𝗻 𝘁𝗵𝗲 𝗔𝗜 𝗪𝗼𝗻𝗸 𝗯𝗹𝗼𝗴 𝗮𝘁 𝘁𝗵𝗲 𝗹𝗶𝗻𝗸 𝗶𝗻 𝗰𝗼𝗺𝗺𝗲𝗻𝘁𝘀. 👇

#AI #Sustainability #Water #DataCentres #OECD #AIWonk
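The direct-versus-indirect split comes down to simple arithmetic on two standard metrics: on-site water use per kWh of cooling and the water intensity of the grid electricity consumed. A minimal sketch with illustrative values chosen for the example (they are assumptions, not figures from the article):

```python
# Back-of-envelope water footprint for a data-centre workload.
# All values below are illustrative assumptions.
energy_kwh = 1_000_000        # electricity consumed by the workload
onsite_wue = 0.5              # on-site cooling water, litres per kWh
grid_water_intensity = 2.0    # water used to generate grid power, litres per kWh

direct = onsite_wue * energy_kwh               # cooling water at the facility
indirect = grid_water_intensity * energy_kwh   # water embedded in electricity
total = direct + indirect

print(f"Direct cooling water: {direct:,.0f} L")
print(f"Electricity-linked water: {indirect:,.0f} L ({indirect / total:.0%} of total)")
```

Under these assumed values the electricity-linked share is 80% of the total, which is why shifting power sources can matter as much as changing cooling technology.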