Newsletter of the EXTRA Working Group
of the World Academy of Art and Science
Running on EXTRA time…
Insights from the EXTRA Working Group to help you
keep track of ‘Existential Threats and Risks to All’
Welcome
to the third edition
of the EXTRA Newsletter
EDITORIAL
Artificial Intelligence: Between Threat and Promise
This issue explores the meteoric rise of AI as both a new threat and a potential savior. In our video feature, readers will find an interview with Jerome Glenn, Chair of the Advisory Committee on AI Regulation to the President of the UN General Assembly, in which he outlines the current state of AI development and highlights how far the regulatory process is lagging behind. EXTRA Director of Research Mike Marien provides an AI-focused overview summarizing key insights from recent reports, articles, and four 2025 books. In her article, Assistant Prof. Polonca Serrano explores the benefits and risks of AI in the education sector. Dr. Kiriti Prasad Choudhury examines how AI could actually re-humanize overstrained healthcare systems. And Samraj Matharu surveys the guardrails for AI being considered by the EU. This month’s featured reports, publications, and news continue the theme.
Please take note of forthcoming events hosted by EXTRA and our network!
Lorenzo Rodriguez, Co-editor, EXTRA Newsletter
Prof. Thomas Reuter, Co-editor & Chair, EXTRA Working Group
INTERVIEWS & WEBINARS
Original, short interviews or discussions with experts and
stakeholders on various categories of existential challenges.
EXTRA Interviews: Jerome Glenn & Artificial Intelligence - Urgent AGI Governance Challenges
In this issue, our guest Jerome Glenn, CEO of the Millennium Project and Chairman of the AGI Panel for the UN General Assembly, explains why we have just three years to consolidate adequate governance for Artificial General Intelligence before it is too late. This video feature is part of the EXTRA Interview Series, in which we ask experts for their insights into humanity's greatest existential challenges and the race to address them responsibly.
ARTICLES, ESSAYS & IDEAS
Original articles, op-ed pieces, and more – commissioned by EXTRA.
Recent Reports and Articles on the AI Race, Impacts, and Needed Guardrails
Mike Marien
Director of Research, EXTRA
For better and worse, Artificial Intelligence (AI) is already widespread and continues to evolve, potentially reaching AGI and superintelligence within the next few years. This overview summarizes the key points and highlights of recent reports and articles, as well as four books published in 2025. It is divided into three major parts: A) the AI race between the US and China, and among a handful of US technology companies spending massively on the technology; B) the current and expected impacts of AI; and C) creating guardrails for this emerging and influential technology.
Read more
Balancing Benefits and Risks: The Role of AI in Education
Polonca Serrano
Assist. Prof., Alma Mater Europaea University
AI is transforming education through intelligent tutoring systems, chatbots, and analytics tools that adapt to individual learning styles, provide real-time feedback, and improve outcomes. However, it also introduces risks such as superficial learning, reduced emotional resilience, deepening inequalities, threats to academic integrity, and platform dependence, and it cannot replace human judgment, critical thinking, and interpersonal guidance. The article explores AI's benefits and risks in education, emphasizing ethical, inclusive, and strategic implementation.
Read more
Kiriti Prasad Choudhury
Manager, Beximco Pharmaceuticals
Beyond Efficiency: AI for Rhythm-Aware, Compassionate Healthcare
AI is changing healthcare, but technology alone cannot resolve aging populations, chronic disease, and system strain. Modern medicine excels in precision yet lacks empathy, and healthcare remains fragmented. Integrating Medicine, Nature, Mind, and Rhythms, principles drawn from Ayurveda and Chinese Medicine and now validated by research, is essential. The article outlines a rhythm-aware framework using humane AI.
Read More
Europe’s Moral Compass for AI: From Regulation to Realisation
AI is rapidly transforming society. The EU has responded with a comprehensive framework, including the AI Act and the new Apply AI Strategy, to regulate AI risks and promote responsible deployment across sectors. The vision of an evolving "agentic web" highlights AI’s growing autonomy while maintaining responsible oversight.
Read More
Samraj Matharu
Founder, The AI Lyceum
UPCOMING EVENTS
A selection of events to be aware of that are organized
by EXTRA, allies, partners and organizations on our radar.
X-Risk Cafe, Inaugural Meeting
Existential Threats and Risks to All (EXTRA) Working Group event (by invitation only)
On-line | 22 October 2025 @ 8:00 PM CEST
Read More
EXTRA is holding a special outreach meeting with all who have come forward and offered to contribute to our work in various ways. If you would like to join this informal meet-and-greet event and have not yet contacted us, please email us (extra@worldacademy.org) by October 21 and provide some information about yourself to receive a registration link.
Coping with Polycrisis and
Systemic Risks: New approaches
to assessment and governance
Existential Threats and Risks to All (EXTRA) Working Group
On-line | 7 November 2025 @ 1:30 PM CEST
Read More
Empowering Educators: Building AI Pedagogy and Literacy for Future Learning
Higher Education Sustainability Initiative (HESI)
Hybrid Event | October 21, 2025 @ 7:00-8:30 AM EST
Read More
The Future We Agreed:
One Year of the Pact
AI Ethics and Integrity International Association (AIEI)
Lisbon | November 11, 2025 @ 6:00-11:00 PM CEST
Read More
REPORTS
Our latest selection of the most notable published reports on Existential Threats and Risks. Beat the info glut by taking a look at our monthly five.
If you have time, check the 20 Notable Reports or the complete EXTRA Directory on our website.
This month, we have compiled a special package of reports on AI. The complete list does not fit in this newsletter, but can be accessed HERE.
Human Development Report 2025
United Nations Development Programme
March 2025, 324p.
Explores the profound, dual-edged impact of artificial intelligence (AI) on human development, noting breakthroughs in creativity and productivity alongside risks of bias, inequality, and ethical dilemmas. Finds that “AI is increasingly enabling cross-border collaboration in research and innovation, fostering new networks of knowledge production across regions” but warns of existential risks, recommending balanced governance to promote inclusion, equity, and resilient systems.
AI-Enabled Coups: How a Small Group Could Use AI to Seize Power
Forethought
April 2025, 64p.
Examines the growing risk that advanced artificial intelligence (AI) could enable small, determined groups to orchestrate coups d’état. “Powerful AI systems may be leveraged to create extraordinarily loyal forces” through secret loyalties embedded in military and autonomous systems, alongside exclusive control over AI-augmented cyber offense and strategic capabilities. Effective prevention requires robust oversight, including the diversification of AI providers, rigorous red-teaming of military AI, human-controlled fail-safes, and the broad public sharing of general-purpose AI capabilities.
Mapping AI Risk Mitigations: Evidence Scan and Draft Mitigation Taxonomy
MIT AI Risk Initiative
March 2025, 24p.
Synthesizes 831 risk mitigations from 13 frameworks, categorizing them into “Governance & Oversight, Technical & Security, Operational Process, and Transparency & Accountability Controls.” Stresses operational safeguards, continuous monitoring, and that “AI risk management is an emerging concept,” serving as a foundational resource for global decision-makers.
Superintelligence Strategy: Expert Version
Center for AI Safety
March 2025, 41p.
Frames superintelligent AI as a strategic and security challenge. Details the Mutual Assured AI Malfunction (MAIM) deterrence framework, highlights chip vulnerability and supply chain risks, and stresses the need for legal, multipolar governance frameworks. “Outcomes hinge on what we do next” underlines the urgent importance of coordinated deterrence.
The Artificial General Intelligence Race and International Security
Sarah Kreps et al., RAND Corp & Perry World House
Sept 2025, 72p.
Assesses AGI’s impact on international stability, focusing on U.S.–China competition and the transition phase before AGI’s maturity. Notes traditional arms control’s inadequacy for AGI’s dual-use nature and proposes innovative international governance approaches, including an “AI cartel”. Warns that “risks arise not only from AGI’s eventual power, but also critically from the ambiguous and volatile period preceding its arrival.”
The Singapore Consensus on Global AI Safety Research Priorities
Singapore Conference & Infocomm Media Development Authority
May 2025, 33p.
Synthesizes expert agreement on defense-in-depth measures for safe GPAI systems, spanning risk assessment, alignment, robustness, and real-time control and intervention mechanisms. Identifies cooperation in risk thresholds, incident response, and dynamic benchmarking as ways to reduce collective harm and guide global AI safety R&D.
Artificial Intelligence and Peacebuilding: Opportunities and Challenges
International Panel on the Information Environment
Sept 2025, 60p.
Analyzes AI’s dual role in peacebuilding, enhancing conflict analysis and citizen engagement while highlighting bias, misinformation, and ethical risks. Advocates rights-based, conflict-sensitive AI design, local participation, and ongoing risk assessment, emphasizing human oversight and contextual adaptation.
REVIEWS
Synopses and reviews of scientific and policy articles
from external sources not commissioned by EXTRA.
Introduction to AI Safety, Ethics, and Society
By Dan Hendrycks, December 2024
CRC Press
Examines AI risks and frameworks for safe development. Emphasizes proactive investment in robustness and monitoring to mitigate catastrophic scenarios. Highlights competitive pressures exacerbating risk, calling for holistic interventions. Offers actionable principles for navigating an uncertain AI future responsibly.
Global Governance of the Transition to Artificial General Intelligence: Issues and Requirements
By Jerome Clayton Glenn, August 2025
De Gruyter Brill, $130
Today's Artificial Narrow Intelligence serves only limited purposes; AGI could greatly advance humanity but also risks ending civilization. Drawing on input from 55 experts, the book covers governance, value alignment, international cooperation, decentralization, UN enforcement, audits, quantum computing, and a 2050 self-actualizing scenario.
NEWS from the World Press
Links to a must-read selection of news for a global outlook across the spectrum of Existential Threats and Risks sourced from the media and web.
Artificial Intelligence May Not Be Artificial. Harvard Gazette, Open Access, September 2025
An exploration of how human labor and decision-making remain embedded in AI systems, challenging assumptions about algorithmic autonomy and highlighting the role of data workers in training machine learning models.
Episode 721: Rogue AI vs Humanity? Project Save the World, Open Access, October 16, 2025
A discussion examining existential risks posed by advanced artificial intelligence systems, exploring scenarios where AI development could threaten human autonomy and survival, and assessing current safety measures.
Artificial intelligence: the good, the bad and the ugly environmental costs. The Globe and Mail, Open Access, October 10, 2025
An analysis of AI data centers' rapidly growing energy demands and their environmental impact, examining how the rush to deploy AI infrastructure is driving energy-intensive solutions with significant carbon footprints and resource consumption.
The Coming AI Backlash. Foreign Affairs, Paywall, 2025
A policy analysis warning that mounting public concerns over AI safety, data abuses, cyberattacks, algorithmic bias, and misinformation will trigger regulatory backlash against the tech sector despite current political support for minimal AI oversight.
Fair AI for All: Kazakhstan Pushes for Global Rules on Artificial Intelligence. Qazinform, Open Access, 2025
Kazakhstan's diplomatic efforts to advance equitable AI governance frameworks at the international level, emphasizing inclusivity and fair access to emerging technologies.
AI Synthetic Data and Strong Governance. World Economic Forum, Open Access, October 2025
Analysis of synthetic data's growing role in AI development and the governance frameworks needed to ensure ethical use while maintaining innovation and privacy standards.
Governance and Artificial Intelligence: Keys to Integrated End-to-End Approach for Early Warnings for All. World Meteorological Organization, Open Access, 2025
WMO's perspective on integrating AI into global early warning systems for climate and weather disasters, emphasizing the importance of governance structures for effective implementation.
Winners and Losers of the AI Revolution: Artificial Intelligence is Radically Changing the Employment Landscape. Der Spiegel International, Paywall, 2025
An examination of AI's impact on labor markets, identifying which sectors and workers benefit from automation versus those facing displacement and economic disruption.
How People Around the World View AI. Pew Research Center, Open Access, October 15, 2025
Global survey data revealing public attitudes toward artificial intelligence across different countries, cultures, and demographic groups, including concerns about privacy, employment, and ethical implications.
India's Use of Artificial Intelligence During the Indo-Pak Four Day Crisis. The Diplomat, Open Access, October 2025
Analysis of India's deployment of AI technologies for intelligence gathering, decision support, and strategic communication during a recent security crisis with Pakistan.
Why Big Tech is Going Nuclear. BBC Worklife, Open Access, October 8, 2025
Investigation into major technology companies' investments in nuclear energy to power energy-intensive AI data centers and meet sustainability commitments.
Artificial Intelligence and Digital Sovereignty in the Face of 21st Century Powers. Pressenza, Open Access, October 2025
Discussion of how nations are asserting digital sovereignty amid AI dominance by major tech powers, exploring implications for developing countries and alternative governance models.
Statement on the General Assembly Decision on New Artificial Intelligence Governance Mechanisms. United Nations Secretary-General, Open Access, August 26, 2025
Official UN statement welcoming the General Assembly's establishment of new governance mechanisms for artificial intelligence within the multilateral system.