
SPECIAL COLLECTIONS
Recent Reports and Articles on the AI Race, Impacts, and Needed Guardrails

EXTRA Director of Research, Michael Marien
mdmarien@outlook.com

For better and worse, Artificial Intelligence or AI is already widespread, and still evolving, perhaps to AGI and superintelligence in the next few years. This overview seeks to identify many of the headlines and bottom lines of recent reports and articles, as well as three books—all published in 2025, with two exceptions. It is divided into three major parts:

I) The AI Race between the US and China, and a handful of massively-spending US technology organizations;
II) Impacts of AI: both current and expected; and
III) Creating Guardrails for this emerging and influential technology.

THE AI RACE

Two recent articles provide essential background to what’s happening. AI Frenzy Escalates (1) describes Meta, Microsoft, Amazon, and Google planning to spend $320 billion on infrastructure in 2025, mostly on data centers. Critics argue that there is no guarantee that this investment will live up to its potential, but many executives believe that “the bigger risk is not spending enough to keep up with rivals.” Zuckerberg Pushes Superintelligent AI (2) reports that Meta has revamped its AI strategy to include a new team dedicated to superintelligent AI that will improve “nearly every aspect of what we do.”

This investment is given a political push by Trump Intends to Unleash AI to Spur Boom (3), reporting that the US president has signed three executive orders and outlined an “AI Action Plan” to “remove red tape and onerous regulation.” America started the AI race, Trump said, and “America is going to win it.” Arguably, the EU could stop Trump (4).

The investment in AI, requiring data centers, chip factories, and power supply, is pumping up the US stock market. “Companies will spend $375 billion globally in 2025 on AI infrastructure, projected to rise to $500 billion next year.” The big tech companies “are the largest financiers of the frenzy, but private equity firms have been pouring in capital, too.” A major asset management firm estimates that “AI infrastructure will sop up $7 trillion over the next ten years” (5). “If there were no data centers to build, dollars would flow into other types of investment” (6).

A survey of the visions of AI firms, “from the very plausible to the fantastical,” notes that tech companies “are taking a leap of faith,” fueled by FOMO, or “Fear of Missing Out,” which does not come cheap (7).

In Empire of AI, Karen Hao, a former investigative journalist with the Wall Street Journal, describes a new age of empire in which a small handful of globally scaled companies dominate the field, led by OpenAI and its ChatGPT. This massively disruptive sector requires vast resources to create large-language models, “arguably the most fateful tech arms race in history” (8).

IMPACTS

“The Economics of Superintelligence,” a July cover feature in The Economist (9), argues that “Many fear a hellscape, in which AI-enabled terrorists build bioweapons that kill billions, or a misaligned AI that slips its leash and outwits humanity.” But this fear crowds out thinking about “the immediate, probable, predictable—and equally astonishing—effects of a non-apocalyptic AI.” Hence the possibility of an explosion of economic growth, with wild swings in stocks “as it becomes clear which companies were winning and losing winner-take-all contests.”

But these two poles of “hellscape” vs. a new economic era of global abundance are countered by recent doubts about AGI success, as well as the negative impacts of AI already visible and possible in the near future.

Gary Marcus, a founder of two AI companies and author of six AI books, argued in September that GPT-5 is nowhere near the revolution many had expected, and that “the chances of AGI’s arrival by 2027 now seem remote” (10). Servaas Storm asserted in October that the US has reached “Peak Gen AI” for current large language models, and that further scaling of chips and data centers will not deliver AGI (11).

Some negative impacts are already visible. A “flood of fake photos and images” has amplified social and partisan divisions and bolstered antigovernment sentiment (12). OpenAI’s new “Sora” smartphone app enables the creation of videos entirely from AI, making disinformation easier and endless (13). “Be Wary of Chatbots Offering Guidance” (NY Times, 30 Sept 2025, D6) warns of “falling prey to AI’s flattery” and AI companions that lead to social deskilling.

Cyberattacks are escalating in speed, volume, and sophistication, with GenAI chatbots serving as a “force multiplier” for the global hacker toolbox (14). A report from RAND Europe warns of uncontrollable AI incidents and calls for mandatory security audits and independent oversight (15). Even scarier, an 80-page April report from Forethought warns of AI-enabled coups to seize power, reinforced by a 40-page July report from RAND on an AGI Coup as one of several scenarios (16).

An excellent overview is provided by an Elon University survey of 301 experts on “Being Human in 2035” (17), in which positive changes are expected in curiosity and capacity to learn, decision-making and problem-solving, and innovative thinking, with negative changes in social and emotional intelligence, capacity to think deeply, mental well-being, and sense of identity and purpose. Fundamental change in human capacities was expected by 23%, considerable change by 38%, moderate change by 31%, and little or no change by 8%. Overall, 16% saw AI as mostly for the better for most people worldwide, 50% saw roughly equal changes for better and for worse, 23% saw changes mostly for the worse, 6% saw little or no change, and 5% were unsure.

Finally, if all of this seems too much to handle, a provocative new book by Eliezer Yudkowsky and Nate Soares, leaders of the Machine Intelligence Research Institute in Berkeley, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, argues that the stakes are existential (18).

GUARDRAILS

In August, the UN General Assembly established an Independent Scientific Panel on AI and an annual Global Dialogue on AI Governance, recognizing the need to regulate AI “before it becomes a threat to world stability and peace.” However, permanent members of the outdated Security Council can block initiatives to rein in the technology; such members include China and the US, the leading competitors in AI development. Michael Kratsios, Director of the White House Office of Science and Technology Policy, said on Sept 24 that “We totally reject all efforts by international bodies to assert centralized control in global governance of AI” (19).

Three other UN bodies have advised on developing guardrails:

  • UN Economic and Social Council (Jan 2024) stressed the urgent need for governance as AI advances to AGI and “potential existential risks” (20).
  • UN High Level Advisory Body on AI (July 2024) warned that AI governance is crucial to address challenges and risks, and common ground and coordination are needed (21).
  • UN Development Programme (March 2025) explored the transformative potential of AI for better and worse, including potential “existential risks,” calling for balanced governance that supports innovation while safeguarding inclusion and equity (22).

Other organizations have also urged various guardrails:

  • Centre for International Governance Innovation (June 2024) sponsors a Global AI Risks Initiative concerned with loss of control of advanced AI systems and AI weaponization; proposes a Framework Convention to codify shared objectives for AI cooperation (23).
  • AI Action Summit (Feb 2025) launched the Coalition for Sustainable AI, initiated by France, UNEP, and the ITU with more than 100 partners in 11 countries; the Coalition supports the SDGs and desires an AI respectful of Planetary Boundaries, warning of “a risk of fragmented and redundant initiatives” (24).
  • Center for AI Safety (March 2025) warns that superintelligent AI would be a matter of national security, and advocates precautionary regulatory frameworks and alignment with human values. Describes a Mutual Assured AI Malfunction (MAIM) deterrence framework somewhat similar to the MAD policy for nuclear deterrence (25).
  • MIT AI Risk Initiative (March 2025) synthesizes 831 risk mitigations from 13 frameworks, categorizing them into Governance & Oversight, Technical & Security, Operational Process, and Transparency & Accountability Controls (26).
  • Singapore Conference & Infocomm Media Development Authority (May 2025) synthesizes expert agreement on defense-in-depth measures for safe GPAI systems to reduce collective harm and guide global safety R&D; proposes shared risk standards, rapid incident response, and adaptive structures to manage cascading AI impacts (27).
  • International Panel on the Information Environment (Sept 2025) analyzes AI’s role in peacebuilding, as well as misinformation and ethical risks. Proposes ongoing risk assessment, cross-sector risk management, and inclusive and transparent solutions with local input (28).
  • Millennium Project (Sept 2025) published a 205-page book by CEO Jerome Clayton Glenn on global governance of the transition to AGI, with chapters on what might happen if AGI is not governed, managing international cooperation, flexible response to new issues, disruptions that could complicate enforcement, reducing and preventing crime and terrorism, continuous AGI audits, etc. (29).
  • Humanity AI (Oct 2025 press release) is a philanthropic coalition backing organizations shaping AI for people and communities. Funding priorities include defending democracy, education in the best interests of all students, humanities and culture, enhancing how people work, and security in deploying AI to protect people (30).

In sum, there are plenty of thoughtful organizations to advise the UN’s Independent Scientific Panel on AI, and to participate in the annual Global Dialogue on AI Governance.

CONCLUSION

A May 2025 article in Scientific American asked, “Could AI Really Kill Off Humans?” (31). It concludes that “no scenario can be described where AI is conclusively an extinction threat to humanity,” noting that it is very hard to kill all of us through nuclear war, biological pathogens, or climate change. But it is potentially easy to kill off many billions of us through any number of AI-related catastrophes. Effective global governance of AI is obviously needed, the sooner the better, but it will probably be too little, too late. This forecast will hopefully be wrong.

On a somewhat more positive note, the first “Time 100AI” list of “The 100 Most Influential People in Artificial Intelligence” provides brief descriptions of “Leaders, Innovators, Shapers, and Thinkers,” both supporters and critics (32).

REFERENCES
  1. “AI Frenzy Escalates as Giants Go Shopping,” New York Times, 30 June 2025, B1. Describes tech companies “accelerating their spending, pumping hundreds of billions of dollars into their frantic effort to create systems that can mimic or even exceed the abilities of the human brain.”
  2. “Zuckerberg Pushes Superintelligent AI as Meta’s Profits Leap Past Expectations,” New York Times, 9 Aug 2025, B3. Mark Zuckerberg is the CEO of Meta.
  3. “Trump Intends to Unleash AI to Spur Boom,” New York Times, 24 July 2025, p.1. This “AI Action Plan” embraces the tech industry’s view that it must work with few guardrails, “a forceful repudiation of other governments, including the EC, that have approved regulations to govern development of AI.”
  4. “Trump Wants to Let AI Run Wild. This Might Stop Him,” New York Times, 20 Aug 2025, A22. An expert on the EU argues that the EU has put in place a number of regulations over the past decade to balance AI innovation, transparency, and accountability. To operate in international markets, US companies must follow the rules of these markets; thus, the EU, an enormous market committed to regulating AI and establishing guardrails against possible risks, could well thwart Trump’s techno-optimist vision. ALSO SEE “California Governor Signs AI Safety Law,” New York Times, 1 Oct 2025, B4, on “The Transparency in Frontier AI Act” requiring advanced AI companies to report safety protocols. The same article also notes that “38 states passed or enacted about 100 AI regulations” in 2025.
  5. “Money Being Poured Into AI Is Propping Up Real Economy,” New York Times, 28 Aug 2025, p.1. “Optimism around the windfall that AI may generate…(is lifting) the entire domestic economy.”
  6. “The AI Economy Is Booming. Everything Else Is Meh,” New York Times, 7 Oct 2025, A21. A Yale economist notes that, despite tariff rates not seen in a century, the stock market has risen to new highs 30 times in 2025, in spite of Trump’s policies, not because of them. “The coat of AI gloss is giving the administration runway to double down on bad ideas.” The AI revolution “is masking real problems.”
  7. “What Exactly Are AI Firms Attempting to Build?” New York Times, 29 Sept 2025, B1 & B6.
  8. Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Karen Hao. Penguin, May 2025, 496p, $32. Published in the UK by Allen Lane as Empire of AI: Inside the Reckless Race for Total Domination. Somewhat in contrast, ALSO SEE “A Different Kind of Superpower: AI in India,” The Economist, 20 Sept 2025, 11-12. Notes that India is now the second-largest market for OpenAI, which sells in India for a fifth of the price of its cheapest American plan; some 92% of Indian office workers regularly use AI tools, compared with 64% in the US. Frugal products infused with AI could be an Indian export across the developing world, a path different from the US or China, but “no less consequential.”
  9. “The Economics of Superintelligence: If Silicon Valley’s Predictions Are Even Close to Being Accurate, Expect Unprecedented Upheaval,” The Economist (cover feature), 26 July 2025, p.7.
  10. “The Fever Dream of Imminent Superintelligence Is Finally Breaking,” Gary Marcus, New York Times, 8 Sept 2025, A22. ALSO SEE “The Cost of the AI Delusion: By Chasing Superintelligence, America Is Falling Behind in the Real AI Race,” Foreign Affairs, 26 Sept 2025. [Not seen]
  11. “The AI Bubble and the US Economy: How Long Do ‘Hallucinations’ Last?” Servaas Storm. Policy Commons, 2 Oct 2025, 13p. Also cautions that “fixation on AGI may crowd out more practical applications of existing AI.”
  12. “With an Onslaught of Manipulated Images, AI Is Wearing Down Democracy,” New York Times, 29 June 2025, p.1. Free and easy to use, AI tools undermine faith in electoral integrity.
  13. “OpenAI’s New Video App Is Maybe Too Scary Good, for Better or Worse,” New York Times, 4 Oct 2025, B1 & B4. A companion article on B4 is entitled “App Makes Dissemination of Disinformation Easier, Convincing, and Endless,” despite commenting that OpenAI “had made an effort to include guardrails” for Sora. ALSO SEE “When AI Came for Hollywood,” New York Times, 5 Oct 2025, SR3, on the Sora app and the creation of Tilly Norwood, a brunette actress created by AI and threatening a world run by fakes.
  14. CrowdStrike 2025 Global Threat Report. CrowdStrike (Austin, TX), April 2025, 53p.
  15. Strengthening Emergency Preparedness and Response for AI Loss of Control Incidents. Elika Somani et al. RAND Europe, Aug 2025, 61p.
  16. AI-Enabled Coups: How a Small Group Could Use AI to Seize Power by Tom Davidson et al. Forethought, 15 April 2025, 80p (including c.135 references). Warns that leaders could fully replace personnel with AI systems that are singularly loyal to them. ALSO SEE How Artificial General Intelligence Could Affect the Rise and Fall of Nations by Barry Pavel et al. RAND Corp, July 2025, 40p. “AGI Coup” is one of several scenarios.
  17. Being Human in 2035: How Are We Changing in the Age of AI? Elon University, Imagining the Digital Future Center (Elon, NC), April 2025, 286p.
  18. If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. Eliezer Yudkowsky and Nate Soares. Little, Brown, Sept 2025, 272p. A full-page profile of the book and its authors is given in “A.I. Prophet Wants It All Shut Down,” New York Times, 15 Sept 2025, B1.
  19. “Security Council Raises an Alarm on the Potential Dangers of AI,” New York Times, 26 Sept 2025, A7. A subsequent article on 28 Sept (p.10), “UN Seeks Global Guardrails on AI, to the Trump Administration’s Dismay,” announces the formation of “a 40-member panel of scientific experts to synthesize and analyze the research on AI risks and opportunities,” which could result in an independent AI watchdog similar to the IAEA on atomic energy. Also mentions that a group of more than 200 leaders “called last week for global AI guardrails.”
  20. Artificial Intelligence Governance to Reinforce the 2030 Agenda. UN Economic and Social Council, 29 Jan 2024, 16p.
  21. Governing AI for Humanity. UN High-Level Advisory Body on AI, 7 July 2024, 95p.
  22. A Matter of Choice: People and Possibilities in the Age of AI. Human Development Report 2025. UN Development Programme, March 2025, 324p.
  23. Framework Convention on Global AI Challenges: Accelerating International Cooperation to Ensure Beneficial, Safe, and Inclusive AI. Centre for International Governance Innovation (Waterloo, Ontario, Canada), June 2024, 34p. Describes CIGI’s Global AI Risks Initiative.
  24. Coalition for Sustainable AI. AI Action Summit, Feb 2025, 5p. Launched at the Paris AI Action Summit, initiated by France, UNEP, and the ITU.
  25. Superintelligence Strategy: Expert Version. Dan Hendrycks (Director, Center for AI Safety), Eric Schmidt, and Alexander Wang. March 2025, 40p.
  26. Mapping AI Risk Mitigations: Evidence Scan and Draft Mitigation Taxonomy. MIT AI Risk Initiative, March 2025, 24p.
  27. The Singapore Consensus on Global AI Safety Research Priorities. Singapore Conference & Infocomm Media Development Authority, May 2025, 33p.
  28. Artificial Intelligence and Peacebuilding: Opportunities and Challenges. International Panel on the Information Environment, Sept 2025, 60p.
  29. Global Governance of the Transition to Artificial General Intelligence: Issues and Requirements. Jerome Clayton Glenn (CEO, Millennium Project). De Gruyter Brill, Sept 2025, 205p, $147; Amazon Kindle edition $104. Derived from Requirements for Global Governance of AGI: Phase 2 of a Real-Time Delphi, Millennium Project, April 2024, 46p.
  30. Humanity AI Commits $500 Million to Build a People-Centered Future for AI. Humanity AI, 14 Oct 2025 press release by a coalition of 10 philanthropic leaders, including the AI Opportunity program of the MacArthur Foundation. Grants to begin in 2026.
  31. “Could AI Really Kill Off Humans?” Michael J.D. Vermeer (RAND), Scientific American, 6 May 2025.
  32. “Time 100AI: The 100 Most Influential People in Artificial Intelligence,” Time, 8 Sept 2025, pp 37-53. This is followed by an AI Special Report with four articles (pp 55-71): “Beyond Human Control: The Race for Artificial General Intelligence Poses New Risks to an Unstable World”; rising electricity bills due to “energy-guzzling data centers built to train and run AI models”; how AI will reshape politics globally through scientific advances and job displacement; and “the agentic age” as a new frontier for AI and humans, where machines do cognitive work once performed by humans. Time has been identifying the 100 Most Influential People for 20 years, and is now specializing in a few areas, such as AI.
REPORTS COLLECTION

Human Development Report 2025
United Nations Development Programme
March 2025, 324p. Explores the profound, dual-edged impact of artificial intelligence (AI) on human development, noting breakthroughs in creativity and productivity alongside risks of bias, inequality, and ethical dilemmas. Finds that “AI is increasingly enabling cross-border collaboration in research and innovation, fostering new networks of knowledge production across regions” but warns of existential risks, recommending balanced governance to promote inclusion, equity, and resilient systems. Key recommendations: Foster inclusive, trustworthy AI development; address IP challenges; implement equity-focused regulatory frameworks.

Mapping AI Risk Mitigations: Evidence Scan and Draft Mitigation Taxonomy
MIT AI Risk Initiative
March 2025, 24p. Synthesizes 831 risk mitigations from 13 frameworks, categorizing them into “Governance & Oversight, Technical & Security, Operational Process, and Transparency & Accountability Controls.” Stresses operational safeguards, continuous monitoring, and that “AI risk management is an emerging concept,” serving as a foundational resource for global decision-makers. Key recommendations: Prioritize ongoing monitoring and robust risk processes; encourage community and stakeholder feedback on actionable strategies.

Superintelligence Strategy: Expert Version
Center for AI Safety
March 2025, 41p. Frames superintelligent AI as a strategic and security challenge. Details the Mutual Assured AI Malfunction (MAIM) deterrence framework, highlights chip vulnerability and supply chain risks, and stresses the need for legal, multipolar governance frameworks. “Outcomes hinge on what we do next” underlines the urgent importance of coordinated deterrence. Key recommendations: Strengthen supply chains, legal frameworks, and international deterrence to prevent the uncontrolled escalation of superintelligence.

The Artificial General Intelligence Race and International Security
Sarah Kreps et al., RAND Corp & Perry World House
Sept 2025, 72p. Assesses AGI’s impact on international stability, focusing on U.S.–China competition and the transition phase before AGI’s maturity. Notes the inadequacy of traditional arms control for AGI’s dual-use nature and proposes innovative international governance approaches, including an “AI cartel”. Warns that “risks arise not only from AGI’s eventual power, but also critically from the ambiguous and volatile period preceding its arrival.” Key recommendations: Develop tailored governance for dual-use technology; maintain strategic communication and flexibility during AGI’s formative phase.

How Artificial General Intelligence Could Affect the Rise and Fall of Nations: Visions for Potential AGI Futures
Barry Pavel et al., RAND Corp
July 2025, 40p. Explores scenarios for AGI’s influence on global power, including the “New Renaissance” through cooperative innovation versus “Governance Failure” or an “AGI Coup” driven by misaligned, centralized superintelligence. Identifies existential risks, including economic, security, and authoritarian shifts. Stresses the importance of public-private partnerships and robust alliance structures for alignment and safety. Key recommendations: Promote balanced oversight, resilient alliances, and proactive safety and ethics protocols.

Strengthening Emergency Preparedness and Response for AI Loss of Control Incidents
Elika Somani et al., RAND Europe
Aug 2025, 61p. Calls for multi-layered strategies to prevent uncontrollable AI incidents, recommending mandatory reporting, security audits, independent oversight, and a safety-first development culture. Notes preparedness must precede deployment, addressing open-source risks and emphasizing information sharing. Key recommendations: Institute mandatory reporting, routine audits, and joint risk management across stakeholder groups.

The Singapore Consensus on Global AI Safety Research Priorities
Singapore Conference & Infocomm Media Development Authority
May 2025, 33p. Synthesizes expert agreement on defense-in-depth measures for safe GPAI systems, spanning risk assessment, alignment, robustness, and real-time control and intervention mechanisms. Identifies cooperation in risk thresholds, incident response, and dynamic benchmarking as ways to reduce collective harm and guide global AI safety R&D. Key recommendations: Establish shared risk standards, rapid incident response, and adaptive institutional structures to manage cascading AI impacts.

Artificial Intelligence and Peacebuilding: Opportunities and Challenges
International Panel on the Information Environment
Sept 2025, 60p. Analyzes AI’s dual role in peacebuilding, enhancing conflict analysis and citizen engagement while highlighting bias, misinformation, and ethical risks. Advocates rights-based, conflict-sensitive AI design, local participation, and ongoing risk assessment, emphasizing human oversight and contextual adaptation. Key recommendations: Design inclusive and transparent AI solutions with local input; foster international cooperation and cross-sectoral risk management for fragile states.

Multi-Stakeholder Forum on Science, Tech, and Innovation for the SDGs
UN Economic and Social Council
31 May 2024, 19p summary. Surveys solutions and innovations to support progress across the SDGs, focusing on Goals 1, 2, 13, 16, and 17, with more than 300 scientists submitting briefs and 99 passing peer review. Emphasizes AI-driven precision farming that could increase yields up to 70% by 2050 and AI’s potential in health care, but notes that AI data centers consume 1% of global electricity and use large amounts of freshwater.

Summit of the Future Outcome Documents: Pact for the Future, Global Digital Compact, and Declaration on Future Generations
United Nations
Sept 2024, 66p. Final version of the Pact, listing 56 Actions on sustainable development and financing (“we will take bold, ambitious, accelerated, just and transformative actions to implement the 2030 Agenda”), international peace and security (“we will redouble our efforts to build and sustain peaceful, inclusive and just societies and address the root causes of conflicts”), science and technology, youth and future generations, and global governance. The Global Digital Compact (pp40-55) seeks to “close all digital divides” and enhance AI governance. The Declaration seeks stronger youth participation.

Crowdstrike 2025 Global Threat Report
CrowdStrike
April 2025, 53p. “Cyberattacks are escalating in speed, volume, and sophistication.” Identifies a shift in 2024 toward streamlined, scalable attacks driven by a business-like approach. “Don’t underestimate today’s enterprising adversaries,” with a “force multiplier” impact of off-the-shelf chatbots making genAI “a popular addition to the global hacker toolbox.” In 2024, China-nexus activity surged 150% across all sectors. Voice-phishing (vishing) attacks skyrocketed, with the average e-crime breakout time averaging 48 minutes. Most detections were malware-free. Access broker ads increased, and 26 new adversaries tracked by CrowdStrike raised the total to 257. North America had 53% of interactive intrusions, followed by 14% in Russia, 11% in Europe, and 7% in India. Emphasizes the need for proactive defense strategies and adaptive cyber resilience as threat actors grow more agile and commercially motivated.

Coalition for Sustainable AI
AI Action Summit
Feb 2025, 5p. Launched at the Paris AI Action Summit, the Coalition, initiated by France, UNEP, and the ITU, with >100 partners in 11 countries, unites global stakeholders to advance AI’s alignment with environmental and climate goals while addressing its environmental impact. It provides a “Platform of Engagement” to connect stakeholders and an “Initiatives Hub” to enhance collaboration, visibility, and avoid duplication. The Coalition supports the UN’s SDGs and coordinates with existing initiatives, “continuing work for an AI respectful of Planetary Boundaries.” The transformative potential of AI in tackling the climate and environmental crisis is already unfolding. Still, the environmental footprint of AI is also growing, and there is “a risk of fragmented and redundant initiatives that dilute impact.”

Being Human in 2035: How Are We Changing in the Age of AI?
Imagining the Digital Future Center (Elon Univ, Elon NC)
April 2025, 286p. A survey of 301 experts asked to predict AI impact by 2035 on 12 essential human traits and capabilities. Change is likely to be primarily positive in curiosity and capacity to learn, decision-making and problem-solving, and innovative thinking. Change is likely to be mostly negative in social and emotional intelligence, capacity to think deeply, trust in widely shared values and norms, mental well-being, sense of identity and purpose, etc. Dramatic and fundamental change in human capacities as advanced AI is broadly adapted was expected by 23% of the experts; considerable change by 38%, moderate but noticeable change by 31%, minor and barely perceptible change by 5%, and no noticeable change by 3%. Overall, 16% view AI as mostly beneficial for most people worldwide, 50% see fairly equal changes for better and worse, 23% believe changes will mostly be for the worse for most people, 6% expect little to no change overall, and 5% are unsure.

Empire of AI: Dreams and Nightmares in Sam Altman’s Open AI
Karen Hao (former Wall Street Journal writer)
Penguin, May 2025, 496p, $32. (Published in the UK by Allen Lane as Empire of AI: Inside the Reckless Race for Total Domination.) An AI expert and investigative journalist describes a new and ominous age of empire, where a small handful of globally scaled companies are at the forefront, led by OpenAI and its ChatGPT. The vision of success for this massively disruptive sector requires vast resources to create massive large-language models, “arguably the most fateful tech arms race in history.”

Governing AI for Humanity
UN High-Level Advisory Body on AI
7 July 2024, 95p. AI governance is “crucial” to address challenges and risks, and ensure its “tremendous potential” is realized. Despite much discussion, “the patchwork of norms and institutions is still nascent and full of gaps.” On common ground and benefits, coordination and implementation gaps, a UN AI Office, etc.

Framework Convention on Global AI Challenges: Accelerating International Cooperation to Ensure Beneficial, Safe and Inclusive AI
Centre for International Governance Innovation (Waterloo, ON, Canada)
June 2024, 34p. CIGI’s Global AI Risks Initiative is concerned with loss of control of advanced AI systems and AI weaponization (misuse of AI systems to cause harm). A Framework Convention should codify the most important shared objectives for AI cooperation in addressing the most urgent issues posed by “accelerating development of AI.”

Artificial Intelligence Governance to Reinforce the 2030 Agenda
UN Economic and Social Council
29 Jan 2024, 16p. On AI’s potential to accelerate progress in poverty reduction, education, and other areas, as well as its associated risks, including job displacement and data bias. Stresses the urgent need for governance, as AI rapidly advances to AGI and potential existential risks.

Requirements for Global Governance of Artificial General Intelligence: Phase 2 of a Real-Time Delphi
The Millennium Project (Jerome Glenn, CEO)
April 2024, 46p. AGI is an advanced AI capable of autonomous learning across domains. Most experts project the emergence of AGI within 3-5 years, followed by Artificial Superintelligence (ASI). Many of the 229 respondents warn of existential risks from unregulated AGI.

Managing Extreme AI Risks Amid Rapid Progress
Yoshua Bengio, Geoffrey Hinton, and 23 others, Science
20 May 2024, 5p. AI is progressing rapidly, as companies shift focus to generalist AI systems that act autonomously, but with risks including large-scale social harms, malicious uses, and loss of human control. “AI safety research is lagging.” Governance measures must prepare us for sudden AI breakthroughs with an automatic trigger when AI hits certain milestones.

The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks
MIT Future Tech
Aug 2024, 79p. Classifies 777 risks into 7 AI risk domains and 23 sub-domains.

The Digitalist Papers: Artificial Intelligence and Democracy in America
Stanford Digital Economy Lab
24 Sept 2024, 240p. $36.72; $28.97pb from Amazon. Just as the Federalist Papers of the 18th century analyzed the great challenges of the day, these 12 essays by 19 “thought leaders” consider new challenges to democracy and participatory practices, AI’s potential to transform government operations and public service, the complex challenges of AI regulation, and the need for participatory frameworks and ethical considerations.

AI Landscapes: Exploring Future Scenarios of AI to 2030
Rachel Adams and 20 Others, Economist Impact
Feb 2024, 29p. Four AI development scenarios, ranging from global cooperation to fragmented policies.

The AI Revolution: What the New Age of Artificial Intelligence Means for Humanity
NewScientist Essential Guide No. 23
July 2024, $15. Why has AI suddenly leapt forward? Describes how the technology works, its capabilities, and “future horizons from utopia to annihilation.”

The Age of AI: And Our Human Future
Henry A. Kissinger, Eric Schmidt (former Google CEO & Chair), and Daniel Huttenlocher, Little, Brown and Company
Oct 2021, $30. Discusses the emerging human-machine partnership, the evolution of AI, the dream of AGI, global network platforms and disinformation, security and world order, conflict in the digital age, AI and the international order, managing AI, human identity and AI, and the essential need for an AI ethic.

Technology as a Force for Good: Technology Driving the Transition to a Superior Future
Force for Good, Ketan Patel
Jan 2024, 122p. Warns of the polycrisis, “a cascade of successive global disruptions diverting leaders’ attention and resources away from longer term systemic priorities.” Shows that 19 core technologies, now existing, can enable necessary transitions and advance the SDGs.
