Max Tegmark, a professor of physics and AI researcher at MIT, said global AI safety regimes need to be agreed upon. Photograph: Horacio Villalobos/Corbis/Getty Images

Lesson Plan: Exploring Existential Risk and AI—Critical Thinking in the Age of Intelligent Systems

“AI companies should be required to calculate the risk of their systems escaping human control, experts have said, amid fears that the technology could pose an existential threat to humanity. The call echoes the safety calculations made before the first nuclear test in 1945, when scientists worked out the odds that the detonation would ignite the atmosphere and destroy the planet – and then went ahead anyway.” The Guardian. 

This lesson encourages students to critically examine the societal, ethical, and scientific challenges posed by artificial intelligence (AI), particularly the prospect of “existential risk.” Students will engage with current journalism, expert commentary, and academic summaries to explore whether AI systems could evolve beyond human oversight—and what safeguards are needed to prevent this outcome.

The lesson fosters critical literacy, cross-disciplinary dialogue, and ethical reasoning—skills essential for navigating the digital age.


Reading and Research: Building Context and Curiosity

Begin with a close reading of the main article:

  • The Guardian: “AI firms urged to calculate existential threat amid fears it could escape human control”
    This article raises pressing concerns about AI safety, drawing a provocative parallel to the Manhattan Project’s pre-nuclear test risk assessments. It introduces the argument that tech companies must undertake—and disclose—calculations about the potential for advanced AI systems to act autonomously and unpredictably.

Encourage students to annotate the article, identifying:

  • Ethical dilemmas
  • Key historical analogies
  • The tone and rhetorical strategies used to evoke urgency or skepticism

Supplemental Readings:
These additional sources will offer a diversity of perspectives and deepen students’ understanding:

  • OpenTools.ai: “AI Firms on High Alert: Call to Evaluate Existential Threats of AI Runaway”
    A more technical and policy-focused overview of how AI safety is becoming a matter of public discourse and political concern.
  • James Lau on LinkedIn (summary of Max Tegmark’s work and the Singapore Consensus on Global AI Safety Research Priorities):
    Lau offers digestible insights into the technical side of AI safety, including Tegmark’s proposed “Compton constant”—the probability that an advanced AI system escapes human control (see the illustrative calculation after this list).
  • AITopics.org:
    A curated source of current AI research summaries and expert commentary that can help students contextualize the media narratives they read.
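To give students a concrete feel for what a risk metric of this kind might involve, here is a minimal worked example (a classroom simplification of our own, not Tegmark’s published method). Suppose each deployment of an AI system carries an independent probability p of escaping human control. The chance of at least one escape across n deployments is then:

    P(at least one escape) = 1 − (1 − p)^n

Even a seemingly small per-deployment risk compounds quickly: with p = 0.001 and n = 1,000 deployments, the cumulative probability is 1 − 0.999^1000 ≈ 0.63. Students can debate whether the independence assumption, or indeed any single number, can honestly capture this kind of risk; that question connects directly to discussion topic 2 below.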

Facilitating Discussion: Encouraging Nuanced Thought

Use Socratic questioning and collaborative dialogue to structure a rich class discussion. Key topics to explore:

  1. Historical Analogies:

    • What can we learn from the Trinity nuclear test’s risk calculus?
    • Are the fears surrounding AI comparable to those nuclear risks, or are they exaggerated?
  2. The “Compton Constant” and AI Risk Metrics:

    • How can we meaningfully quantify the likelihood of AI systems acting outside of human control?
    • What are the limitations of applying mathematical models to existential questions?
  3. The Role of International Agreements:

    • Examine the Singapore Consensus as a blueprint for global AI safety.
    • What might global governance for AI look like, and how feasible is it?
  4. Innovation vs. Regulation:

    • How can we ensure that caution doesn’t become a barrier to innovation?
    • Should governments mandate AI risk assessments before deployment?
  5. Public Perception and the Role of Media:

    • How do media narratives shape our understanding of AI risks?
    • Are we being responsibly informed—or unduly alarmed?

Encourage students to compare how different sources frame the same topic and identify potential biases or omissions. Media literacy should be central to this dialogue.


Essay Assignment: Deepening Ethical and Analytical Reasoning

Ask students to select one of the following essay prompts and craft a well-argued, evidence-based analysis. Essays should integrate course readings, incorporate at least one counterargument, and include real-world or historical examples.

Option 1:

“Should AI Companies Be Legally Required to Calculate and Publicly Disclose the Existential Risks of Their Systems?”

Students should weigh the ethical, technical, and regulatory dimensions of this question. Encourage them to discuss:

  • Historical precedents (e.g., nuclear testing, aviation safety)
  • The feasibility of calculating existential risk
  • Transparency vs. corporate secrecy

Option 2:

“Is the Fear of AI Escaping Human Control Justified—Or a Barrier to Innovation?”

This essay should explore:

  • The spectrum of expert opinion (e.g., from Tegmark to AI skeptics)
  • Regulatory overreach vs. laissez-faire development
  • The role of science fiction, media, and public imagination in fueling or distorting AI fears

I would love to hear from you.