Lesson plan: Slopaganda, institutional shitposting, and AI images as political communication

It started with an image of Trump as a king mocked up on a fake Time magazine cover. Since then it’s developed into a full-blown phenomenon, one academics are calling “slopaganda” – an unholy alliance of easily available AI tools and political messaging. “Shitposting”, the publishing of deliberately crude, offensive content online to provoke a reaction, has reached the level of “institutional shitposting”, according to Know Your Meme’s editor Don Caldwell. This is trolling as official government communication. And nobody is more skilled at it than the Trump administration – a government that has not only allowed the AI industry all the regulatory freedom it desires, but has embraced the technology for its own in-house purposes. Here are 10 of the most significant fake images the White House has put out so far. The Guardian, 29.1.2026


This is not “just internet culture.” It’s about how public attention is captured, how trust is eroded, and how political identity can be strengthened through spectacle, ambiguity, and outrage cycles.


Learning goals

Students will be able to:

  • Explain and apply: slop / slopaganda, deepfake, meme politics, propaganda, mis/disinformation, and shitposting.
  • Analyze AI images as rhetoric (audience, purpose, framing, emotional triggers, and implied claims).
  • Compare how journalism and official accounts build credibility (or undermine it).
  • Produce a short, evidence-based critique of one example from the article using a structured analysis protocol.

Key terms

Slopaganda

A blend of AI “slop” (cheap, high-volume synthetic content) and propaganda (content designed to manipulate beliefs for political ends). The article attributes the term to academics who describe how even debunked visuals can linger cognitively.

Deepfake / AI-altered image

Synthetic or manipulated media that can appear authentic. The risk increases when content is posted by high-authority accounts because it borrows institutional credibility.

Shitposting (classroom-safe explanation)

A vulgar internet term for deliberately crude, provocative, often low-effort posting meant to trigger reactions, derail discussion, and spread through outrage or irony. In the article, it’s described as “trolling as official government communication” and connected to the phrase “institutional shitposting,” credited to a Know Your Meme editor (Don Caldwell).

Effects worth teaching explicitly (because students often underestimate them):

  • Attention capture: emotional bait outperforms nuance; discussion time gets stolen from policy substance.
  • Plausible deniability: “it’s just a joke” becomes armor against accountability.
  • Normalization of cruelty/stereotypes: repeated “ironic” content can launder harmful ideas into mainstream discourse.
  • Erosion of shared reality: if official sources post manipulated visuals, people become cynical—either believing fakes or doubting everything (“liar’s dividend”).

Watch the video linked here, then answer the questions below.

  • What happened that led to this news story?
    Describe (in 1–2 sentences) what the White House posted and why people reacted.
  • What is the key difference between the “original” image and the version shared by the White House?
    Be specific about what appears to have been changed and what impression that change creates.
  • Who is the person in the image, and what was the context of the arrest described in the video?
    (Where/why did the protest or incident happen, according to the report?)
  • How does the video explain or support the claim that the image was altered?
    List one piece of evidence mentioned (for example: comparison to another posted version, verification process, or expert comment).
  • Why does this matter beyond one photo?
    According to the video, what are the risks or consequences of an official account posting altered or AI-manipulated images (think: public trust, misinformation, “it’s just a meme” defense, and how people interpret future evidence)?

The 10 examples

Students get one of the following (from the article’s headings):

  1. Trump as king (fake magazine cover)
  2. Studio Ghibli meme of a woman being deported
  3. Trump as Pope
  4. Trump as Jedi
  5. Hakeem Jeffries as “a Mexican” / sombreros and tacos (with Chuck Schumer)
  6. “Welcome to the Golden Age”
  7. “Which Way, Greenland Man?”
  8. “Stand with ICE” propaganda poster
  9. The arrest image of Nekima Levy Armstrong (AI-altered “realistic” photo)
  10. The “Nihilistic Penguin” meme

Answer these questions

  • What is shown? (describe in neutral language)
  • Who is the target audience?
  • What emotion is it trying to trigger? (choose 1–2: pride, anger, disgust, fear, mockery, triumph, humiliation, nostalgia)
  • What claim does it imply—without arguing it directly?
  • What technique is it using? (choose: hero-worship fantasy; wishcasting; ridicule; stereotyping; dogwhistle; intimidation; “just a joke” shield; nostalgia aesthetic; deepfake credibility borrowing)
  • Why does it matter? (2–3 sentences)

Read this article and answer the questions below.

Click on the picture!

Answer these questions

  1. What question does the author ask ChatGPT at the beginning of the experiment, and what kind of response does it generate (format + structure)?
  2. The author describes noticing the response’s “shape” (how it looks on the page). What does the author say about the structure (for example: bullets, categories, “headline” phrases, and links)?
  3. What does ChatGPT provide alongside each news item, and why does that matter for credibility? (Think: links, sourcing, the feeling of “proof.”)
  4. What seems appealing about getting news this way? List two advantages the author experiences or implies (speed, convenience, organization, reduced effort, etc.).
  5. What concerns or problems does the author run into? Identify two risks of using ChatGPT as a news “front page” (accuracy, missing context, overconfidence, uneven sourcing, etc.).
  6. The article highlights a tension between reading the news and reading about the news. What is the difference, and how might ChatGPT blur it?
  7. How does the author’s experience change your understanding of what “news consumption” means?
    Is it closer to: receiving reported facts,
    receiving a curated summary,
    or receiving a simulation of knowing?
  8. The author’s workflow implies that ChatGPT can shape attention (what you notice first). What is one way the tool might influence what users think is important—even if the underlying sources are varied?
  9. What kinds of stories or perspectives might be systematically under-served by AI “digest” news (local reporting, slow investigations, marginalized voices, complex policy stories)? Choose one and explain why, based on the article’s description of the output.
  10. Evaluation question (media literacy): After reading the article, do you think using ChatGPT for news is best described as:
  • A helpful entry point,
  • A risky replacement, or
  • A tool that requires strict verification habits?
    Give a two-part justification: (1) one benefit, (2) one risk.

Three essay questions

  1. Institutional shitposting and democracy:
    Is “institutional shitposting” compatible with a healthy democracy? Use at least three examples from the ten images and evaluate the impact on trust, accountability, and public debate.
  2. AI imagery as propaganda technology:
    How does generative AI change propaganda—not just by faking reality, but by changing speed, volume, style, and emotional targeting? Use the Guardian article and one additional source (AP/CBS/Verge/Wired).
  3. Media literacy as civic protection:
    Design a “citizen response” toolkit for AI political imagery: what should individuals, platforms, and governments do differently? Discuss trade-offs (free expression vs harm; satire vs deception; regulation vs censorship). Ground your argument in at least two sources.

I would love to hear from you!