Guide for Teachers: Discussing Deepfakes and Platform Shifts With Students
A classroom-ready lesson plan using the X deepfake controversy and Bluesky’s surge to teach digital literacy, ethics, and platform migration.
Hook: Why this matters to your classroom right now
Teachers: your students are navigating an online world where a single viral post can mislead thousands in minutes. The late-2025 X deepfake controversy — where users asked an integrated AI chatbot to generate non-consensual sexualized images — and the immediate surge in downloads for platforms like Bluesky expose two connected challenges: (1) AI-powered media can harm real people and evade casual detection, and (2) rapid platform migration changes where and how students consume news and form communities. This guide provides a ready-to-teach lesson plan, discussion prompts, student activities, and assessment ideas for building digital literacy and media ethics skills around this exact case study.
Key takeaways for busy teachers
- Immediate focus: Teach students how to spot deepfakes, ask ethical questions, and practice respectful digital behavior.
- Case study value: Use the X/Grok controversy and Bluesky’s post-incident surge (nearly 50% jump in iOS installs by some estimates) to discuss platform trust and migration dynamics.
- Practical activities: A 2–3 day lesson plan with hands-on verification labs, role-play debates, and a policy-writing assessment.
- Safety & consent: Prioritize trauma-informed approaches; avoid showing exploitative content and model consent-focused language.
Context (2025–2026) teachers should know
Late 2025 and early 2026 brought growing regulatory and public scrutiny of AI on social media. California's attorney general opened an investigation into xAI's chatbot Grok after reports it generated non-consensual images from user prompts — a flashpoint that pushed users to explore alternatives like Bluesky. Industry data reported a sharp increase in Bluesky downloads after the controversy as users searched for platforms with different moderation policies and decentralized architectures. These developments underline two trends for 2026: heightened AI governance and continued interest in platform migration as users weigh safety, community norms, and features.
Lesson Plan Overview (2–3 class periods)
Grade levels
Grades 9–12 (adaptable for middle school with scaffolded tasks)
Duration
Two 50-minute classes plus an optional third period for presentations or deeper research.
Learning objectives
- Explain what a deepfake is and why it’s ethically problematic.
- Evaluate media credibility and apply at least three verification techniques.
- Analyze platform features, moderation policies, and the social effects of platform migration.
- Compose a short policy recommendation or classroom code of conduct addressing AI-generated content and consent.
Materials
- Short, teacher-curated packet with news excerpts (safe, non-explicit) about the X/Grok incident and Bluesky’s surge — provide links rather than images.
- Fact-checking checklist handout.
- Access to devices for verification labs (one per pair/team).
- Optional: A sandboxed AI image generator (set to safe content) to demonstrate model behavior — follow guidance on how to harden desktop AI agents and always run sandboxed examples.
Standards alignment
Aligns with digital literacy and media literacy standards (ISTE, Common Core RI/WHST for research and argumentation). Emphasize ethics and civic responsibility.
Lesson plan: Step-by-step
Class 1 — Introducing the case and verification skills (50 minutes)
- Hook (5 min): Read a short, non-graphic excerpt about the X incident and Bluesky download surge. Ask: "Why would users leave one platform for another after a safety scandal?"
- Mini-lecture (10 min): Define deepfakes, explain non-consensual image generation, and summarize the 2025 X/Grok controversy and subsequent market reaction (Bluesky installs spike). Cite sources like TechCrunch and Appfigures for context.
- Show verification toolkit (10 min): Teach three quick checks — reverse image search, metadata and source tracing, and cross-referencing reputable outlets. Demonstrate using a neutral example image (do not use sexualized or exploitative images); a minimal metadata-check sketch follows this outline.
- Paired activity (20 min): Give each pair a social post (teacher-vetted). Students apply the checklist, note evidence levels, and rate credibility (High/Medium/Low). Teacher circulates and prompts deeper thinking.
- Wrap-up (5 min): Quick share: which check was most revealing? Assign homework: read one short article about platform migration trends (link provided).
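To make the metadata check concrete for the demo above, here is a minimal classroom sketch in Python using the Pillow library. The filename is a placeholder, not a file from the case study; run it only on neutral, teacher-vetted images, and remind students that missing metadata is a discussion point, not proof of manipulation.

```python
# Classroom demo: read basic EXIF metadata from a local image file.
# Requires the Pillow library (pip install Pillow). The file path below
# is a placeholder; substitute any teacher-vetted, non-sensitive image.
from PIL import Image
from PIL.ExifTags import TAGS

def print_exif(path: str) -> None:
    """Print human-readable EXIF tags, or note their absence."""
    with Image.open(path) as img:
        exif = img.getexif()
        if not exif:
            # Many AI-generated or re-uploaded images carry no EXIF at all;
            # that absence is itself a signal worth discussing, not proof.
            print(f"{path}: no EXIF metadata found")
            return
        for tag_id, value in exif.items():
            tag_name = TAGS.get(tag_id, str(tag_id))
            print(f"{tag_name}: {value}")

if __name__ == "__main__":
    print_exif("sample_classroom_image.jpg")  # placeholder filename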
Class 2 — Ethics, platform migration, and policy writing (50 minutes)
- Recap (5 min): One student summarizes yesterday’s verification takeaway.
- Discussion (15 min): Use structured prompts (below) to debate platform responsibility, moderation vs. free speech, and user migration choices. Use a fishbowl format for equitable participation.
- Role-play (15 min): Groups take roles — platform engineers, content moderators, a harmed user, a regulator, and a user deciding whether to migrate. Give them a policy decision: allow generative AI features with opt-in safety filters, or restrict them. Each group prepares a 2-minute statement.
- Policy task (10 min): Individually draft a 150-word classroom policy on using AI-generated media in class and social channels, focusing on consent and verification.
- Exit ticket (5 min): Students submit one concrete personal action they will take to avoid sharing unverified AI content.
Optional Class 3 — Presentations & assessment
Students present short investigative reports or policy memos. Use the rubric below for assessment.
Classroom discussion prompts (for Socratic-style or debate)
- What responsibilities do platforms have when their AI tools can generate harmful content? Who should enforce those responsibilities?
- When users migrate to a new platform (e.g., Bluesky after a scandal), what social dynamics follow? How can migrations shift norms and moderation?
- Is banning certain AI features censorship or protection? Where do we draw the line between safety and expression?
- How should schools handle student-created AI content? Should it be allowed, labeled, or banned? Why?
Teacher note: Emphasize consent, privacy, and the real-world harm of non-consensual imagery. Do not show exploitative images. Frame conversations around ethics and responsibility.
Student activities & assessments
Activity 1 — Verification Lab (graded)
Pairs receive an online post (image + caption). They must produce a 1-page verification report: steps taken, evidence, confidence level, and recommended action (share, flag, ignore). Use a simple rubric (a short weighted-scoring sketch follows the list):
- Evidence & methods (40%) — used 3+ verification techniques correctly
- Analysis & reasoning (30%) — clear explanation of credibility
- Recommendation & ethics (20%) — considers consent/harm
- Presentation & sources (10%) — cites verifiable sources
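If you grade in a spreadsheet or script, a tiny helper keeps the weighting consistent. In this sketch, only the 40/30/20/10 weights come from the rubric above; the 0–4 points-per-category scale and the category keys are assumptions for illustration.

```python
# Minimal weighted-rubric calculator for the Verification Lab.
# Weights mirror the rubric above; the 0-4 per-category point scale
# is an assumption for illustration, not part of the rubric itself.
WEIGHTS = {
    "evidence_methods": 0.40,
    "analysis_reasoning": 0.30,
    "recommendation_ethics": 0.20,
    "presentation_sources": 0.10,
}

def weighted_score(points: dict[str, float], max_points: float = 4.0) -> float:
    """Return a 0-100 score from per-category points on a 0..max_points scale."""
    return 100 * sum(WEIGHTS[cat] * (points[cat] / max_points) for cat in WEIGHTS)

# Example: strong evidence and ethics, thinner sourcing -> prints 87.5
print(weighted_score({
    "evidence_methods": 4,
    "analysis_reasoning": 3,
    "recommendation_ethics": 4,
    "presentation_sources": 2,
}))
```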
Activity 2 — Platform Migration Simulation
Small groups design a migration plan from a hypothetical platform after a safety scandal. They must decide: what features to preserve, which moderation policies to change, and how to communicate with users. Presentations should include ethical safeguards and community enforcement mechanisms.
Activity 3 — Policy Memo (individual)
Students write a 300–500 word memo to a school principal outlining recommended rules for student use of generative AI and deepfake tools on school networks. Expect evidence-based recommendations and references to recent trends (2025–2026).
Advanced strategies & teacher resources (for 2026 classrooms)
As AI tools and platform ecosystems evolve in 2026, teachers will need advanced techniques:
- Cross-platform verification: Check whether content appears on reputable outlets, archived pages, or established aggregators. Platforms often replicate viral posts; tracing the earliest appearance helps determine origin. Use tools and playbooks like the edge-first verification playbook as a classroom resource (a small archive-lookup sketch follows this list).
- Metadata & provenance tools: Teach basic EXIF checks and the concept of cryptographic provenance where available. Encourage use of reputable tools like InVID, FotoForensics, and browser-based reverse image search.
- AI-detection as a signal, not proof: AI-detection models can misclassify; pair their results with human-led provenance checks and context evaluation.
- Sourcing platform policy pages: Compare moderation policies and recent feature rollouts (e.g., Bluesky’s cashtags and LIVE badges) to assess platform priorities. Feature additions can indicate strategic directions and user base targets.
- Civic literacy modules: Add units about platform economics and incentives — why migrations happen and how network effects shape discourse.
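For the cross-platform tracing step above, one scriptable first check is whether a link already has a capture in the Internet Archive. This sketch queries the public Wayback Machine availability API with the requests library; note that the API returns the closest capture, not necessarily the earliest one, so treat the result as a single signal alongside the other checks.

```python
# Look up an archived snapshot of a URL on the Wayback Machine: a quick,
# scriptable aid when tracing where and when a post first appeared.
# Uses the public availability API and requests (pip install requests).
import requests

def closest_snapshot(url: str) -> str | None:
    """Return the URL of the closest archived snapshot, or None if none exists."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url},
        timeout=10,
    )
    resp.raise_for_status()
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap["url"]
    return None

if __name__ == "__main__":
    # Placeholder URL; substitute a teacher-vetted link from the lesson packet.
    print(closest_snapshot("https://example.com"))
```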
Handling sensitive topics and safeguarding students
When the case study includes sexualized or non-consensual content, follow trauma-informed practices: warn students ahead of time, avoid graphic examples, offer opt-out alternatives, and provide support resources. Coordinate with school counselors and administrators if discussions evoke distress. Emphasize that the goal is ethical reasoning and citizenship, not sensationalism.
Teacher script snippets (ready-to-use)
Explaining deepfakes briefly
"A deepfake is media—usually video or images—generated or altered by AI to look real. Sometimes they're harmless or satirical; sometimes they're designed to mislead or harm. Recently, people used an AI chatbot to make sexualized images without consent. We’re going to learn how to identify and respond to that harm."
Moderating discussion
"We will not view explicit images. We will discuss the case using news extracts and ethical reasoning. If anything feels uncomfortable, raise your hand or use the private chat to tell me. We’ll pause and provide support if needed."
Real-world example & data points
Ground the lesson in current reporting: in early January 2026, public coverage highlighted that X’s integrated chatbot assisted in creating non-consensual images, prompting an investigation by the California attorney general. Market data from Appfigures reported Bluesky’s daily iOS downloads rising nearly 50% in the days after the story broke — a real example of how safety incidents accelerate platform migration. (Sources: TechCrunch coverage of the Grok/X situation and Appfigures install data.)
Rubrics & grading templates
Provide an easy rubric for the verification lab and final memo. Example categories: Methods (30%), Critical Thinking (30%), Ethical Reasoning (20%), Communication & Sources (20%). Use checkboxes for teachers to give quick formative feedback.
Extensions for advanced classes
- Computer Science: Have students build a simple classifier (with pre-provided datasets) that flags manipulated images — emphasize limitations and bias. Pair the technical work with readings on detection tooling and its limits to discuss ethics and responsible distribution. A starter sketch follows this list.
- Government/Civics: Simulate regulatory hearings where students testify as activists, company reps, and regulators about platform liability.
- Journalism: Report a verification story tracing a viral post’s origin and publish a classroom explainer with source links and a methodology section. Consider producing a co-op podcast episode; see tips from launching a co-op podcast.
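For the computer science extension, here is a minimal scikit-learn skeleton of a "manipulated vs. real" classifier. The feature matrix and labels are synthetic placeholders (assumptions so the pipeline runs end to end); students would swap in features extracted from the pre-provided dataset. On random data, accuracy hovers near chance, which is itself a teachable moment about classifier limitations.

```python
# Skeleton for a simple "manipulated vs. real" image classifier.
# Requires numpy and scikit-learn (pip install numpy scikit-learn).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Placeholder features: 200 "images" x 6 hand-crafted statistics
# (e.g., per-channel mean and standard deviation). Labels are random here;
# in class, both come from the teacher-provided, pre-labeled dataset.
X = rng.normal(size=(200, 6))
y = rng.integers(0, 2, size=200)  # 0 = real, 1 = manipulated (synthetic!)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Expect near-50% accuracy on random data: a useful prompt for discussing
# why detectors fail and how dataset bias creeps in.
print(classification_report(y_test, clf.predict(X_test)))
```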
Future predictions & trends (2026–2028)
Looking ahead, expect these trends to shape classroom conversations in 2026 and beyond:
- Regulatory tightening: More government actions and transparency requirements for AI tools on social platforms.
- Platform interoperability: Momentum for shared identity and content portability could change migration dynamics (users may move but keep identity across networks).
- Safety-by-design features: Platforms will invest in human-in-the-loop moderation, provenance metadata standards, and consent-first defaults to regain user trust.
- Media literacy tools embedded in apps: Expect platforms to offer in-app verification prompts, labels for AI-generated media, and community moderation features.
Quick classroom checklist for the week you teach this unit
- Curate safe, non-explicit source excerpts on the incident and platform data (link to reporting rather than images).
- Create verification packets and rubrics ahead of time.
- Notify caregivers about sensitive topic coverage and offer opt-outs.
- Coordinate with counselors and admin for support protocols.
- Prepare alternative assignments for students who opt out.
Actionable takeaways for teachers
- Start with consent: make clear rules about sharing and displaying media; never show exploitative content.
- Teach at least three verification methods and require students to use them as a habit.
- Use current events (like the X/Grok story and Bluesky’s surge) as real-world case studies — they make media literacy urgent and relevant.
- Assess both skills and ethics: verifying facts is technical, but choosing how to act is moral.
Sources & further reading
- Reporting on the X/Grok incident and California AG investigation (TechCrunch coverage, January 2026) — use for classroom context (link provided in your packet).
- Appfigures market data on Bluesky install increases (January 2026) — demonstrates migration metrics and user behavior.
- Verification tools: InVID, FotoForensics, Google Reverse Image Search.
Closing: A call to action for educators
Equip students to be skeptical, ethical, and empowered digital citizens. Use this lesson plan this week: adapt the activities to your class, warn about sensitive content, and center consent. If you want a ready-to-print packet (teacher notes, student handouts, rubrics, and slide templates) tailored to your grade level, sign up for our educator resource pack or request a customizable version for your district.
Get the packet, adapt the lesson, and help your students navigate the next wave of AI-driven media with confidence.
Related Reading
- What Bluesky’s New Features Mean for Live Content SEO and Discoverability
- Edge-First Verification Playbook for Local Communities in 2026
- How to Harden Desktop AI Agents (Cowork & Friends)
- Review: Best Sticker Printers for Classroom Rewards (2026)
- Launching a Co-op Podcast: Lessons and Checklist
- How to Disable Microphones on Bluetooth Headphones and Speakers (No-Sweat Guide)
- From Pop-Ups to Premium Counters: How to Merchandise a Cereal Brand Like a Luxury Product
- When Allegations Make Headlines: How Karachi Venues Should Handle PR Crises
- Electric Bike Gift Guide: Affordable E-Bikes for New Commuters
- How Filoni’s Star Wars Slate Creates Bite-Sized Reaction Video Opportunities