Teaching AI Ethics

Start with Experience, Not Lecture

Let students encounter ethical issues firsthand:

  • Have them use AI tools and discover biases, errors, or concerning outputs themselves
  • Ask them to generate images of “a doctor” or “a CEO” and notice demographic patterns (a tally sketch follows this list)
  • Have them ask AI for homework help and note when it produces unreliable or fabricated information
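
A minimal sketch of how the class can aggregate what students find, assuming each student records a short description of every generated image. The example entries are placeholders, not real model output:

    # Students record what they observe in each generated image; the class
    # tallies the results to see the pattern in aggregate.
    # The observations below are placeholders, not real model output.
    from collections import Counter

    observations = [
        {"prompt": "a doctor", "perceived_gender": "man", "perceived_age": "middle-aged"},
        {"prompt": "a doctor", "perceived_gender": "man", "perceived_age": "older"},
        {"prompt": "a doctor", "perceived_gender": "woman", "perceived_age": "young"},
    ]

    gender_counts = Counter(o["perceived_gender"] for o in observations)
    total = sum(gender_counts.values())

    for label, count in gender_counts.most_common():
        print(f"{label}: {count}/{total} ({count / total:.0%})")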

Core Topics to Cover

1. Bias & Fairness

  • Where does AI training data come from?
  • Whose perspectives are represented (and whose are missing)?
  • Real examples: facial recognition failures, biased hiring algorithms, predictive policing (a toy hiring-data sketch follows this list)
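
One way to make fairness concrete is to compare selection rates across groups. The sketch below uses invented applicant records and a hypothetical two-group split; the metric itself (the gap in selection rates, often called the demographic parity difference) is a standard starting point for discussion:

    # Toy sketch of one common fairness check: compare screening rates by group.
    # The applicant data here is invented for classroom discussion only.
    applicants = [
        {"group": "A", "screened_in": True},
        {"group": "A", "screened_in": True},
        {"group": "A", "screened_in": False},
        {"group": "B", "screened_in": True},
        {"group": "B", "screened_in": False},
        {"group": "B", "screened_in": False},
    ]

    def selection_rate(records, group):
        members = [r for r in records if r["group"] == group]
        return sum(r["screened_in"] for r in members) / len(members)

    rate_a = selection_rate(applicants, "A")
    rate_b = selection_rate(applicants, "B")
    print(f"Group A selected: {rate_a:.0%}, Group B selected: {rate_b:.0%}")
    print(f"Demographic parity gap: {abs(rate_a - rate_b):.0%}")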

2. Transparency & Accountability

  • How do we know why an AI system made a particular decision? (see the sketch after this list)
  • Who’s responsible when AI makes mistakes?
  • Case studies: autonomous vehicles, medical diagnosis tools
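
To ground the transparency question, students can train a model simple enough to inspect and then debate whether its “explanation” is actually satisfying. A sketch, assuming scikit-learn is installed and using invented loan data:

    # Train a tiny, inspectable model on invented loan data, then ask the class:
    # is a list of feature importances a real explanation? Who is accountable
    # for a wrong decision?
    from sklearn.tree import DecisionTreeClassifier

    feature_names = ["income_thousands", "years_employed", "prior_defaults"]
    X = [
        [25, 1, 2],
        [40, 3, 0],
        [60, 5, 0],
        [30, 2, 1],
        [80, 10, 0],
        [20, 0, 3],
    ]
    y = [0, 1, 1, 0, 1, 0]  # 1 = loan approved, 0 = denied (invented labels)

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    print("Prediction for a new applicant:", model.predict([[35, 2, 1]])[0])
    for name, importance in zip(feature_names, model.feature_importances_):
        print(f"{name}: importance {importance:.2f}")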

3. Privacy & Data

  • What happens to the information we give AI tools? (a re-identification sketch follows this list)
  • Who owns AI-generated content?
  • Training data ethics (was consent given?)
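
A short demo of why “we removed the names” is not the same as privacy: records can often be re-identified by joining on quasi-identifiers such as zip code and birth year. All records below are invented:

    # Classic linkage-attack demo with invented data: an "anonymized" dataset
    # is re-identified by matching quasi-identifiers against a public roster.
    anonymized_health = [
        {"zip": "60601", "birth_year": 2007, "condition": "asthma"},
        {"zip": "60615", "birth_year": 2008, "condition": "diabetes"},
    ]
    public_roster = [
        {"name": "Student X", "zip": "60601", "birth_year": 2007},
        {"name": "Student Y", "zip": "60622", "birth_year": 2006},
    ]

    for record in anonymized_health:
        matches = [p["name"] for p in public_roster
                   if p["zip"] == record["zip"] and p["birth_year"] == record["birth_year"]]
        if len(matches) == 1:
            print(f"Re-identified: {matches[0]} -> {record['condition']}")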

4. Impact on Society

  • Job displacement and economic effects
  • Environmental costs of AI, such as energy consumption (a back-of-envelope sketch follows this list)
  • Deepfakes and misinformation
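
For the energy discussion, a back-of-envelope worksheet works well. Every number in this sketch is a placeholder that students should replace with figures they research themselves:

    # Back-of-envelope worksheet: all values below are placeholders for students
    # to replace with figures they find and can cite.
    num_gpus = 1000            # placeholder
    power_per_gpu_kw = 0.5     # placeholder, kilowatts per GPU
    training_hours = 24 * 30   # placeholder, one month of training
    grid_kg_co2_per_kwh = 0.4  # placeholder, varies widely by region

    energy_kwh = num_gpus * power_per_gpu_kw * training_hours
    emissions_kg = energy_kwh * grid_kg_co2_per_kwh

    print(f"Estimated energy: {energy_kwh:,.0f} kWh")
    print(f"Estimated emissions: {emissions_kg:,.0f} kg CO2")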

Engaging Activities

  • Debate scenarios: “Should AI be used to grade essays?” “Should companies use AI to screen job applications?”
  • Design challenges: “Create guidelines for ethical AI use in our school”
  • Red-teaming: Have students try to elicit biased or harmful outputs, then discuss why the model produced them
  • Case study analysis: Examine real AI failures and discuss what went wrong

Make It Personal

Connect the discussion to their own lives: the social media algorithms, recommendation systems, and filters they use every day.
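
A toy “more of what you already clicked” recommender, built on invented viewing history, can spark that discussion about feedback loops and what never gets shown:

    # Naive recommender on invented viewing data: recommend more of whatever
    # topic the user already watches most, and ask what gets left out.
    from collections import Counter

    watch_history = ["gaming", "gaming", "sports", "gaming", "music"]
    catalog = {
        "gaming": ["speedrun highlights", "new console review"],
        "sports": ["match recap", "training tips"],
        "music": ["live concert", "album reaction"],
    }

    favorite_topic, _ = Counter(watch_history).most_common(1)[0]
    print("Recommended next:", catalog[favorite_topic])
    # Discussion: what never gets recommended, and how does that shape a feed over time?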