David Touretzky wants teachers and students to understand artificial intelligence. Whether or not they do, it’s already a part of their lives.
A research professor in Carnegie Mellon University’s School of Computer Science, Touretzky is the founder and chair of AI4K12, an initiative to develop guidelines and resources for teaching AI to students in kindergarten through grade 12. He's also the principal investigator on a project funded by the National Science Foundation called AI4GA that is developing an AI elective course for Georgia middle school students.
In the following Q&A, Touretzky explained why AI is such a hot topic, why learning about it is crucial for students and how it’s frequently misunderstood. This interview has been edited and condensed.
Q: Why should we teach kids about AI?
A: AI has been called the new electricity. It's powering this fourth industrial revolution. It permeates everybody's lives. Your phone, for example, is full of AI-powered applications. It's important that kids understand AI because they're growing up with it already. It isn't just in their future; it's part of their lives right now. By the time they go to school they’ve spent two years talking to Alexa.
Q: How much do teachers know about AI?
A: When I talk to teachers the first thing they ask me is, "What is AI?" Most adults simply cannot define it. I hear a lot of people point to science fiction characters like the Terminator or Cmdr. Data from Star Trek.
Q: So, what is AI?
A: AI is a branch of computer science concerned with getting computers to do things that, when people do them, are considered evidence of intelligence.
Q: Any other vocabulary words everyone should know?
A: Some people mistakenly think that AI and machine learning are the same thing, or they think that machine learning has replaced AI. Machine learning is a branch of AI that is concerned with getting computers to learn to perform tasks based on data they are given. You may also hear about artificial neural networks. Neural networks are an area of machine learning used in things like large language models.
Q: What is a large language model?
A: Large language models are artificial neural network models that have been trained on a huge amount of text, typically something like all of Wikipedia, plus several hundred thousand books, plus millions of newspaper articles — more material than any one human being could read in a lifetime. ChatGPT is the best-known example. They’re trained on this material and that gives them a kind of encyclopedic knowledge. They're not human, but they have a kind of nonhuman understanding of the material, so that they're able to answer questions about many topics and exhibit novel reasoning.
Khan Academy is making innovative use of that technology. They have a tool called Khanmigo, which is based on a large language model. It will do things like check your programming assignments. If your program is incorrect, instead of just telling you what the bug is, it will guide you to discover the bug for yourself and then help you reason about how you might fix it.
Q: How do you get young kids to understand these concepts?
A: AI4K12 has identified five big ideas in artificial intelligence. Number one is perception. The first thing we want younger students to know about perception is how living things perceive the world. At that age you are just learning the names of the five senses. How do we see? Well, we use our eyes for that. How do we hear? We use our ears for that. Then you can ask the student about machines. How does the machine hear? It uses a microphone. How does a machine see? It uses a camera. We have a progression chart to teach how computers perceive the world in increasing detail all the way through grade 12.
Q: Why has this blown up in the past few months?
A: When you make these large language models big enough, they develop a kind of reasoning behavior that the smaller, simpler models never showed. This was a surprise. People in the industry knew about this, but the public didn't until OpenAI released ChatGPT.
Q: Have you seen any interesting ways to use AI with K-12?
A: You can ask ChatGPT for a quiz on anything and it will do it — I just used it to quiz myself on state capitals of Nigeria (I got them all wrong). Another interesting thing people have been doing is having the large language models play the role of either a historical character or a character from literature. It can also be helpful to explain complicated concepts in simplified language. For example, one of my hobbies is studying physics. Currently, I'm studying general relativity. The math for that is pretty complicated, and there are these things called Christoffel symbols which are part of the tensor calculus for general relativity. I asked ChatGPT to explain Christoffel symbols to me like I was five and it did a really good job! I think teaching kids how to find these creative uses is part of our job as educators.
Q: What if kids use AI to cheat on their homework?
A: That's a good question. I think it's getting increasingly difficult to tell things written by ChatGPT from things written by humans. There are some telltale signs, especially for younger kids. If you don't carefully prompt it to write like an 8-year-old, it’s going to write like an adult. But I think that's the wrong thing to worry about.
I was at a workshop for educators a couple of months ago and someone told a story that resonated with me. She was talking about two different schools in the New York area. One of them was a low-resource public school for kids who generally were from poor families, and their big concern was "how do we stop kids from using ChatGPT?" They wanted to block it, stop it. But another school, a high-resource private school, was asking, "how do we teach our kids prompt engineering?" or "how do you construct prompts for ChatGPT?" Rather than trying to ban it, they were trying to give their kids early exposure to build skills. That's the question you want every educator to be asking: "How do I teach kids to take advantage of this new technology?"
Q: What surprised you about your work?
A: I'm a college professor, so I'm not used to teaching young kids. One of the things we had to learn in the AI4GA project is how to present material in a way that young kids would engage with. We taught them how self-driving cars work and gave them a project of finding routes between major cities in Georgia using breadth-first search, an AI technique typically taught in undergraduate-level AI courses. We came up with a way of teaching this to middle school students where they used colored markers to map a route from one city to another in a special way. The kids loved it and did learn how to do breadth-first search. But here's where my ego gets deflated a little bit — they liked it because they got to use colored markers. They were happy because they got to color stuff, and I was happy because we could give them new problems and they would solve them correctly.
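The route-finding activity described above maps directly onto the standard breadth-first search algorithm. Here is a minimal sketch in Python; the road connections between Georgia cities are illustrative assumptions, not the actual AI4GA course materials:

```python
from collections import deque

# Hypothetical road connections between Georgia cities (illustrative only).
ROADS = {
    "Atlanta": ["Macon", "Athens", "Columbus"],
    "Macon": ["Atlanta", "Savannah", "Columbus"],
    "Athens": ["Atlanta", "Augusta"],
    "Columbus": ["Atlanta", "Macon"],
    "Augusta": ["Athens", "Savannah"],
    "Savannah": ["Macon", "Augusta"],
}

def bfs_route(start, goal):
    """Return a route with the fewest hops from start to goal, or None."""
    queue = deque([[start]])  # each queue entry is a partial route
    visited = {start}
    while queue:
        route = queue.popleft()
        city = route[-1]
        if city == goal:
            return route
        for neighbor in ROADS.get(city, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(route + [neighbor])
    return None

print(bfs_route("Athens", "Savannah"))  # → ['Athens', 'Augusta', 'Savannah']
```

Because the queue explores all one-hop routes before any two-hop route, the first route that reaches the goal is guaranteed to use the fewest road segments — the same property the marker-coloring activity demonstrates on paper.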
Q: What is AI education going to look like in five or 10 years?
A: Kids are going to have robots that they interact with as part of their schoolwork. Now, if you want to know about robots, you have to watch YouTube videos. The only robots in our homes are Roombas, and all they want to do is vacuum the floor. Having a robot that you can actually talk to and will talk back to you, so you can see how it reasons and represents the world, is tremendously powerful. There are companies working on new robots for kids that I hope will be announced soon.
Q: Where is a good place for teachers to start if they want to teach their students about AI?
A: We know that teachers need a lot of assistance to run some of these activities, and we recently got funding from NEOM to create activity guides with step-by-step instructions that will allow teachers to feel comfortable running these activities in the classroom. They can find these guides and many other helpful resources at AI4K12.org.