Carnegie Mellon University

The Piper

CMU Community News

May 02, 2011

Q&A: John R. Anderson on How His Work Has Evolved

By Shilo Raube

John R. Anderson has re-defined the field of cognitive psychology by taking a theory of how we think and using it to improve the learning process for students across the globe.

A pioneer in his field, Anderson recently joined the ranks of Franklin Institute laureates such as Albert Einstein, Thomas Edison, the late CMU professor Allen Newell, known as a father of artificial intelligence, and CMU Professor Takeo Kanade, receiving the Benjamin Franklin Medal in Computer and Cognitive Science.

The Piper recently caught up with Anderson to talk about his career, his mentors and the future.

Why did you choose to study psychology?

It’s what I was interested in. It’s what makes humans special. I started out with an interest in literature, thinking I was going to be a writer. But, I guess along the way as an undergraduate, I became convinced that science was the way to understand that particular question, so I chose cognitive psychology.

What was your reaction when you learned that you were going to receive the Benjamin Franklin award?

I was obviously very pleased. I was aware of some of the people who had won that particular award, including Allen Newell, Stuart Card and Takeo Kanade, so I knew I was in very special company.

What is the Adaptive Control of Thought (ACT) theory?

We first developed a version of it in the ’70s. It was an attempt to develop a computer simulation of what we call the architecture of cognition, a system for modeling how the various components of the human intellectual apparatus come together and produce coherent thought. It is basically a theory of the overall organization of human thought and cognition.

How did you start going in that direction?

In my graduate work I had worked on a fairly ambitious theory of human memory, trying to account for a lot of the experimental literature. This was work I did with my graduate adviser Gordon Bower. I guess having left that work, I felt a little unsatisfied because it was a theory of what humans knew but not a theory of how they acted on what they knew. I spent my first few post-Ph.D. years knocking around trying various ideas. I finally hit upon some of the work Allen Newell was doing at the time on production systems, and it seemed to provide the bridging link between what we call “declarative knowledge,” which was what my theory of memory was concerned with, and essentially how it gets acted on, which we’ve come to call “procedural knowledge.”
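To make the declarative/procedural distinction concrete, here is a toy sketch of a production system, assuming a deliberately simplified form: declarative knowledge as a set of facts in working memory, and procedural knowledge as if-then rules that fire when their conditions match. This is an illustration of the general idea only, not ACT-R’s actual implementation; all fact and rule names are hypothetical.

```python
# Toy production system: declarative facts in working memory,
# procedural knowledge as if-then production rules.
# Illustrative sketch only, not ACT-R itself.

def run(facts, rules, max_cycles=10):
    """Repeatedly fire the first rule whose conditions all hold,
    adding its conclusion to working memory, until quiescence."""
    facts = set(facts)
    for _ in range(max_cycles):
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # the rule "fires"
                break
        else:
            break  # no rule can fire: stop
    return facts

# Declarative knowledge: facts the system "knows" (hypothetical names).
facts = {"goal: solve x + 3 = 7", "equation has form x + a = b"}

# Procedural knowledge: rules that act on those facts.
rules = [
    ({"equation has form x + a = b"}, "subtract a from both sides"),
    ({"subtract a from both sides"}, "x = b - a"),
]

result = run(facts, rules)
print("x = b - a" in result)  # True
```

The point of the sketch is the bridge the interview describes: the facts alone do nothing until a production rule matches them and acts, turning knowledge into behavior.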

How were you able to take the ACT theory and apply it to real-world situations?

We’ve applied them to a good number of tasks. I guess a number of them could be called real-world tasks. An important event in this whole process took place in the early ’90s, when we produced the current version of the ACT theory, which is called ACT-R. It was the first widely available computer simulation of the ACT theory.

A fair community of researchers developed around that particular theory. A lot of what has been done to apply ACT to the real world involves things I have had very little to do with. Most of my own research has been concerned with modeling how students learn mathematical skills, more specifically focused on algebra. There are models of how people drive cars and do all sorts of other tasks that other people have developed.

Modeling in ACT-R can proceed in two ways. The less satisfying but more common way is to do a task analysis: look at what is involved in that task and ask yourself how the ACT-R architecture could deal with those particular demands. Then you essentially build by hand a model of how you think people do that particular task, and then do experiments to verify and tune the model. We are getting more interested in having these models in essence build themselves, modeling how people learn through instruction, example and experience. That has probably been the big new push within the ACT-R community for the last five to 10 years: to have cognitive models built up by learning rather than being programmed in by the modeler.

How did you decide to focus on math and algebra?

Part of it grew out of the work we did on intelligent tutoring systems at the end of the ’80s. That work actually started as an attempt to break the version of the ACT theory current at that point in time. We had this theory that modeled how people saw a problem and, to a certain degree, how they learned to solve problems. It seemed to do a fine enough job as a descriptive model, but it seemed incredible that it could be an accurate description of what was actually going on in a human head. So we thought if we built instructional systems around what it said, surely they wouldn’t work, and by seeing how they didn’t work we would learn how to improve the theory. We were surprised that it actually worked. We looked at a number of topics, but mainly we were looking at topics in high school math. So, I got very familiar with the domain of high school mathematics and all the issues about American competitiveness in mathematics.

Who are some of your mentors?

My graduate adviser was Gordon Bower. He’s a researcher on human memory at Stanford University. He certainly had a large influence on my early development. I also learned a lot from Herb Simon and Allen Newell when I came to Carnegie Mellon in 1978. They had a big influence on me.

Newell also won the Benjamin Franklin Medal in Computer and Cognitive Science. How did his work affect your work?

He was the person who formulated the idea of using production systems in cognitive models. Production systems were formal ideas developed in logic that extend back to the ’40s. They were really a little bit obscure, and nobody before Newell had the insight that they could actually provide a characterization of the way human cognition proceeds. I got the ideas about production systems before I came to Carnegie Mellon. Perhaps the major reason I came to Carnegie Mellon was that that’s where work on production systems was being done.

When I came to Carnegie Mellon, Newell was working very much on this idea of a cognitive architecture, which is a larger concept of the overarching structure of the cognitive system. That was the other major influence on me. It did a lot to help the ACT theory grow to the current point that it’s at.

Where do you see the future of your work?

Neuroscience is such a major driver now in research. I think it has a lot to offer people.

The challenge is to describe how intellectual behavior can go forward, and go forward in a human way. Since the beginning of artificial intelligence we have understood how intellectual things could happen, but the image that came out of artificial intelligence was very un-humanlike. So understanding how human intelligence is anchored in the brain is critical. I think that direction is very promising.

On the other hand, there are real limitations to what we can understand given current techniques in neuroscience like neuroimaging, and there is a real temptation to study just the things that those techniques can shed light on. To some degree, that illuminates relatively basic aspects of human cognition, which are not things that are uniquely human.

So, I think the interesting issue is whether we can achieve the twin goals of developing a conception that is really up to the power of the human intellect while at the same time understanding how it is anchored in the brain. I see that as the challenge going forward: to achieve both rather than achieving just one.