A Q&A on QA (Question Answering) Research With Eric Nyberg
Eric Nyberg, a professor in the Language Technologies Institute, builds software applications that can understand and process human language. For the past decade, he has worked on question-answering technology, often in collaboration with colleagues at IBM. Since 2007, he and his CMU colleagues have participated in the Open Advancement of Question Answering, a collaboration with IBM that led to the development of Watson, a question answering computing system that recently defeated human opponents in nationally televised matches of Jeopardy!
What was it like to be at the taping of the Jeopardy! episodes? Were you or your IBM colleagues anxious about how Watson would perform?
We actually arrived (at IBM’s Thomas J. Watson Research Center in Yorktown Heights, NY) before the main crowds did. The Watson project leader, David Ferrucci, took Ph.D. student Nico Schlaefer and me inside to see how the Jeopardy! set had been grafted onto the IBM conference room; it looked like it had been beamed down from an alien planet or Los Angeles or something.
But once everybody sat down, it was a very serious environment with palpable tension in the room. The IBM CEO and top executives were in the audience, and for four-and-a-half hours we were literally on the edges of our seats, wondering what was going to happen. It was very exciting and very tense.
Did Watson meet or exceed your expectations?
Watson met my expectations, both in terms of what it did well and also where it faltered.
If you watch the matches, you’ll see categories where Watson is dominant and where the humans have a lot of difficulty competing, but there are other categories where Watson doesn’t get a single correct answer. That’s a great outcome, because I would call it an accurate and fair representation of Watson’s capabilities.
Where do we go now with this question-answering capability?
I think there are two big areas of future research that need our attention.
One is to learn how to build systems like Watson but with fewer resources in terms of time, money and people. While Watson is a wonderful achievement, to have an impact in the business world, we need to build applications with Watson’s level of performance in new domains like financial forecasting and health care, and do it cost effectively.
The second area has to do with making Watson smarter.
Watson doesn’t grow up in the real world the way that we do, so it doesn’t have a base of common sense knowledge. That’s one of its weaknesses. An interesting question is going to be how Watson can learn to read and build a knowledge base that’s not just factual knowledge, but knowledge about how the world really works. For example, if you asked Watson whether a bathtub can hold a magnum of champagne, it might not be able to answer if there is no literal text in its knowledge base which contains the answer.
What do you say to people who feel threatened by Watson, who fear machines will replace them at work or supplant humans?
Whenever anybody expresses that concern I tell them, ‘Don’t worry, you’re smarter than Watson. Watson thinks that grasshoppers eat kosher.’ Although it’s exciting to contemplate a general machine intelligence that can carry on a dialog, make decisions, etc., Watson is very far from reaching that goal. It’s a very specialized piece of software that performs one narrow task: answering factoid questions posed by humans.
We do have a certain number of people sitting at customer service desks, answering questions about products, and librarians helping to find facts. What do you say to those people?
I think that there may come a day when a question answering system like Watson could automate the help desk; it would be relatively straightforward to let Watson read all the IBM manuals and then answer questions about IBM products on the telephone. Folks at IBM’s Tokyo research lab are thinking about this kind of application already. Question answering systems today can operate without human intervention only in simple cases. I don’t think we’re afraid to let Watson answer a question about an IBM PC or laptop. It might give more than one answer and some might be inaccurate, but it’s probably not going to tell the person something harmful.
In cases where we are deciding whether or not to launch the missiles or whether or not to target somebody with a drone, there would never be blind acceptance of the output of a machine. The machine’s output would inform a human decision.
All the work we have done for the Department of Defense in this area requires that every answer is tied to the original document and the original source it came from, so that analysts can click on any answer and immediately see where it came from, to verify that the machine used the right reasoning.
With human knowledge expanding at geometric rates, at some point do we need systems like Watson to cope with it all?
The real issue is that much of the information being produced is for human eyes. It’s not being produced for machines to read.
A typical program like a database query engine can’t do anything with the World Wide Web, because the information there hasn’t been structured into a relational database.
One of our new projects is focusing on this idea of machine reading: How can we read all of that text and digest it into a form to be used by much simpler programs to look up facts?
For example, I might read through all the websites of all the universities in Western Pennsylvania, and then automatically create a database that would allow me to answer questions about them, compare their tuition rates, etc.
The ability to take unstructured information and automatically turn it into structured information is the underlying capability we need. I think that is definitely going to be important as we see the geometric growth of textual knowledge continue.
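The machine-reading idea described above can be illustrated with a minimal sketch: extract facts from free text into a structured table, then let a much simpler program answer questions over it. The page text, school names, and extraction pattern below are all hypothetical placeholders, not part of any actual Watson or DARPA system.

```python
import re

# Hypothetical unstructured text, as it might appear on university web pages.
pages = [
    "Example University charges tuition of $42,000 per year.",
    "Sample College's annual tuition is $31,500.",
]

# A toy pattern-based "machine reader": pull (school, tuition) facts out of prose.
pattern = re.compile(r"^(.+?)(?:'s annual tuition is| charges tuition of) \$([\d,]+)")

facts = {}
for page in pages:
    match = pattern.search(page)
    if match:
        school = match.group(1)
        tuition = int(match.group(2).replace(",", ""))
        facts[school] = tuition  # structured fact, ready for simple lookup

# Once the information is structured, comparison questions become trivial queries.
cheapest = min(facts, key=facts.get)
print(f"Lowest tuition: {cheapest} (${facts[cheapest]:,})")
```

A real machine-reading system would replace the hand-written pattern with learned extractors and store the facts in a database, but the pipeline shape is the same: unstructured text in, queryable structure out.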
With the attention that Watson has attracted, how will that affect the work you do at Carnegie Mellon going forward?
For us, it’s a nice achievement because it shows that IBM made the right decision in coming to us and establishing a research relationship with us.
It is also a success because our students were able to contribute directly, and I think for me as an educator, that’s the greater satisfaction.
We’re continuing to collaborate with IBM. We were collaborating with them before Watson, and we continue to collaborate with them on the (Defense Advanced Research Project Agency’s) new Machine Reading project. What the public sees in Watson is something we have already moved beyond.
For question answering research in general, public attention is very important. In the past there was confusion about how question answering is different from Google. After this, everyone will know what question answering is and why it goes beyond Google. Question answering has to get the exact answer; it has to pinpoint the answer for you and do it very quickly. A human with Google would not be able to compete on Jeopardy! You wouldn’t be able to sift through the documents quickly enough.
I think Watson is going to revitalize research in question answering. People are going to realize that question answering can be fast enough and good enough to do real world tasks. That’s going to help us as we apply the Watson technology to other areas.
Eric Nyberg, center, shown watching Watson’s victorious performance on Jeopardy!
By: Byron Spice