Tuesday, November 13, 2012
NOVA Profiles CMU Research

The Nov. 14 episode of PBS/NOVA scienceNOW, "What Will the Future Look Like," featured two Carnegie Mellon research projects. The episode is available online at www.pbs.org/wgbh/nova/sciencenow/.
Reporter Challenge to Just, Mitchell Leads to Mind Reading 2.0

Carnegie Mellon professors Marcel Just and Tom Mitchell shocked the world in 2008 with their thought-identification research, using a brain scanner to demonstrate, for the first time, the ability to read people's minds.
They did this by applying machine-learning techniques to brain imaging data to identify an individual's thoughts of concrete objects. A demonstration of how the process works - from one computer displaying a labeled picture of a concrete object (e.g. hammer) to a person in the MRI scanner, to a separate computer analyzing the resulting brain image to determine which object was shown - was featured on 60 Minutes.
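The general approach described here can be illustrated with a simple sketch. The model, labels, and data below are entirely hypothetical stand-ins (the real work used fMRI voxel activations and more sophisticated machine-learning models); this nearest-centroid toy only shows the shape of the pipeline: learn a characteristic brain-activity pattern per labeled object, then match a new scan against those learned patterns.

```python
# Hypothetical sketch of the kind of pipeline described: train on
# brain-activity patterns recorded while labeled objects are shown,
# then predict which object evoked a new, unseen pattern.
# All data here is synthetic noise, not real fMRI data.
import numpy as np

rng = np.random.default_rng(0)
objects = ["hammer", "house"]  # hypothetical stimulus labels

# Simulate each object evoking a characteristic pattern over 50 "voxels",
# observed several times with measurement noise.
prototypes = {obj: rng.normal(size=50) for obj in objects}

def observe(obj):
    """One noisy 'scan' of the pattern evoked by thinking about obj."""
    return prototypes[obj] + rng.normal(scale=0.5, size=50)

# Training: average several observed patterns per object (nearest-centroid).
centroids = {obj: np.mean([observe(obj) for _ in range(8)], axis=0)
             for obj in objects}

def identify(pattern):
    """Return the object whose learned centroid best matches the pattern
    (cosine similarity)."""
    return max(objects,
               key=lambda o: np.dot(pattern, centroids[o])
                             / (np.linalg.norm(pattern)
                                * np.linalg.norm(centroids[o])))

# Evaluate on fresh, unseen "scans".
correct = sum(identify(observe(obj)) == obj
              for obj in objects for _ in range(10))
print(correct, "of 20 correct")
```

In the actual experiments, the classifier must generalize across scans it was never trained on, which is what made the televised demonstration described below a real test rather than a lookup.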
Since the initial discovery, Just and Mitchell have been working to identify other types of thoughts besides concrete objects - from numbers to emotions to social interactions - all involving a visual stimulus (usually a printed word) for the person inside the brain scanner to think about.
That was the case until recently.
In August, a PBS/NOVA scienceNOW media crew visited CMU's Scientific Imaging and Brain Research (SIBR) Center. Host David Pogue, who served as the research subject, was shown 10 pairs of objects two times while in the fMRI scanner. With the cameras rolling, the brain analysis computer then correctly identified each object that Pogue had been thinking about.
Pogue was impressed but unsatisfied. As an amateur magician, he said he found it disquieting that the computer that presented the words "knew" what he would be thinking about. He was concerned that the presentation computer and the brain analysis computer could have been in cahoots with each other.
He challenged the research team to find some way to cut the presentation computer out of the loop, so that only he, Pogue, knew what he had been thinking about at any given time.
The team accepted the challenge.
"We had never tried anything like this before in the lab, let alone on national TV," said Just, the D.O. Hebb Professor of Psychology in the Dietrich College and director of CMU's Center for Cognitive Brain Imaging. "But the theory said it should be possible."
Mitchell, the E. Fredkin University Professor of Machine Learning in the School of Computer Science, added, "I always tell my students not to mess with a computer demonstration that works."
The team constructed a new experiment on the fly, and Pogue went back into the scanner. This time, nothing was presented on the screen. Instead, he could choose between two objects: a skyscraper or a strawberry. Over a series of 10 trials, he thought about one of the two words and then wrote it down.
When the scan was complete, Just, Mitchell, Pogue and the video crew anxiously waited to see whether the brain analysis computer could correctly identify Pogue's thoughts, even though it had been given no indication of which word he chose to think about, and had never seen any person's fMRI data for the two words "strawberry" and "skyscraper."
With the cameras rolling, the analysis computer issued the correct identification all 10 times, a 100 percent match. A demo for a science documentary had become a genuine scientific experiment producing a fascinating new finding.
"We were all nervous about trying this experiment for the first time with the cameras rolling," Mitchell said. "But, we had seen before that our computer model had successfully predicted neural representations for new words, and that these neural representations are remarkably similar across different people."
Just added, "It's exciting to know that it is possible to identify internally generated thoughts, and something we will follow up on more formally in our future studies."
Tom Mitchell (left), David Pogue and Marcel Just pushed science forward while filming a segment for PBS/NOVA scienceNOW.
By: Shilo Rea, firstname.lastname@example.org