CMU, Disney Research Collaborate To Develop Three New Technologies-Faculty & Staff News - Carnegie Mellon University

Thursday, August 11, 2011

CMU, Disney Research Collaborate To Develop Three New Technologies

Researchers from Carnegie Mellon and Disney Research, Pittsburgh, collaborated on three new technologies that were presented this week at SIGGRAPH 2011, the International Conference on Computer Graphics and Interactive Techniques, in Vancouver.

One new development involves motion capture. While traditional techniques use fixed cameras to record the movements of actors, this new motion capture technology uses outward-facing cameras mounted on the actors themselves. Body-mounted cameras enable the capture of motions, such as running outdoors or swinging on monkey bars, that would be difficult, if not impossible, to record with a stationary camera setup.

The wearable camera system reconstructs the motions of an actor using a process called structure from motion (SfM), developed 20 years ago by CMU's Takeo Kanade, a professor of computer science and robotics and a pioneer in computer vision.

Read more: http://www.cmu.edu/news/stories/archives/2011/august/aug8_motioncapture.html
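The core idea behind SfM is recovering both 3D structure and camera motion from tracked 2D image points. A minimal NumPy sketch of the rank constraint at the heart of the Tomasi-Kanade factorization approach is shown below, using noiseless synthetic data under orthographic projection; the full method also includes a metric-upgrade step that is omitted here, and the point counts and rotations are arbitrary choices for illustration.

```python
import numpy as np

# Under orthographic projection, the measurement matrix of tracked 2D
# points across all frames (after centering) has rank at most 3, which
# is what lets SfM factor it into camera motion and 3D shape.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 20))              # 20 synthetic 3D points

frames = []
for f in range(5):                            # 5 camera viewpoints
    theta = 0.3 * f                           # rotate about the y-axis
    R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                  [0.0,           1.0, 0.0]])  # top 2 rows of a rotation
    frames.append(R @ X)                      # orthographic projection

W = np.vstack(frames)                         # 2F x P measurement matrix
W_c = W - W.mean(axis=1, keepdims=True)       # subtract per-row centroids

U, s, Vt = np.linalg.svd(W_c)
rank = int(np.sum(s > 1e-9))                  # rank 3 for noiseless data

# Factor into camera motion (M) and 3D shape (S), recovered up to an
# affine ambiguity that the full method resolves with a metric upgrade.
M = U[:, :3] * s[:3]
S = Vt[:3]
recon_ok = np.allclose(M @ S, W_c)
```

In practice the tracked points come from video and are noisy, so the rank-3 factorization acts as a least-squares fit rather than an exact decomposition.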

The second development is a new tactile technology called Surround Haptics, which allows video game players and film viewers to feel a wide variety of sensations, from the smoothness of a finger being drawn against skin to the jolt of a collision. In the demonstration at SIGGRAPH, the technology enhanced a high-intensity driving simulator game. Surround Haptics enabled players to feel road imperfections and objects falling on the car, sense skidding, braking and acceleration, and experience ripples of sensation when cars collide or jump and land.

Read more: http://www.cmu.edu/news/stories/archives/2011/august/aug8_disneytactiletech.html
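The article does not detail how Surround Haptics produces sensations at points between its physical actuators, but systems of this kind commonly exploit the classic "phantom sensation" illusion: driving two neighboring actuators at complementary intensities so the user perceives a single vibration between them. The sketch below illustrates the standard energy-summation model for a two-actuator phantom; the function name and values are illustrative assumptions, not the researchers' actual implementation.

```python
import math

def phantom_intensities(beta, a_virtual):
    """Drive two neighboring actuators so the user perceives a single
    'phantom' vibration at fraction beta (0..1) of the way from
    actuator 1 to actuator 2, at perceived intensity a_virtual.

    Energy-summation model: a1**2 + a2**2 == a_virtual**2.
    """
    a1 = math.sqrt(1.0 - beta) * a_virtual
    a2 = math.sqrt(beta) * a_virtual
    return a1, a2

# Sweep the phantom point across the pair to "draw" a moving stroke,
# e.g. a sensation sliding across a player's back during a skid.
stroke = [phantom_intensities(b / 10, 1.0) for b in range(11)]
```

Chaining such sweeps across a grid of actuators is what allows a sparse array to render smooth, continuous strokes rather than isolated buzzes.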

The third development is a set of computerized models that reflect a full range of natural expressions while also giving animators the ability to manipulate facial poses. The researchers created a method that not only translates the motions of actors into a 3D face model, but also subdivides it into facial regions that enable animators to create the poses they need. These models could be used to animate characters for films, video games and exhibits.

Read more: http://www.cmu.edu/news/stories/archives/2011/august/aug9_disney3dfacemodels.html
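The article does not specify the model's internals, but a common representation for pose-editable face models is a blendshape rig, where each region-local shape adds an offset to a neutral mesh and animators dial regions independently. The toy sketch below, with made-up vertices and shape names, illustrates that idea only; it is not the researchers' actual method.

```python
import numpy as np

# Toy face mesh of 4 vertices (real rigs use thousands). Each
# blendshape is a region-local offset from the neutral pose, so the
# brow and mouth can be posed independently.
neutral = np.zeros((4, 3))
brow_raise = np.array([[0, 0.1, 0], [0, 0.1, 0],    # brow vertices move
                       [0, 0.0, 0], [0, 0.0, 0]])   # mouth untouched
mouth_open = np.array([[0, 0.0, 0], [0, 0.0, 0],
                       [0, -0.2, 0], [0, -0.2, 0]])
shapes = np.stack([brow_raise, mouth_open])          # (shapes, verts, 3)

def pose(weights):
    # Linear blendshape model: neutral + weighted sum of shape offsets.
    return neutral + np.tensordot(weights, shapes, axes=1)

# Raise the brow fully while opening the mouth halfway.
face = pose(np.array([1.0, 0.5]))
```

In a capture-driven pipeline, the weights would be solved from tracked actor motion per frame, while an animator can later override individual region weights to adjust a pose.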