Can your phone write your biography?

We increasingly rely on our phones in our daily lives. Imagine if your phone could also write a diary of your daily activities, and perhaps even take different actions depending on how your day is going. That was the goal of the LifeLogger project, on which a group of Carnegie Mellon Silicon Valley students spent the last 12 weeks.

The team was contracted by Research in Motion (RIM) to spend their summer semester credits on a discovery project: demonstrating techniques for passively collecting data from a smart phone's on-board sensors and other sources, then analyzing it to infer human-level events and context.

The project complements ongoing research into understanding smart phone context, both internally at RIM and at CMU. RIM asked the team to create an Android-based application that would take the data from the diverse sensor package in a Google Nexus S and fuse it with "higher level data" such as the time of day or the user's calendar entries. The idea was that the accumulated knowledge should let the device infer the user's current activities and contexts, such as "performing morning exercise" or "eating dinner at Crazy Buffet".
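To make the idea concrete, an inferred context of this kind might be modeled roughly as follows. This is only an illustrative sketch; the class and field names are assumptions, since the article does not describe the team's actual data model:

```java
import java.util.Date;

// Hypothetical representation of a context inferred by fusing low-level
// sensor readings with higher-level data (time of day, calendar entries).
public class ContextEvent {
    private final String label;      // e.g. "performing morning exercise"
    private final Date start;
    private final Date end;
    private final double confidence; // 0.0-1.0, how sure the inference is

    public ContextEvent(String label, Date start, Date end, double confidence) {
        this.label = label;
        this.start = start;
        this.end = end;
        this.confidence = confidence;
    }

    @Override
    public String toString() {
        // %tR prints the time portion of a Date as HH:MM
        return String.format("%s (%tR-%tR, confidence %.2f)",
                label, start, end, confidence);
    }
}
```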

In this pursuit, the students created a logging framework that gathers and persists all the relevant pieces of information, such as accelerometer data and the user's position, throughout the day. By building the persistence framework from scratch, the team was able to serialize objects in a human-readable form and later de-serialize that text back into objects. The archived information can then be loaded for analysis and context detection.
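A minimal sketch of this style of human-readable persistence might look like the following; the record format and class names are assumptions, not the team's actual framework:

```java
import java.util.Locale;

// One accelerometer sample, written out as a plain-text line that a
// person can read and that the application can parse back later.
public final class AccelReading {
    public final long timestampMillis;
    public final float x, y, z;

    public AccelReading(long timestampMillis, float x, float y, float z) {
        this.timestampMillis = timestampMillis;
        this.x = x;
        this.y = y;
        this.z = z;
    }

    /** Serialize to a readable line, e.g. "ACCEL 1312345678901 0.120 9.760 0.030". */
    public String serialize() {
        return String.format(Locale.US, "ACCEL %d %.3f %.3f %.3f",
                timestampMillis, x, y, z);
    }

    /** De-serialize a line produced by serialize() back into an object. */
    public static AccelReading deserialize(String line) {
        String[] parts = line.trim().split("\\s+");
        if (parts.length != 5 || !parts[0].equals("ACCEL")) {
            throw new IllegalArgumentException("Not an accelerometer record: " + line);
        }
        return new AccelReading(Long.parseLong(parts[1]),
                Float.parseFloat(parts[2]),
                Float.parseFloat(parts[3]),
                Float.parseFloat(parts[4]));
    }
}
```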

The students decided to focus on detecting whether the user of the device is currently asleep. To accomplish this, a custom algorithm designed specifically for this purpose fuses analysis of the user's movements with other factors such as the luminance in the room and the time of day. To further challenge the students, the application was required to run continuously for 12 hours on a single charge, so the data-gathering process had to be optimized to minimize power usage. The team made the sensor sampling periods and durations adjustable, allowing the application to conserve power by lowering sampling rates. For example, once the application detects that the user is asleep, a highly accurate location is no longer needed, so GPS sampling can be slowed down substantially, reducing the application's power consumption.
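An illustrative sketch of this kind of fusion and adaptive sampling is shown below. The specific weights, thresholds, and sampling intervals are assumptions, as the article does not publish the team's actual algorithm:

```java
// Combine recent movement, ambient light, and time of day into a sleep
// score, then stretch the GPS sampling interval while the user sleeps.
public final class SleepDetector {
    private static final double SLEEP_THRESHOLD = 0.7;

    /**
     * @param movementMagnitude mean accelerometer magnitude over a recent
     *                          window (m/s^2, gravity removed)
     * @param luxLevel          ambient light in lux
     * @param hourOfDay         0-23, local time
     * @return estimated probability that the user is asleep
     */
    public static double sleepScore(double movementMagnitude,
                                    double luxLevel, int hourOfDay) {
        double stillness = movementMagnitude < 0.2 ? 1.0 : 0.0;       // barely moving
        double darkness  = luxLevel < 10.0 ? 1.0 : 0.0;               // dark room
        double nightTime = (hourOfDay >= 22 || hourOfDay < 7) ? 1.0 : 0.0;
        // Simple weighted vote; a real fusion algorithm would be more nuanced.
        return 0.5 * stillness + 0.25 * darkness + 0.25 * nightTime;
    }

    /** Pick a GPS sampling interval: sample far less often while asleep. */
    public static long gpsIntervalMillis(double sleepScore) {
        return sleepScore >= SLEEP_THRESHOLD
                ? 30 * 60 * 1000   // every 30 minutes while asleep
                : 60 * 1000;       // every minute while awake
    }
}
```

The same pattern extends to the other sensors: the lower the need for fresh data in the current context, the longer each sensor can stay idle between samples.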

The resulting Android application can detect sleep with high confidence while intelligently sampling the sensors. The team's work was presented to an audience of peers at the CMU campus in Mountain View as well as to the Advanced Research division at RIM's offices in Redwood City. Both the clients at RIM and the CMU researchers working on related topics have praised the team's work and feel it lays a strong foundation for further investigation into context detection using sensor fusion.