
User Engagement

The goal of this project is to develop guidelines for virtual coaches on providing reminders and notifications in ways that best support users. Once we have accurate models of human behavior, we can begin to decide when to intervene with assistance and how best to design those interventions.

Currently, most information display techniques assume that the display will be the primary focus of attention. Our approach develops novel information displays that do not necessarily require full attention, taking into account user preferences and cognitive load. These displays convey information through peripheral attention to reduce cognitive load, and move from the periphery of attention to its center as attention budgets allow.
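
To make this idea concrete, the sketch below shows one way such an adaptive policy could be expressed in code. It is only an illustration under stated assumptions: the class names, thresholds, and the simple "attention budget = 1 − cognitive load" rule are hypothetical and are not part of the project's actual software.

```python
# Illustrative sketch: a hypothetical policy that escalates a reminder from
# peripheral to central salience only when the estimated attention budget
# and the reminder's urgency justify it. All names and thresholds are
# assumptions made for this example.
from dataclasses import dataclass
from enum import Enum


class Salience(Enum):
    PERIPHERAL = "peripheral"      # ambient cue at the edge of attention
    INTERMEDIATE = "intermediate"
    CENTRAL = "central"            # full-attention alert


@dataclass
class UserState:
    cognitive_load: float          # 0.0 (idle) .. 1.0 (fully loaded), from a situation model
    prefers_audio: bool            # user preference: auditory over visual cues
    urgency: float                 # 0.0 .. 1.0, importance of the pending reminder


def choose_notification(state: UserState) -> tuple[Salience, str]:
    """Pick a salience level and modality for a reminder.

    The remaining attention budget is treated as 1 - cognitive_load; the
    notification moves toward the center of attention only when both the
    budget and the urgency are high enough.
    """
    budget = 1.0 - state.cognitive_load
    modality = "audio" if state.prefers_audio else "visual"

    if state.urgency > 0.8 and budget > 0.5:
        return Salience.CENTRAL, modality
    if state.urgency > 0.4 or budget > 0.7:
        return Salience.INTERMEDIATE, modality
    return Salience.PERIPHERAL, modality


if __name__ == "__main__":
    # High load and low urgency keep the cue at the periphery.
    print(choose_notification(UserState(cognitive_load=0.8, prefers_audio=True, urgency=0.3)))
```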

To understand how to create useful visual, auditory, and haptic displays that can be adapted to varying levels of attentional demand, we are using a two-tiered strategy:

  1. Designing to support what we know about how people process information
  2. Designing with the goal of reducing cognitive load to free up as much working memory as possible

To do this, we are combining principles from psychology with those of visual, sound, haptic, and interaction design. From the fields of cognitive, sensory, and perceptual psychology, we can understand the attentional demands of visual and auditory stimuli, the ability to move perceived information into working memory, and the attentional capture effects of visual, auditory, and haptic stimuli. From the design fields, knowledge in the form of principles aggregated from many examples over time is based on the implicit judgments of designers about what works in a particular situation.

These displays also take into account user preferences (for example, preferring auditory to visual notifications) and cognitive load at any given time, reserving full-salience notifications for the times when a situation model suggests that they are needed and likely to be well received. We will perform a systematic comparison of speech, visual, and combined speech-and-visual notifications under divided attention. Intervention and acknowledgement designs will, in turn, be evaluated for how they modify user engagement. These studies will be performed on a set of examples that systematically covers a design space of visual, auditory, visual/auditory, haptic, and visual/haptic information. We will use eye-tracking equipment available in our lab to record detailed fixation patterns, which we will analyze for measures of fixation, attention direction, and head movement. A second set of studies will use a dual-task framework to further understand the direction and saturation of attention.
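
As a small illustration of how that design space could be covered systematically, the sketch below crosses the display conditions named above with single- and dual-task settings. The structure and variable names are assumptions for illustration only, not the project's actual study protocol.

```python
# Illustrative sketch: enumerate study cells by crossing display conditions
# with attention conditions. The condition labels follow the text above; the
# code itself is a hypothetical example of counterbalancing setup.
import itertools

DISPLAY_CONDITIONS = ["visual", "auditory", "visual/auditory", "haptic", "visual/haptic"]
ATTENTION_CONDITIONS = ["single-task", "dual-task"]

# Every display condition is paired with every attention condition,
# giving the full set of cells a participant could be assigned to.
for display, attention in itertools.product(DISPLAY_CONDITIONS, ATTENTION_CONDITIONS):
    print(f"{display:16s} under {attention}")
```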

Project Team

  • Dan Siewiorek, Lead
  • Todd Bear
  • Anind Dey
  • Jodi Forlizzi
  • Brian French
  • Ian Li
  • Thomas Kamarck
  • Asim Smailagic