Carnegie Mellon University




We are collaborating with professors in Electrical Engineering at Carnegie Mellon with the aim of improving human spatial navigation using sound. Echoes provide important acoustic information about the environment that is extremely effective for the navigation of certain animals (e.g., bats and dolphins). Because echoes are complex, humans do not normally use echolocation; however, echo information is in fact utilized by some blind people. We are harnessing technology to make echo information accessible to many more blind people by helping them learn to use echoes. Our approach is to offer a free smartphone game that gives people experience with navigating through a virtual maze by using echoes. In the Auditory Lab we are focusing on characterizing human sensitivity to echo information and on how these findings can inform the design and improvement of training programs and devices.

Collaborators: Prof. Pulkit Grover and Prof. Bruno Sinopoli, ECE, CMU.

Funding: Google, CMU undergraduate research training award.

If you are interested in participating, check out our app.

Android Users:
Email The Auditory Lab at CMU to receive the Android app by email. Email:

iPhone Users:
Email The Auditory Lab at CMU for details. You will need to provide a valid Apple ID to receive an invitation to install the app on your phone. Email:



We are exploring a new form of auditory-motor priming. Motor priming exists if an action is performed more rapidly after the presentation of facilitating cues than after the presentation of interfering cues. We hypothesized that environmental sounds could be used as cues to create motor priming. To create facilitation, we devised a congruent priming sound similar to the sound that would be made by the gesture about to be performed. To create interference, we devised an incongruent sound that would not normally be made by that gesture. Using this paradigm, we found evidence of auditory-motor priming between environmental sounds and simple gestures, and the effect held over a range of conditions.
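The operational definition above can be illustrated with a minimal sketch: the priming effect is the mean reaction time after incongruent sound cues minus the mean reaction time after congruent cues, with a positive difference indicating facilitation. The reaction times below are hypothetical, not data from our studies.

```python
from statistics import mean

# Hypothetical reaction times (ms) for one participant.
congruent_rts = [412, 398, 405, 421, 390]    # gesture preceded by a matching sound
incongruent_rts = [455, 440, 462, 448, 451]  # gesture preceded by a mismatching sound

# Priming effect: slowdown caused by incongruent cues relative to congruent ones.
priming_effect = mean(incongruent_rts) - mean(congruent_rts)
print(f"Priming effect: {priming_effect:.1f} ms")  # positive -> facilitation by congruent sounds
```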



We are investigating the cognitive neuroscience of the auditory system's ability to identify the causes of sounds. The experimental question we are addressing is which neural networks are preferentially activated when subjects shift the focus of their attention toward different aspects of the source of sounds.



Current studies are investigating the cognitive parameters that affect the integration of auditory and visual events. For example, sometimes visual and auditory stimuli are simultaneous even though they don’t arise from the same event: how do we figure this out? Conversely, sometimes the sights and sounds do belong together even though they are not strictly simultaneous: how do we know to glue them together across time, and what are the limits?
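One simple way to frame the "glue them together across time" question is a temporal binding window: an auditory and a visual event are treated as one event when their onset asynchrony falls inside a tolerated range. The sketch below uses illustrative window bounds, not measured values from our studies.

```python
# Hypothetical window bounds: audition is often tolerated leading vision by
# less time than vision leading audition, so the window is asymmetric.
AUDIO_LEADS_LIMIT_MS = -50   # sound may lead vision by up to 50 ms (assumed)
VISION_LEADS_LIMIT_MS = 200  # vision may lead sound by up to 200 ms (assumed)

def perceived_as_one_event(audio_onset_ms: float, visual_onset_ms: float) -> bool:
    """Return True if the audiovisual onset asynchrony falls within the window."""
    asynchrony = audio_onset_ms - visual_onset_ms  # positive: audio lags vision
    return AUDIO_LEADS_LIMIT_MS <= asynchrony <= VISION_LEADS_LIMIT_MS

print(perceived_as_one_event(100.0, 0.0))   # audio lags by 100 ms -> bound together
print(perceived_as_one_event(0.0, 300.0))   # audio leads by 300 ms -> not bound
```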