Bio

A spoken syllable may persist in the world for a mere tenth of a second. Yet, as adult listeners, we are able to gather a great deal of information from these fleeting acoustic signals. We may apprehend the speaker's physical location, gender, regional dialect, age, emotional state, and identity, as well as the linguistic message itself. The ease of everyday conversation belies the complexity involved.
Research in my lab focuses on the cognitive processes that underlie this feat, using speech processing as a platform for investigating learning, plasticity, categorization, cross-modal processing, object recognition, memory, attention, and development. Among our current projects, we are investigating the learning involved in acquiring the sounds of a second language and how representations of the native language interact with this learning; how listeners "tune" their auditory perception to the statistical regularities of the sound environment; and how higher-level knowledge may influence early auditory object recognition and speech categorization. Our primary approach is to study human adult (and sometimes child) participants using perception and learning tasks. In addition, we use EEG and fMRI to address the neural bases of auditory processing.