Understanding Feature-based Auditory-Visual Interactions

Project: Research project

Project Details

Description

DESCRIPTION (provided by applicant): Research has revealed much about the mechanisms of the visual system. However, perceptual experience is usually multimodal, with close relationships between the visual and auditory modalities. Auditory signals influence neural activation throughout the visual pathways, including the midbrain and primary visual cortex. It is therefore important to extend rigorous theories of vision to multimodal contexts. Prior research on auditory-visual interactions has primarily focused on the perception of space, timing, duration, motion, and speech, whereas recent research has also demonstrated auditory-visual interactions in the perception of objects and faces. The goal of the proposed research is to fill the gap in our understanding of auditory-visual interactions at the level of visual feature processing. We will characterize which acoustic patterns uniquely interact with the processing of low-level (e.g., spatial frequency), intermediate-level (e.g., material texture and 2D shape), and high-level (e.g., common objects, words, face identity, and facial expressions) visual features. To understand these interactions, we will combine psychophysics and computational modeling (AIM 1) to determine how associated sounds influence basic mechanisms of visual feature processing, including those that control image visibility (front-end signal-to-noise ratio and sampling efficiency), those that control signal competition for visual awareness, and those that control the strength and reliability of neural population coding of visual features in the presence of between- and within-receptive-field signal interactions. The results will provide an integrative understanding of how sounds influence visual signals, sampling, competition, and coding for the processing of low-, intermediate-, and high-level visual features. The proposed research will also allow the development of cross-modal methods for assisting visual perception by enhancing specific spatial scales, materials, shapes, objects, and facial expressions. For example, our preliminary results suggest that sounds can be used to boost and tune the perception of facial expressions and to direct attention to specific spatial frequencies. In the translational aim (AIM 2), we will systematically investigate how sounds can be used to aid visual perception: for example, to direct attention to an object, material, word, or facial expression during search; to facilitate object recognition by directing attention to diagnostic spatial-frequency components; and to enrich scene understanding by directing attention to multiple spatial scales. Because feature-specific auditory signals are readily presented over headphones, the proposed research may provide a means to counter biased perception (e.g., perceiving facial expressions as negative due to social anxiety) and to direct attention to specific objects and spatial scales (e.g., details versus gist) for individuals with visual challenges such as low vision, stroke-related vision loss, or attention disorders. Thus, the proposed research will not only systematically integrate auditory influences into current models of visual feature processing, but may also provide a means to aid visual processing using auditory signals.

PUBLIC HEALTH RELEVANCE: Visual signals are often accompanied by related auditory signals, and understanding auditory influences on visual processes is therefore important for understanding how the visual system works in realistic contexts.
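The abstract's mention of mechanisms controlling image visibility (front-end signal-to-noise ratio and sampling efficiency) points to the equivalent-input-noise framework commonly used in visual psychophysics. The sketch below illustrates that general framework only; the parameter values and the modeled efficiency boost from a sound cue are illustrative assumptions, not methods or results from this project.

```python
# Minimal sketch of the equivalent-input-noise (linear amplifier) framework,
# under assumed parameter values. The "sound cue" benefit modeled here as an
# efficiency gain is a hypothetical illustration, not a project finding.

import numpy as np

def threshold_energy(N_ext, N_eq, eta):
    """Contrast energy at threshold: E_t = (N_ext + N_eq) / eta.

    N_ext : external (display) noise spectral density
    N_eq  : observer's equivalent input noise
    eta   : sampling (calculation) efficiency, 0 < eta <= 1
    """
    return (N_ext + N_eq) / eta

# External noise levels spanning the low-noise to high-noise regime
N_ext = np.logspace(-6, -3, 8)

# Baseline (visual-only) observer vs. a hypothetical sound-cued observer
# whose sampling efficiency is boosted for the cued spatial-frequency band.
baseline = threshold_energy(N_ext, N_eq=1e-5, eta=0.10)
cued = threshold_energy(N_ext, N_eq=1e-5, eta=0.15)

for n, b, c in zip(N_ext, baseline, cued):
    print(f"N_ext={n:.1e}  baseline E_t={b:.2e}  cued E_t={c:.2e}")
```

In this formulation, a cross-modal gain in sampling efficiency lowers thresholds at every external-noise level, whereas a reduction in equivalent internal noise would help mainly when external noise is low, which is one way such models can dissociate where a sound-induced benefit arises.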
Status: Finished
Effective start/end date: 9/1/11 – 8/31/15

Funding

  • National Eye Institute (1R01EY021184-01A1)
