Abstract
This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English-speaking participants listened to target words while viewing four pictures on a screen: a target (e.g., candle), an onset competitor (e.g., candy), a rhyme competitor (e.g., sandal), and an unrelated distractor (e.g., lemon). Target words were presented in quiet, mixed with broadband noise, or mixed with background speech. Results showed that lexical competition changes throughout the observation window as a function of what is presented in the background. These findings suggest that, rather than being strictly sequential, stream segregation and lexical competition interact during spoken word recognition.
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1151-1160 |
| Number of pages | 10 |
| Journal | Journal of Psycholinguistic Research |
| Volume | 45 |
| Issue number | 5 |
| DOIs | |
| State | Published - Oct 1 2016 |
Funding
The writing of this article was supported by Grant R01-DC005794 from NIH-NIDCD and by the Hugh Knowles Center at Northwestern University. We thank Chun Liang Chan, Masaya Yoshida, Matt Goldrick, Lindsay Valentino, and Vanessa Dopker.
Keywords
- Eye-tracking
- Lexical competition
- Spoken word recognition
- Stream segregation
ASJC Scopus subject areas
- Experimental and Cognitive Psychology
- Language and Linguistics
- General Psychology
- Linguistics and Language