Listening to speech and non-speech sounds activates phonological and semantic knowledge differently

James Bartolotti, Scott R. Schroeder, Sayuri Hayakawa, Sirada Rochanavibhata, Peiyao Chen, Viorica Marian*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

How does the mind process linguistic and non-linguistic sounds? The current study assessed the different ways that spoken words (e.g., “dog”) and characteristic sounds (e.g., <barking>) provide access to phonological information (e.g., word-form of “dog”) and semantic information (e.g., knowledge that a dog is associated with a leash). Using an eye-tracking paradigm, we found that listening to words prompted rapid phonological activation, which was then followed by semantic access. The opposite pattern emerged for sounds, with early semantic access followed by later retrieval of phonological information. Despite differences in the time courses of conceptual access, both words and sounds elicited robust activation of phonological and semantic knowledge. These findings inform models of auditory processing by revealing the pathways between speech and non-speech input and their corresponding word forms and concepts, which influence the speed, magnitude, and duration of linguistic and nonlinguistic activation.

Original language: English (US)
Pages (from-to): 1135-1149
Number of pages: 15
Journal: Quarterly Journal of Experimental Psychology
Volume: 73
Issue number: 8
State: Published - Aug 1 2020

Keywords

  • eye-tracking
  • phonology
  • psycholinguistics
  • semantic competition
  • sound processing
  • speech comprehension

ASJC Scopus subject areas

  • Physiology
  • Neuropsychology and Physiological Psychology
  • Experimental and Cognitive Psychology
  • Psychology (all)
  • Physiology (medical)
