Abstract
Across three experiments, we compared the ability of amateur musicians and nonmusicians to learn artificial auditory and visual categories that can be described as either rule-based (RB) or information-integration (II) category structures. RB categories are optimally learned using a reflective reasoning process, whereas II categories are optimally learned by integrating information from two stimulus dimensions at a reflexive, predecisional processing stage. We found that musicians have selective advantages in learning auditory RB categories, specifically when they are instructed about the dimensions that define the categories. In Experiment 1, musicians enrolled in a music college demonstrated advantages over nonmusicians in learning auditory RB categories defined on frequency and duration dimensions but showed no differences in learning auditory II categories or either visual RB or II categories. In Experiment 2, a broader online sample of musicians who were not instructed about the dimensions demonstrated no advantage in auditory or visual learning. In Experiment 3, an online sample of musicians given dimension instructions demonstrated early advantages over nonmusicians for auditory RB but not visual RB categories. Musicians thus do not demonstrate a global categorization advantage. Their category learning advantage is limited to their modality of expertise, is enhanced by dimension instructions, and is specific to categories that can be described with verbalizable rules.
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 739-748 |
| Number of pages | 10 |
| Journal | Journal of Experimental Psychology: General |
| Volume | 151 |
| Issue number | 3 |
| DOIs | |
| State | Published - Aug 2 2021 |
Funding
This research was supported by National Institute on Deafness and Other Communication Disorders grants R01DC013315 and R01DC015504-01A1 to Bharath Chandrasekaran and F32DC018979 to Casey L. Roark. We thank Devon Greer, Kathryn Curry, Mia Liu, Seth Koslov, and other members of the Maddox Lab for their help with data collection, and Han-Gyol Yi, Jasmine Phelps, Britt Rachner, and other members of the SoundBrain Lab for their help with stimulus generation. The ideas and data in the article, specifically those from Experiment 1, were previously presented at the Psychonomic Society annual conference in 2015. This work used the Extreme Science and Engineering Discovery Environment (XSEDE; Towns et al., 2014), which is supported by NSF grant ACI-1548562. Specifically, it used the Bridges system (Nystrom et al., 2015), supported by NSF grant ACI-1445606, at the Pittsburgh Supercomputing Center (PSC). Stimuli and data can be found at https://osf.io/cp932/.
Keywords
- Audition
- Category learning
- Music experience
- Vision
ASJC Scopus subject areas
- Experimental and Cognitive Psychology
- General Psychology
- Developmental Neuroscience