TY - JOUR
T1 - Theory selection and evaluation in case series research
AU - Goldrick, Matthew
N1 - Funding Information:
Thanks to Simon Fischer-Baum and Brenda Rapp for insightful discussion of these issues and helpful comments on the manuscript. This research was supported by National Science Foundation (NSF) Grant BCS0846147. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the NSF.
PY - 2011/10
Y1 - 2011/10
N2 - Using empirical data to develop theories requires not only evaluating how well a theory accounts for data; it requires using the data to select the best theory from among a set of alternatives. Current case series research is examined in light of these two issues. Theory selection requires that theories make contrasting predictions. In the first section of this commentary, I present novel simulation results showing that existing theories of language production do not make contrasting predictions for the overall distribution of responses over a set of response categories (e.g., correct response, semantic error, etc.; Dell, Schwartz, Martin, Saffran, & Gagnon, 1997). Given such results, in order to be theoretically productive, case series research must focus on those aspects of data that serve to contrast theoretical alternatives. The second section considers evaluation of claims regarding individual differences. Such claims are typically underconstrained. Two approaches to addressing this issue are discussed. First, I argue that case series research should provide independent evidence for hypothesized individual differences. Second, parametric approaches might provide a means of constraining theories of individual differences. The plausibility of this approach is examined through novel analyses of empirical distributions of individual differences in impairments to lexical access (Schwartz, Dell, Martin, Gahl, & Sobel, 2006).
AB - Using empirical data to develop theories requires not only evaluating how well a theory accounts for data; it requires using the data to select the best theory from among a set of alternatives. Current case series research is examined in light of these two issues. Theory selection requires that theories make contrasting predictions. In the first section of this commentary, I present novel simulation results showing that existing theories of language production do not make contrasting predictions for the overall distribution of responses over a set of response categories (e.g., correct response, semantic error, etc.; Dell, Schwartz, Martin, Saffran, & Gagnon, 1997). Given such results, in order to be theoretically productive, case series research must focus on those aspects of data that serve to contrast theoretical alternatives. The second section considers evaluation of claims regarding individual differences. Such claims are typically underconstrained. Two approaches to addressing this issue are discussed. First, I argue that case series research should provide independent evidence for hypothesized individual differences. Second, parametric approaches might provide a means of constraining theories of individual differences. The plausibility of this approach is examined through novel analyses of empirical distributions of individual differences in impairments to lexical access (Schwartz, Dell, Martin, Gahl, & Sobel, 2006).
KW - Case series
KW - Computational modeling
KW - Interactive two-step model
UR - http://www.scopus.com/inward/record.url?scp=84863540160&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84863540160&partnerID=8YFLogxK
U2 - 10.1080/02643294.2012.675319
DO - 10.1080/02643294.2012.675319
M3 - Article
C2 - 22746687
AN - SCOPUS:84863540160
SN - 0264-3294
VL - 28
SP - 451
EP - 465
JO - Cognitive Neuropsychology
JF - Cognitive Neuropsychology
IS - 7
ER -