TY - GEN
T1 - The theory is predictive, but is it complete? An application to human perception of randomness
AU - Kleinberg, Jon
AU - Liang, Annie
AU - Mullainathan, Sendhil
N1 - Publisher Copyright:
© 2017 ACM.
Copyright:
Copyright 2017 Elsevier B.V., All rights reserved.
PY - 2017/6/20
Y1 - 2017/6/20
N2 - When we test theories, it is common to focus on what one might call predictiveness: how well do the theory's predictions match what we see in data? Evidence that a theory is predictive, however, provides little guidance towards whether there may exist alternative theories that are more predictive, and how much more predictive they might be. These questions point toward a second issue, distinct from predictiveness, which we call completeness: how close is the performance of a given theory to the best performance that is achievable in the domain? Completeness is an important construct because it lets us ask how much room there is for improving the predictive performance of existing theories in any given domain. We would expect the best possible prediction performance to differ considerably across domains; for example, an accuracy of 55% is a stunning success for predicting stock movements based on past returns, but extremely weak for predicting movements of a planet based on physical measurements. This contrast reflects that variation in stock movements conditioned on the features we know is large, while planetary motions are well predicted by known features. To understand how much we can improve on the predictive performance of existing theories, we need to separate prediction error due to intrinsic noise (emerging from limitations of the feature set) from prediction error that reveals opportunities for a better model.
AB - When we test theories, it is common to focus on what one might call predictiveness: how well do the theory's predictions match what we see in data? Evidence that a theory is predictive, however, provides little guidance towards whether there may exist alternative theories that are more predictive, and how much more predictive they might be. These questions point toward a second issue, distinct from predictiveness, which we call completeness: how close is the performance of a given theory to the best performance that is achievable in the domain? Completeness is an important construct because it lets us ask how much room there is for improving the predictive performance of existing theories in any given domain. We would expect the best possible prediction performance to differ considerably across domains; for example, an accuracy of 55% is a stunning success for predicting stock movements based on past returns, but extremely weak for predicting movements of a planet based on physical measurements. This contrast reflects that variation in stock movements conditioned on the features we know is large, while planetary motions are well predicted by known features. To understand how much we can improve on the predictive performance of existing theories, we need to separate prediction error due to intrinsic noise (emerging from limitations of the feature set) from prediction error that reveals opportunities for a better model.
KW - Prediction
KW - Randomness perception
KW - Theory completeness
UR - http://www.scopus.com/inward/record.url?scp=85025804231&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85025804231&partnerID=8YFLogxK
U2 - 10.1145/3033274.3084094
DO - 10.1145/3033274.3084094
M3 - Conference contribution
AN - SCOPUS:85025804231
T3 - EC 2017 - Proceedings of the 2017 ACM Conference on Economics and Computation
SP - 125
EP - 126
BT - EC 2017 - Proceedings of the 2017 ACM Conference on Economics and Computation
PB - Association for Computing Machinery, Inc
T2 - 18th ACM Conference on Economics and Computation, EC 2017
Y2 - 26 June 2017 through 30 June 2017
ER -