Calibration and Expert Testing

Wojciech Olszewski*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



I survey and discuss the recent literature on testing experts or probabilistic forecasts, which I would describe as a literature on "strategic hypothesis testing." The starting point of this literature is some surprising results of the following type: suppose that a criterion for judging probabilistic forecasts (which I will call a test) has the property that if data are generated by a probabilistic model, then forecasts generated by that model pass the test. It then turns out that an agent who knows only the test by which she is going to be judged, but knows nothing about the data-generating process, is able to pass the test by generating forecasts strategically. The literature identifies a large number of tests that are vulnerable to strategic manipulation by uninformed forecasters, but also delivers some tests that cannot be passed without knowledge of the data-generating process. It also provides some results on the philosophy of science and financial markets that are related to, and inspired by, the results on testing experts.
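The canonical test in this literature is calibration: among the periods in which the forecaster announced probability p, the empirical frequency of the event should be close to p. The sketch below is a hypothetical illustration (the function `calibration_score` and its bucketing rule are my own, not from the chapter) of how such a score might be computed for binary outcomes:

```python
# Hypothetical sketch of a calibration score for binary forecasts.
# Forecasts are bucketed to one decimal place; the score is the
# frequency-weighted average gap between each announced probability
# and the empirical frequency of the event in that bucket.
from collections import defaultdict

def calibration_score(forecasts, outcomes):
    """Weighted average |empirical frequency - announced probability|."""
    bins = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        bins[round(p, 1)].append(y)  # group periods by forecast value
    n = len(forecasts)
    return sum(len(ys) / n * abs(sum(ys) / len(ys) - p)
               for p, ys in bins.items())

# A forecaster who always announces the long-run frequency is
# perfectly calibrated, even without knowing anything else about
# the data-generating process.
outcomes = [1, 0] * 50               # empirical frequency 0.5
forecasts = [0.5] * 100
print(calibration_score(forecasts, outcomes))  # 0.0
```

As the example hints, a well-calibrated record need not reflect knowledge of the data-generating process, which is exactly the manipulability problem the surveyed literature formalizes.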

Original language: English (US)
Pages (from-to): 949-984
Number of pages: 36
Journal: Handbook of Game Theory with Economic Applications
Issue number: 1
State: Published - Jan 1 2015


Keywords

  • Calibration and other tests
  • Probabilistic models
  • Strategic forecasters

ASJC Scopus subject areas

  • Statistics and Probability
  • Economics and Econometrics
  • Statistics, Probability and Uncertainty
  • Applied Mathematics


