Abstract
This keynote address provides a foundation for incorporating decision-theoretic methods that integrate uncertainty into decision-making for patient care. The article links the medical economics literature, health care analysis, and quantitative methods to help improve patient outcomes in the health care system. (JEL I18, I14, I10).
Original language | English (US)
---|---
Pages (from-to) | 227-245
Number of pages | 19
Journal | Contemporary Economic Policy
Volume | 38
Issue number | 2
DOIs | 10.1111/coep.12452
State | Published - Apr 1 2020
ASJC Scopus subject areas
- Business, Management and Accounting (all)
- Economics and Econometrics
- Public Administration
In: Contemporary Economic Policy, Vol. 38, No. 2, 01.04.2020, p. 227-245.
Research output: Contribution to journal › Article › peer-review
TY - JOUR
T1 - TOWARDS REASONABLE PATIENT CARE UNDER UNCERTAINTY
AU - Manski, Charles F.
N1 - Bounds from the data alone:
18.1 ≤ mean life years for (age 50, NH black male, not HBP) ≤ 35.4
14.3 ≤ mean life years for (age 50, NH black male, HBP) ≤ 38.5
23.8 ≤ mean life years for (age 50, NH white male, not HBP) ≤ 36.4
15.6 ≤ mean life years for (age 50, NH white male, HBP) ≤ 42.0
Combining the CDC and NHANES data with the assumptions that (1) persons with HBP have lower life expectancy than those without HBP, and (2) black males have between 0 and 2.5 years lower life expectancy than white males conditional on blood pressure, yields narrower bounds:
29.4 ≤ mean life years for (age 50, NH black male, not HBP) ≤ 35.4
14.7 ≤ mean life years for (age 50, NH black male, HBP) ≤ 22.9
31.9 ≤ mean life years for (age 50, NH white male, not HBP) ≤ 36.4
16.3 ≤ mean life years for (age 50, NH white male, HBP) ≤ 25.4
Data sources: National Health and Nutrition Examination Survey (NHANES) and life tables from the Centers for Disease Control.
We've used these bounded variation assumptions—again, weaker versions of instrumental variable assumptions—to good effect in a very recent paper in The Review of Economics and Statistics (Manski and Pepper) on an entirely different topic, the effect of right-to-carry laws on crime rates. If you are interested in using these bounded variation assumptions—really generalizations of traditional instrumental variable assumptions—read the paper I coauthored with John Pepper (Manski and Pepper).
During the remaining time, I want to address reasonable care. Everything I've talked about so far has been about empirical research and inference, but the remaining question is how this is going to feed into actual decision-making. Everyone, including clinicians and guideline developers, should view patient care as a problem of decision-making under uncertainty. If you go to the medical literature, this is very well recognized. For example, the Institute of Medicine (Institute of Medicine) makes this clear, writing: "clinicians must accept uncertainty and the notion that clinical decisions are often made with scant knowledge of their true impact."
To give you a deeper sense of what clinicians do, and this I find fascinating, they deal with uncertainty using various systems to develop clinical guidelines. They do a verbal ranking of evidence and verbal recommendations. The 2014 JAMA study on hypertension (James et al.), for example, makes a strong recommendation on what you should or should not do in a particular context if there is high certainty, based on evidence, of substantial benefits; a moderate recommendation if there is moderate certainty; and a weak recommendation if there is at least moderate certainty, based on evidence, of small benefits. What do these words mean? High certainty? Moderate certainty? Small net benefit? Moderate net benefit? They are words, and that's it.
What do they actually do in guideline development? I sat in on a mock session several months ago at McMaster University in Canada. They put MDs in a room and ask: "Do we really trust this evidence or not?" "Is the uncertainty high, moderate, or small?" "Is this going to be a big effect or not?" Then they each rate these on some verbal scale and average the ratings across the seven or eight MDs in the room. That's how they make decisions. That is literally what happens. As economists, we like to think about formalizing this, but you just don't find formalization. Not only do clinicians not formalize, they are actively antagonistic to formalization.
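Returning to the bounds at the top of this note, here is a minimal Python sketch of the logic: worst-case bounds on a mean arise when an outcome is unobserved for part of the population, and an ordering assumption narrows them by interval intersection. All function names and numbers below are hypothetical illustrations of mine, not the talk's actual NHANES/CDC computations.

```python
# Illustrative sketch of interval bounds under weak assumptions.
# All names and numbers are invented for exposition only.

def worst_case_bounds(p_obs, mean_obs, y_min, y_max):
    """Worst-case bounds on a mean outcome when the outcome is observed
    for a fraction p_obs of people and can lie anywhere in [y_min, y_max]
    for the rest, with no assumptions about the missing values."""
    lo = p_obs * mean_obs + (1 - p_obs) * y_min
    hi = p_obs * mean_obs + (1 - p_obs) * y_max
    return (lo, hi)

def intersect(b1, b2):
    """Combine two sets of bounds on the same quantity by intersection."""
    lo, hi = max(b1[0], b2[0]), min(b1[1], b2[1])
    assert lo <= hi, "assumptions are jointly inconsistent with the data"
    return (lo, hi)

# Bounds from data alone (toy values):
no_hbp = worst_case_bounds(p_obs=0.8, mean_obs=33.0, y_min=5.0, y_max=45.0)
hbp = worst_case_bounds(p_obs=0.5, mean_obs=28.0, y_min=5.0, y_max=45.0)

# Bounded variation assumption: the HBP group's mean is no higher than
# the no-HBP group's mean, so HBP inherits the no-HBP upper bound.
hbp = intersect(hbp, (float("-inf"), no_hbp[1]))
print(no_hbp, hbp)  # (27.4, 35.4) (16.5, 35.4): the assumption narrows HBP
```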
I can provide a long list of papers with passages to the effect of "we don't want to formalize stuff because we don't trust it; we want to do this verbal stuff." Now, obviously I'm tainted. I'm an economist, so I always want to formalize things, and I feel very uncomfortable with words; we feel comfortable with math. I'm attracted to applying decision theory, so that's what I want to talk about as the last topic. We have a standard notion of decision theory. Some decision theory is very abstract, but I want to talk about very basic ideas. If you have a decision maker, there is a choice set, and then there's something called a state space, or states of nature, which characterizes what we don't know. All the possibilities that might happen—that's the state space. You're supposed to list all the things that could possibly happen—these are colloquially called "known unknowns" rather than "unknown unknowns"—and they express partial knowledge. What do we do with the state space? Well, the states of nature—things that we might not know about—would include things we don't know about the patient's health status, how disease will progress in this patient, how the patient will respond to alternatives, and so on. You can define states of nature for the individual patient, or you can define them in terms of groups, in terms of probabilities. We typically do the latter. The whole literature on randomized experiments does not say "I'm going to predict something for a particular patient"; it predicts probabilities. In group terms, it's basically the fraction of patients with specific attributes who are ill. Now we come to this word "reasonable." Economists like to talk about optimization. If you are in medical economics with expected utility maximization and rational expectations, we have a very well-defined optimization problem, and we can talk about optimal medical care. The problem is, if you are not in that situation, then what do you do? Let's say there are two states of nature. In one state of nature, treatment A is better than treatment B, and in the other state of nature, treatment B is better than A. You don't know which state of nature holds in the real world. How are we going to define optimization if that's all you know? To give a very concrete example, it's known for cancer treatments that you get a bifurcation depending on genetic information. Some patients, if they have a particular mutation, will respond well to a particular drug, but if they don't have that mutation, the drug won't do anything. Doing the genetic test to find out whether the patient has the mutation might cost $5,000 or $10,000, so the patient may not have been tested. So again, you have two states of nature: the patient either does or does not have that mutation. How are we going to choose the optimal treatment for this patient, when the better treatment depends on whether the patient does or does not have the mutation? Standard textbook decision theory first eliminates dominated treatments, those known to be suboptimal. Then the problem is how to choose among the undominated treatments. That's the hard part. You can only do this reasonably; you can't do it optimally, because there is no one right way to do it.
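In symbols, the setup just described looks as follows; the notation is mine, not necessarily the paper's.

```latex
% A decision maker chooses c from a choice set C, facing an unknown
% state of nature s in a state space S, with welfare f(c, s).
\[
  c \in C, \qquad s \in S, \qquad f : C \times S \to \mathbb{R}.
\]
% Treatment c' is dominated if some c does at least as well in every
% state and strictly better in at least one:
\[
  \exists\, c \in C :\; f(c,s) \ge f(c',s)\;\; \forall s \in S,
  \quad \text{and} \quad f(c,s) > f(c',s)\;\; \text{for some } s \in S.
\]
% The hard case: two states s_1, s_2 with f(A, s_1) > f(B, s_1) and
% f(B, s_2) > f(A, s_2). Neither treatment dominates, so "optimal"
% is undefined without knowledge of the true state.
```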
The word "reasonable"—I've chosen to use that word in part because I'm following others. There is a very nice quote from Ferguson in his statistical decision theory book in 1967 (Ferguson): "It's a natural reaction to search for a 'best' decision rule, a rule that has the smallest risk … whatever the true state of nature." He is talking about minimizing loss rather than maximizing utility, which is why he uses the word risk. He continues: "Unfortunately, situations in which a best decision rule exists are rare and uninteresting. For each fixed state of nature, there may be a best action for the statistician to take. However, this action will differ, in general, for different states of nature, so no one action can be presumed best overall." Ferguson's definition of reasonable is "A reasonable rule is one that is better than just guessing." Ferguson is a very rigorous mathematician, but this is the way he puts it. He clearly thought about that sentence quite a bit, and that's as much as he could say. What are reasonable decision criteria? If you're an economist, you say: well, we should be Bayesian. If we don't know something and we don't have rational expectations, we'll write down a subjective distribution and maximize subjective expected utility. That's become very ingrained. I don't want to criticize that too harshly, because in situations where you feel you can credibly put a subjective distribution on something, we do it; I'm sure I do it myself in day-to-day life. In some cases, I maximize subjective expected utility. The problem is that often you can write down a subjective distribution, but it may be just for convenience. A big problem in the Bayesian literature is that people often don't know what subjective distribution to use, and it matters critically. It turns out that in the medical literature there has been controversy about this for a long time. If you ask why the MDs aren't counseled to be Bayesian—and there are biostatisticians, Bayesian statisticians, who are telling them to be Bayesian—they don't do it. The problem basically is: where does the prior come from? They know that the results you get depend critically on the prior, so they don't want to use one. That leads to ambiguity, my final topic. Ambiguity is basically: how are you going to make decisions without feeling comfortable writing down a subjective distribution for what it is that you don't know? Ambiguity is deep uncertainty. There are multiple criteria in the literature because, as I said, there can't be a consensus; you can only find things that are reasonable, not perfect. But there's a deep idea: if I face ambiguity, I should try to find a decision criterion that behaves adequately in some well-defined sense across all states of nature—uniformly adequate across all states of nature. Then the question is how we are going to formalize this. There are two classical ideas. The one that everyone here is familiar with is maximin. You look for uniformly adequate behavior by asking: "What's the worst that could happen if I do treatment A?" and "What's the worst that could happen with treatment B?" Then you choose the treatment with the least bad worst outcome. I think everyone's familiar with maximin. It goes back formally to the 1920s, to von Neumann; informally, it probably goes back to the Bible. It's such a simple idea. The problem with maximin, of course, is that it is pessimistic.
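For concreteness, here are minimal statements of the two criteria just contrasted, in the same notation as above; again, the formalization is mine.

```latex
% Bayes: put a subjective prior \pi on the state space S and maximize
% subjective expected welfare:
\[
  \max_{c \in C} \int_S f(c, s)\, d\pi(s).
\]
% Maximin: evaluate each treatment by its worst case across states and
% choose the least bad worst case:
\[
  \max_{c \in C} \; \min_{s \in S} f(c, s).
\]
```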
You just look at worst-case outcomes and explicitly say: let's just be pessimistic. This is where minimax regret comes in. I know from experience that most people have not heard of minimax regret. The formal literature on this dates back to the 1950s; Leonard Savage originated it (Savage), but let me just define it. Instead of the worst case, look at each possibility, each state of nature, and find the best that could happen in that state. What would be optimal? If I knew the truth, then I could optimize; what would be my optimal welfare if I did know the truth? Now, I don't know the truth, but imagine I did. Then imagine I choose some suboptimal, inferior alternative. I don't optimize; I do something that's worse. There will be some loss. Say I choose treatment B when treatment A is actually optimal; then there is some loss from choosing treatment B. That's a type-one error relative to treatment A. What is the magnitude of that loss? That is what was missing in hypothesis testing: how much do I lose by choosing treatment B when treatment A was actually better? The word that got used for that loss is regret, which is nice because it's like psychological regret. It's not exactly the same thing, because psychological regret is ex post; here we're looking at it before the decision is made: what regret would I have if I made the wrong decision? I can do that both ways. If I choose treatment B when treatment A is actually better, I'm going to suffer some regret. On the other hand, if I choose treatment A when treatment B is better, I'll have some regret. Regret is bad, so you want to minimize it. If you minimize maximum regret across all states of nature, that's the minimax regret criterion. Mathematically, it's a saddle point. I think it's easy to describe it another way. To think about minimax regret—which I find more comfortable these days—don't use the word regret: you choose the treatment that minimizes the maximum distance from optimality. That's what minimax regret is doing. I've been applying the minimax regret criterion to a whole set of applied problems. Read my paper in Health Economics (Manski), or my paper in Quantitative Economics (Manski) on personalizing patient care. That paper has a section on reasonable decision-making where I actually calculate the maximin and minimax regret solutions in these very practical cases. Here is a different application of the minimax regret criterion. How to choose sample size for randomized experiments is a very important problem. This is another situation with a kind of tyranny of hypothesis testing. The standard way of choosing sample size in randomized clinical trials in medicine, and also in many economic experiments in development or public policy, is by doing statistical power calculations. You pose some hypothesis test with a null hypothesis. The alternative hypothesis has to have an effect size that is called, in the medical literature, the "minimum clinically important difference," or in labor market program evaluation, "how much of an improvement in labor market outcomes do I think is minimally meaningful." That's how you pose the alternative hypothesis. Then you ask: what sample size is sufficient to get the probability of a type-two error below 0.2? That's the way it's done in medical clinical trials, whether they're for FDA drug approval or for funding from the National Institutes of Health. It's done in the United States, Canada, and all over Europe; I assume in Asia as well. All across the board, they use these power calculations. Again, why am I going to choose a sample size in a randomized experiment based on power calculations? Hypothesis testing is very remote from decision-making.
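Before moving on, here is a minimal numerical sketch, in Python, of how maximin and minimax regret can disagree in the two-state mutation example discussed above. The welfare numbers are invented for illustration; they come from no study.

```python
# Toy two-state, two-treatment example in the spirit of the mutation story.
# welfare[c][s] = welfare of treatment c in state of nature s (invented).
welfare = {
    "A": {"mutation": 0.9, "no_mutation": 0.3},  # targeted drug
    "B": {"mutation": 0.5, "no_mutation": 0.6},  # standard care
}
states = ["mutation", "no_mutation"]

# Best achievable welfare in each state: what an omniscient chooser gets.
best = {s: max(welfare[c][s] for c in welfare) for s in states}

# Regret of c in state s = distance from optimality in that state.
regret = {c: {s: best[s] - welfare[c][s] for s in states} for c in welfare}

# Maximin: best worst-case welfare. Minimax regret: least worst-case regret.
maximin_choice = max(welfare, key=lambda c: min(welfare[c].values()))
mmr_choice = min(regret, key=lambda c: max(regret[c].values()))

print(maximin_choice)  # B: its worst case (0.5) beats A's worst case (0.3)
print(mmr_choice)      # A: its max regret (0.3) beats B's max regret (0.4)
```

The two criteria pick different treatments here, which is the point: both are reasonable ways to be uniformly adequate across states, and neither is "the" optimum.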
So instead, if we're going to think about choosing a sample size in a randomized experiment, why not think about it with a decision in mind? I'm going to use the data from a randomized experiment to help me make a decision, so think about choosing a sample size that will get me close enough to a good decision. Consider a classical experiment where there are no identification problems; the only issue is the sample size, and that is just statistical imprecision. If I increase the sample size, I'll learn more and more, so I should just let the sample size go to infinity. The problem, of course, is the cost. You are going to choose a finite sample size because it's costly to run experiments, but let's think of that from the decision-making perspective, not from a hypothesis-testing perspective. The way you do that formally is with minimax regret and decision theory. You use the finite-sample version of this, which was developed and brought to fruition by Abraham Wald in 1950 in his book on statistical decision functions. I never learned statistical decision theory in graduate school, but I'm old enough to have seen it, when I was an assistant professor and in contact with statisticians. It isn't taught anymore, not in econometrics, not in statistics. But it's actually quite fundamental, and you can apply Wald's ideas, in a finite-sample version of the minimax regret criterion, to attack the problem of choosing the sample size of an RCT. There's a fair amount of work on this topic from the last 10 or 15 years. I list some papers here, starting with a paper of my own in 2004 in Econometrica (Manski); some follow-up papers; another paper by Schlag (Schlag), which is unpublished; an Econometrica paper by Hirano and Porter in 2009 (Hirano and Porter); a couple of Journal of Econometrics papers by Stoye (Stoye); the PNAS paper of mine and Tetenov in 2016 (Manski and Tetenov); and then a new Econometrica paper by Kitagawa and Tetenov in 2018 (Kitagawa and Tetenov). A bottom line is that when you approach the question of sample size selection for RCTs from this statistical decision theory perspective, you wind up getting a very nice result. You can look at the specific examples that Alex Tetenov and I have computed. You wind up with the conclusion that you can do quite reasonably with a much smaller sample size than is actually used. This is a result that the clinicians and the funders of medical research would really like. Instead of needing 1,000 or 1,500 patients per treatment, doing it with 300 or 400, if you look at it this way, is pretty good. The classical use of statistical power calculations turns out to be very conservative and requires a much larger sample size than you want. We have some hope that this will have some practical influence. Now consider the minimax regret problem—the problem of choice between treatment A and treatment B—where you're not sure which one's better because of identification problems. Formally, looking at that problem from a minimax regret perspective, the algebraic result that comes out is that you should not put everyone in treatment A or everyone in treatment B. Instead, do a fractional allocation: put some fraction of people in treatment A and some fraction in treatment B. Very explicit fractions come out of the minimax regret solution, and those fractions depend on what information you have and how wide the bounds on the outcomes are. What I've been pushing for quite a while now, starting in a 2009 article (Manski), is the idea of diversifying treatment under ambiguity.
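To sketch where those very explicit fractions come from, consider the textbook two-treatment case in which treatment B's mean welfare is known and treatment A's is known only up to bounds. Under those assumptions (mine, for exposition, and not a reproduction of the paper's derivations), the minimax-regret allocation has a simple closed form:

```python
def mmr_fraction(b, lo, hi):
    """Minimax-regret fraction of patients assigned to ambiguous treatment A.

    Treatment B has known mean welfare b; treatment A's mean welfare a is
    known only to lie in [lo, hi], with lo < b < hi. Assigning fraction d
    to A yields mean welfare d*a + (1-d)*b. Regret is worst at endpoints:
      a = hi: regret = (1-d)*(hi - b)  (should have sent everyone to A)
      a = lo: regret = d*(b - lo)      (should have sent everyone to B)
    Minimax regret equalizes the two endpoint regrets, giving d below."""
    return (hi - b) / ((hi - b) + (b - lo))

# Example with invented numbers: B known to yield 0.5; A bounded in [0.2, 0.9].
d = mmr_fraction(b=0.5, lo=0.2, hi=0.9)
print(f"assign {d:.2f} of patients to A, {1 - d:.2f} to B")  # 0.57 / 0.43
```

The fraction is strictly between 0 and 1 whenever the bounds straddle the known welfare, which is exactly the diversification point: ambiguity alone, not risk preference, drives the interior allocation.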
It also has a side benefit: diversification means actually running randomized experiments, and if you do this sequentially, what I call adaptive diversification, you can learn from them. Fractional allocations are familiar to all economists. Think about financial portfolio allocation; think about treatment A and treatment B in medicine like putting money into stocks and bonds. There is a formal equivalence to allocating a portfolio between stocks and bonds, and everybody knows about the wonders of diversification. Diversification is a financial fractional allocation. There is so much that I think economists can do to contribute to this area. Medical decisions, obviously, are important, and I think the big challenge is for us to do our work in a way that lets us communicate well with clinicians, epidemiologists, and biostatisticians, who work in a very different mindset. Publisher Copyright: © 2019 Western Economic Association International
PY - 2020/4/1
Y1 - 2020/4/1
N2 - This keynote address provides a foundation for incorporating decision-theoretic methods that integrate uncertainty into decision-making for patient care. The article links the medical economics literature, health care analysis, and quantitative methods to help improve patient outcomes in the health care system. (JEL I18, I14, I10).
AB - This keynote address provides a foundation for incorporating decision-theoretic methods that integrate uncertainty into decision-making for patient care. The article links the medical economics literature, health care analysis, and quantitative methods to help improve patient outcomes in the health care system. (JEL I18, I14, I10).
UR - http://www.scopus.com/inward/record.url?scp=85075071503&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85075071503&partnerID=8YFLogxK
U2 - 10.1111/coep.12452
DO - 10.1111/coep.12452
M3 - Article
AN - SCOPUS:85075071503
SN - 1074-3529
VL - 38
SP - 227
EP - 245
JO - Contemporary Economic Policy
JF - Contemporary Economic Policy
IS - 2
ER -