Original language | English (US) |
---|---|
Pages (from-to) | 117-128 |
Number of pages | 12 |
Journal | Journal of Political Philosophy |
Volume | 28 |
Issue number | 1 |
DOIs | https://doi.org/10.1111/jopp.12210 |
State | Published - Mar 1 2020 |
ASJC Scopus subject areas
- Philosophy
- Sociology and Political Science
In: Journal of Political Philosophy, Vol. 28, No. 1, 01.03.2020, p. 117-128.
Research output: Contribution to journal › Article › peer-review
TY - JOUR
T1 - Debate
T2 - Legal Probabilism—A Qualified Rejection: A Response to Hedden and Colyvan
AU - Allen, Ronald J.
N1 - Funding Information: Ronald J. Allen (Law, Northwestern University, and China University of Political Science and Law). I am indebted for comments on a previous draft to John Norton, Michael Pardo, Alex Stein, and two anonymous reviewers; to Riley Clafton, for her research assistance; and to the Julius Rosenthal Foundation Fund for financial assistance in the preparation of the article.

It is always of great interest when distinguished scholars from other fields turn their attention to legal scholarship, as Brian Hedden and Mark Colyvan (H&C) recently have done. Brian Hedden and Mark Colyvan, "Legal probabilism: a qualified defence," Journal of Political Philosophy, 27 (2019), 448–68. This is especially so when done in the spirit "to highlight the importance of continued dialogue between legal epistemology and formal epistemology." Ibid., p. 467. The results of such work in my field—the implications of naturalized epistemology for western legal systems—have often been profound. See, e.g., Ronald J. Allen and Brian Leiter, "Naturalized epistemology and the law of evidence," Virginia Law Review, 87 (2001), 1491–550. Examples are the philosopher L. Jonathan Cohen's book, The Probable and the Provable (1977), the decision theorist David Schum's book, The Evidential Foundations of Probabilistic Reasoning (1994), and, more recently, the epistemologist Larry Laudan's book, Truth, Error and Criminal Law (2006). And Laudan's follow-up book, Larry Laudan, The Law's Flaws: Rethinking Trials and Errors? (London: College Publications, 2016). For a less successful effort, see our analysis of Kaplow's "economic analysis," Ronald J. Allen and Alex Stein, "Evidence, probability, and the burden of proof," Arizona Law Review, 55 (2013), 557–602. Like the gravitational waves emanating from the mergers of black holes, these books sent ripples through the relevant universe that continue today.

In their article, H&C certainly take up an important question—the meaning and implications of different forms of probability for legal systems—but, unfortunately, they do not embed their analyses in a plausible normative framework, or an adequate understanding of the legal literature, its conceptual difficulties, and the operation of legal systems. To facilitate the dialogue to which they wish to contribute, I briefly discuss these points.

Law and probability have been intertwined probably (no pun intended) since the beginning of civilization, but certainly since Leibniz investigated the contingencies that are the grist of the mill for legal proof. See Ian Hacking, The Emergence of Probability (Cambridge: Cambridge University Press, 1975). For criticisms of Hacking, see Daniel Garber and Sandy Zabell, "On the emergence of probability," Archive for History of Exact Sciences, 21 (1979), 33–53, at pp. 33–5. Modern legal scholarship began exploring this terrain with the seminal article by John Kaplan in 1968. John Kaplan, "Decision theory and the fact finding process," Stanford Law Review, 20 (1968), 1065–92. Initially, it was accepted that conventional probability theory applied directly to legal decision making, and the problems were at the margins—such as the idiosyncratic case of naked statistical evidence. See, e.g., David Kaye, "Naked statistical evidence," Yale Law Journal, 89 (1980), 601–11. But then L. Jonathan Cohen identified certain proof paradoxes that he tried to resolve by reconceiving the nature of juridical proof from Pascalian deductive logic to Baconian inductive logic. L.
Jonathan Cohen, The Probable and the Provable (Oxford: Oxford University Press, 1977). Shortly thereafter, I extended the paradoxes and offered a reconceptualization of trials, rather than of probability, to solve the problem. Ronald J. Allen, "A reconceptualization of civil trials," Boston University Law Review, 66 (1986), 401–37. Neither solution worked; instead deeper anomalies were discovered. Ronald J. Allen, "Rationality, algorithms, and juridical proof: a preliminary inquiry," International Journal of Evidence and Proof, 1 (1997), 254–75. Fast-forwarding to the present, legal scholarship has undergone a paradigm shift, replacing probabilism with explanationism as the best explanation of juridical proof, exemplified by the symposium in the International Journal of Evidence and Proof. See Ronald J. Allen and Michael Pardo, "Relative plausibility and its critics," International Journal of Evidence and Proof, 23 (2019), 5–59.

H&C base their criticism of the move from probability to explanation on a sparse review of some of the literature, joining a cottage industry of experts from cognate disciplines addressing probability theory as it applies in the legal context. See, e.g., James Franklin, "The objective Bayesian conceptualization of proof and reference class problems," Sydney Law Review, 33 (2011), 545–61; Martin Smith, "When does evidence suffice for conviction?" Mind, 127 (2018), 1193–218; Yakov Ben-Haim, "Assessing 'beyond a reasonable doubt' without probability: an info-gap perspective," Law, Probability and Risk, 18 (2019), 77–95; David Enoch and Talia Fisher, "Sense and sensitivity: epistemic and instrumental approaches to statistical evidence," Stanford Law Review, 67 (2015), 557–622; David S. Schwartz and Elliott R. Sober, "The conjunction problem and the logic of jury findings," William and Mary Law Review, 59 (2017), 619–92. For the difficulties in the analysis of Schwartz and Sober, see Allen and Pardo, "Relative plausibility and its critics." Their major point is to "fault" this transformation "for assuming an outdated and inexhaustive catalogue of 'interpretations' of probability … which lead them to ignore the possibility of integrating the explanatory and other epistemological considerations that they rightly emphasize into a probabilistic framework." Hedden and Colyvan, "Legal probabilism," p. 450. As demonstrated below, this claim of fault is false. In addition, H&C make a fundamental error about what the conceptual problem is. Together, these errors vitiate their criticisms of explanationism as the best explanation of juridical proof at present. There is no joy in the criticism of others, but discussing mistakes and misapprehensions may help to clarify what the actual issues are and encourage a productive dialogue around those issues. That, in turn, may further the goal that I share with H&C: "to highlight the importance of continued dialogue between legal epistemology and formal epistemology." Ibid.

Few things are as important within legal systems as reducing errors and allocating them appropriately. Without accurate fact finding, rights are meaningless. See the essays in Ronald J. Allen, Professor Allen on Evidence, vol. 1 (Beijing: China University of Political Science and Law Press, 2014). Critiquing H&C faces an immediate difficulty. They claim to be concerned with "a normative version of legal probabilism," but they never explain what that means.
They implicitly assert that an unarticulated normative vision would yield the conclusion that "legal probabilism" is the normatively correct approach to take, but never address why that is the case. The sole objective is to fend off a few of the criticisms of conventional probability as applied to standards of proof, but they never consider how the meaning of standards of proof might be affected by the surrounding procedural context, the availability of different forms of evidence, or human cognitive capacity. It is a perfect example of a reflexive attack on anything that challenges the normativity of conventional probability theory without in any way defending that normative vision. To the extent one can piece it together, their normative vision is simply error reduction in civil cases and error allocation in criminal cases. Hedden and Colyvan, "Legal probabilism," p. 450. Such a normative vision is simplistic at best. The normative foundations of dispute resolution are complicated and contested. For a sampling of pertinent work, see Kevin M. Clermont, Standards of Decision in Law: Psychological and Logical Bases for the Standards of Proof, Here and Abroad (Durham: Carolina Academic Press, 2013); William Twining, Rethinking Evidence (Cambridge: Cambridge University Press, 1990); Robert P. Burns, A Theory of the Trial (Princeton: Princeton University Press, 1999); Amalia Amaya, The Tapestry of Reason: An Inquiry into the Nature of Coherence and Its Role in Legal Argument (Oxford: Hart Publishing, 2015). For information about the many functions of evidence law in addition to its epistemological tasks, see Ronald J. Allen, "A note to my philosophical friends about expertise and legal systems," Humana.Mente: Journal of Philosophical Studies, 28 (2015), 79–97.

From this limited perspective, H&C conclude that the important task is to defend legal probabilism, the thesis that legal standards of proof are best understood as probabilistic in form: the standard of proof should be concerned with whether the state's … or the plaintiff's … case has been established to such a degree as to justify a probability of guilt or liability above some threshold, all of which matters, of course, only because of the effect on errors. This is peculiar. Just about everyone studying juridical proof thinks both that the legal system is structured in part to allocate errors, and that it should be. Allen and Pardo, "Relative plausibility and its critics," p. 2. Hock Lai Ho, A Philosophy of Evidence Law (Oxford: Oxford University Press, 2008), may be an exception. The proper distribution of errors may be another matter. See Alex Stein, Foundations of Evidence Law (Oxford: Oxford University Press, 2005). The essence of the relative plausibility theory that H&C are criticizing is that one gets to the most probable explanation through plausible reasoning, a point that has been made repeatedly in the literature. An example is Ronald J. Allen and Sarah Lively, "Burdens of persuasion in civil cases: algorithms v. explanations," Michigan State Law Review (2004), 893–944. Perhaps H&C are fighting a rather lonely battle with no other belligerents on the field. And, as I explain below, they fail to make any demonstration whatsoever that embracing "legal probabilism" would somehow advance the law's goals.

A conflict does emerge when they embrace quantitative measures for the standards of proof, such as a preponderance being greater than 0.5, or a range around 0.5.
Here, by ignoring that "ought" should imply "can," Allen and Leiter, "Naturalized epistemology and the law of evidence," their argument becomes entirely epiphenomenal to understanding or prescribing for the legal system. If H&C are in fact concerned about error allocation, and wish to impose a probabilistic framework, then to require that the plaintiff meet a greater-than-0.5 standard would require the plaintiff to prove all the ways the world might have been at the time in question, and that at least half of those ways, plus one, favored liability. Allen and Lively, "Burdens of persuasion in civil cases," pp. 931–2; Ronald J. Allen, "The nature of juridical proof," Cardozo Law Review, 13 (1991), 373–442. That is absurd. It would require essentially infinite resources and, in any event, would be impossible in the normal case (the world does not always cooperate by laying out its many possible worlds in an orderly fashion). This is an example of an argument for the application of probability that neglects its very foundations. Unless H&C are not operating within mathematical probability, in which case their arguments are incoherent, the probability space must fill to 1.0. In the typical legal case, no one has a clue what that might mean. Instead, the parties choose what to litigate; they discard rather than generate ambiguity. The fact finder then fashions the result in the face of the alternatives advanced.

The result in a standard civil case is, to unrealistically use numbers, that the fact finder concludes that the probability of the plaintiff's case is 0.4 and the defendant's 0.2, and no one knows what happened to the rest of the probability space. For whom should one decide? The legal system in fact says the plaintiff, notwithstanding instructions on burdens of proof (Allen and Pardo, "Relative plausibility and its critics," p. 10), but H&C's normative error-reduction view says the defendant, because the plaintiff has not proved its case to greater than 0.5. This will result in more errors than a finding for the plaintiff in such cases—an obviously lamentable "normative" outcome. None of this follows from relative plausibility, which is one of its attractions, and why, even on H&C's limited version of normativity, it is normatively superior. (A schematic comparison of the two decision rules appears after this passage.)

People use probability language all the time, e.g. "there is a 75% chance that it will rain over the weekend." Such probability talk provides no path to an application of probability theory to trials. First, weather predictions even by trained meteorologists are notoriously unreliable and hardly could amount to an attractive normative model for legal decision making. Second, as has been known since Edward N. Lorenz's seminal article on chaotic systems, "Deterministic nonperiodic flow," Journal of the Atmospheric Sciences, 20 (1963), 130–41, generally speaking, weather will never be predictable (computable) at anything more than a short period of time, because it is composed of a very large number of chaotic variables and essentially forms what today is called a complex adaptive system. Law involves the opposite effect. It requires going from present conditions to a prediction (retrodiction) of the way the world was at some time in the past. The human condition, just as much as the weather, is subject to an enormous number of interactive variables, and thus the problem of the law, if viewed computationally, is precisely what is identified in the text. I am indebted to an anonymous reviewer for having pointed out the need to explain this point.
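To make the contrast concrete, here is a minimal sketch, using the hypothetical 0.4/0.2 numbers from the illustration above. The function names and structure are mine, not the article's; the point is only that a threshold rule and a relative-plausibility rule come apart whenever part of the probability space is unaccounted for.

```python
# A minimal sketch (illustrative only, not the article's formalism) of the
# two decision rules at issue when part of the probability space is
# unexamined at trial.

def threshold_verdict(p_plaintiff: float, cutoff: float = 0.5) -> str:
    """Probabilist rule: plaintiff wins only if P(plaintiff's case) exceeds the cutoff."""
    return "plaintiff" if p_plaintiff > cutoff else "defendant"

def relative_plausibility_verdict(p_plaintiff: float, p_defendant: float) -> str:
    """Explanationist rule: find for the party with the more plausible account."""
    return "plaintiff" if p_plaintiff > p_defendant else "defendant"

# The article's illustrative numbers: plaintiff 0.4, defendant 0.2,
# with the remaining 0.4 of the probability space examined by no one.
p_plaintiff, p_defendant = 0.4, 0.2

print(threshold_verdict(p_plaintiff))                           # defendant
print(relative_plausibility_verdict(p_plaintiff, p_defendant))  # plaintiff
```

On the stated numbers, a verdict for the plaintiff is correct with probability 0.4 and a verdict for the defendant with probability 0.2, so even on a pure error-reduction metric the threshold rule fares worse here, which is the article's point.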
Moreover, if one gives up computation, as perhaps H&C do, then their work has literally no implications for understanding, explaining, justifying or critiquing the legal system. I briefly explain these points in the remainder of this article.

Relative plausibility has other normative advantages if "ought" should imply "can." H&C refer to reference classes and degrees of belief, but, as they recognize, no one has ever explained where the evidence will come from for the vast number of reference classes that would be pertinent to any real trial, nor do H&C explain why there is any reason to believe that asking a fact finder after deliberating on evidence to put a number on a "credence" or a "degree of belief," or to formulate an "evidential probability" (see discussion below) in numeric terms, would advance accurate fact finding—and thus is normatively superior. Nor do they explain how such matters increase understanding of juridical proof or the legal system. Data exists at trial primarily in explanatory forms and the entire process forces parties to provide competing explanations. See, e.g., Allen and Pardo, "Relative plausibility and its critics"; Allen and Lively, "Burdens of persuasion in civil cases," pp. 936–7. It is completely mysterious how any version of probability theory is supposed to be operationalized without access to the necessary data. The probabilistic conclusion, in other words, is a label placed on an explanatory process, and no reason is given that even suggests a probabilistic labeling process will improve anything, practically or conceptually.

H&C's misunderstanding of what is at stake in understanding juridical proof is captured by their dismissal of objections to formal approaches to proof because "[t]here is no algorithm that will tell us what the evidential probability of a hypothesis is, given some body of evidence." This is lacking in explanatory accounts as well, and so H&C conclude that "[i]f the inability to give an algorithmic characterization is not a problem for Allen and Stein's own view, it is unclear why it should be a problem for legal probabilism." Hedden and Colyvan, "Legal probabilism," p. 454. I will clarify. The attraction of probabilism lies in algorithmic decision making that may increase the proportion of correct results—the main reason why many adherents to probabilistic reasoning equate it with rationality. For an early manifestation of this in the legal literature, see Ward Edwards, "Summing up: the society of Bayesian trial lawyers," Boston University Law Review, 66 (1986), 937–41. For a discussion, see Max Albert, "Bayesian rationality and decision making: a critical review," Analyse & Kritik, 25 (2003), 101–17. Relative plausibility arose as the better explanation of juridical proof in significant part because the necessary foundations for any serious probabilistic reasoning do not exist, and thus an algorithmic approach cannot be implemented. Allen, "Rationality, algorithms, and juridical proof." Relative plausibility exploits this weakness by providing a non-algorithmic alternative. H&C, in other words, mistake an explanatory feature of relative plausibility for a criticism of it.

H&C claim that "[a] serious challenge to legal probabilism must take aim at the very structure of probability, and not at various inessential theses that some probabilists have occasionally endorsed." Hedden and Colyvan, "Legal probabilism," p. 454. I agree, but unfortunately they do not follow their own advice.
H&C have noted a few scattered criticisms of probabilism, taken them out of context, given responses to them that have been disposed of in the pertinent literature, added nothing new to rescue probabilism from the critiques, and then proclaimed probabilism a success. I will give an example. H&C claim that the justification that Alex Stein and I have for dismissing a relative frequency account of juridical proof is that we "object that legal probabilism, using a frequentist interpretation of probability, would require ignoring (or at least pushing to the background) particular facts of the case and paying attention primarily to general frequencies." Ibid., p. 451. Below is what we actually said; the reader can decide on the adequacy of H&C's presentation, and thus the seriousness of their argument:

We show that the relative plausibility approach outperforms mathematical probability operationally and normatively. Application of mathematical probability in the courts of law engenders paradoxes and anomalies that are not easy to avoid or explain away. Relative plausibility, on the other hand, faces no such predicaments. It seamlessly resolves all uncertainty problems that might arise in adjudicative factfinding. A further advantage is its alignment with the natural reasoning of ordinary people, which reduces the cost of adjudication and helps the legal system guide individuals' behavior. Last, but not least: relative plausibility is the best available tool to get factfinders to the actual facts of the case they are asked to resolve. Mathematical probability, on the other hand, abstracts away from those facts. As a substitute, it prods factfinders to derive their decisions from the general frequencies of events. Allen and Stein, "Evidence, probability and the burden of proof," p. 560.

The legal literature has taken aim "at the very structure of probability" as it applies to juridical proof and demonstrated that the conditions under which any serious form of probabilism could advance truth finding do not exist in western legal systems. See, e.g., Allen, "Rationality, algorithms and juridical proof." H&C, ignoring their own advice, do not address that literature in a serious fashion. But their version of probabilism isn't serious; so far as I can tell, it consists entirely of a fact finder doing the hard work of processing and deliberating on evidence and, after reaching a conclusion, putting a number on it. If that is all H&C want to defend, I will not object further. See below for a discussion of what H&C think is a new approach to probabilism that avoids these difficulties. They are wrong about that.

In another example of H&C responding to a serious problem unhelpfully, they purport to solve the conjunction problem by asserting as though it were an original thought (there are no citations) that the legal system should demand that parties prove that the conjunction of the necessary elements "has probability above 0.5 … This strikes us as a modest and sensible improvement to the legal system and one which dissolves the conjunction paradox for legal probabilism." Hedden and Colyvan, "Legal probabilism," p. 458. This idea has been discussed and thoroughly rejected in the legal literature for over thirty years, beginning with Ronald J. Allen, "A reconceptualization of civil trials," p. 407. Various scholars have come up with remarkably creative ways to try to avoid the problem: see Clermont, Standards of Decision in Law. (A numeric sketch of the conjunction arithmetic follows this paragraph.)
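To see the arithmetic H&C's proposal has to contend with, here is a minimal sketch with hypothetical figures. The independence assumption, the element counts, and the numbers are mine, for illustration only; the elements of real claims are rarely independent.

```python
# Hypothetical illustration of the conjunction problem and of the proposed
# "conjunction above 0.5" fix. Assumes, unrealistically, that the elements
# of a claim are probabilistically independent, so the conjunction's
# probability is the product of the elements' probabilities.

from math import prod

# Classic conjunction problem: each element clears 0.5, the conjunction does not.
elements = [0.7] * 6                    # six elements, each proved to 0.7
print(round(prod(elements), 3))         # 0.118

# The fix requires the *conjunction* to exceed 0.5. With n independent
# elements, the geometric average of the element probabilities must then
# exceed 0.5 ** (1/n), which grows with the number of elements:
for n in (2, 4, 7):
    print(n, round(0.5 ** (1 / n), 3))  # 2 -> 0.707, 4 -> 0.841, 7 -> 0.906
```

On these assumptions, an offense with seven elements would require each element, including intent, to be proved on average to roughly 0.91, while a two-element offense would need only about 0.71; this is the theft/murder asymmetry described in the next passage.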
There is a dwindling number of holdouts; Schwartz and Sober, "The conjunction problem and the logic of jury findings." For the difficulties in the analysis of Schwartz and Sober, see Allen and Pardo, "Relative plausibility and its critics." Nance also thinks the conjunction effect is not a problem, but then subscribes to something resembling relative plausibility nonetheless; see the quote from Nance, n. 49 below. Rather than solving the problem, proving the conjunction of multiple elements creates new problems. Legal liability usually depends on numerous elements, not just two (as in H&C's example), and requiring a plaintiff to prove a conjunction of six or seven elements would make it nearly impossible to impose liability. This would not be a problem if trial occurred with a fully specified probability space—with knowledge of all the ways the world might have been at the time in question—but that knowledge never exists.

Requiring proof of the conjunction of elements also leads to peculiar outcomes. For example, theft has many more elements than murder, but both require intent. Under H&C's purported solution, a person could be convicted of murder based on a lower probability of intent than would be necessary to convict for theft. That does not logically disprove anything about probabilism, but it is strange indeed that the key component of culpability for theft would have to be proved to a higher probability than for murder, and hardly an argument for probabilism. Allen, "A reconceptualization of civil trials," p. 407; Allen, "Rationality, algorithms, and juridical proof," p. 272. These references contain further examples of the weirdness of the conjunction solution which H&C do not address. A reviewer pointed out the need to explain the example in the text. The point is that the greater the number of elements, the higher each one on average has to be proved. Thus, on average, intent to commit theft has to be proved to a higher probability than intent to murder, which is weird.

But what if H&C are not advancing a serious form of probabilism? What if it is just a lovely conceptual framework to be deployed if useful but not taken seriously? This seems to be their position. Rather than committing to a form of probabilistic reasoning that would have been recognized by Savage, de Finetti, or anyone else concerned with improving decision making, their version of probability simply has been absorbed by explanatory reasoning. They say that "the legal probabilist has more resources at her disposal than critics have recognized," Hedden and Colyvan, "Legal probabilism," p. 466, and that the critics "overstate the case against subjective Bayesian interpretations of probability"—and neglect a new defense of probabilism that rests on the idea of "evidential probability … [that] incorporates precisely the elements they regard as so important, namely explanatory power, simplicity, comprehensiveness, and so on." Ibid., p. 453. I address this "new" idea of "evidential probabilism" next—but, regarding the resources of probabilists, if a reformulated view of subjective Bayesianism and a new addition to the epistemologist's lexicon—evidential probability—incorporate the lessons of explanatory theory, then they have lost any independent explanatory power.
As Kiel Brennan-Marquez put it recently, "The problem with … incorporating [into Bayesianism] the epistemic that defines explanationism is not that it makes subjective probabilism wrong per se; it's that it turns subjective probabilism into a species of explanationism, thus draining probabilism of descriptive power on its own terms." Kiel Brennan-Marquez, "The probabilism debate that never was," International Journal of Evidence and Proof, 23 (2019), 141–6, at p. 142, n. 6. This "defense of probabilism" becomes just another recognition that probabilism has been supplanted by explanationism, although obviously probability remains a tool to be employed when appropriate by a fact finder. Which is the primary point of Ronald J. Allen, "The nature of juridical proof: probability as a tool in plausible reasoning," International Journal of Evidence and Proof, 21 (2017), 133–42. It is just one of many. If H&C accept that proposition, then once more it is unclear with whom they think they are disagreeing.

To be fair to H&C, the juridical proof literature is complex and nuanced, and it may be asking too much of commentators from other fields to have mastered it completely. However, H&C make a similar claim about legal scholars exploiting the work of epistemologists—that legal scholars have neglected modern developments in formal epistemology that aid in understanding juridical proof. Their claim that legal scholars have missed "fairly recent developments in formal epistemology" rests explicitly upon their assertion that legal scholarship is unaware of Timothy Williamson's controversial book, Knowledge and Its Limits (2000), although they are perhaps simply using this as an example of what might be called an explanatory turn in some recent epistemology. See Williamson on Knowledge, ed. Patrick Greenough and Duncan Pritchard (Oxford: Oxford University Press, 2009); L. BonJour, "The myth of knowledge," Philosophical Perspectives, 24 (2010), 57–83; Jessica Brown, Fallibilism: Evidence and Knowledge (Oxford: Oxford University Press, 2018). H&C instruct that this book is indicative of a new game in town (although actually over 19 years old), entitled "evidential probability," that resolves all the difficulties the legal scholars have with conventional probability theory, and thus that all of the concerns about it "fail." If only it were that simple.

Instead, H&C are wrong on both counts: Williamson's book has little significance for understanding juridical proof (nor do other explanatory accounts in the epistemological literature, for the reasons discussed above—they deprive probability theory of any useful content, except as one of the many tools in the cognitive tool box), and in any event it has been vetted by legal scholars. First, its insignificance. As suggested above, the problems with probabilism are substantial. H&C admit that the problems with the two primary interpretations pertinent to legal systems—relative frequency and subjective Bayesianism—are either fatal or at least render probability theory problematic; but not so Williamson's and perhaps others' construct of evidential probabilism. Quite remarkably for those criticizing the conceptual failure of others, H&C tell us nothing about how "evidential probabilism" would solve the difficulties that conventional probability faces.
For good reason: Williamson, like H&C, does not provide a single example of how the use of the various concepts he discusses would be operationalized in realistic legal settings, or give any reason to think that what he is advocating would improve legal decision making—or any other realistic decision-making process for that matter. Williamson is not trying to explain the legal system; he is trying to reconstruct epistemology. Nevertheless, H&C extract from this explanatory turn a blueprint for a new vision of probabilistic juridical proof, but the path to that result is completely opaque. The new vision, they say, is that "the relevant probability would be the probability that the defendant is guilty or liable, given the admissible evidence presented at trial along with some mundane background evidence about our physical and social world." Hedden and Colyvan, "Legal probabilism," p. 453. This is another manifestation of H&C's simplistic understanding of juridical proof. The background knowledge relied upon is hardly "mundane." It is instead a critical component of what it means to be "evidence"; Ronald J. Allen, "Factual ambiguity and a theory of evidence," Northwestern University Law Review, 88 (1994), 604–40.

The insight—based on modern developments in formal epistemology that legal scholars have neglected—seems to be that people hear evidence, process and deliberate upon it through the use of their background knowledge and tools of rational thought, and reach a decision. Well, no kidding. What do H&C think anyone else is proposing fact finders do besides appraise the evidence in light of their understanding of the world, using the tools of rational thought, to make their best decision? It is a bit of a mystery. It is difficult to imagine the alternative to processing evidence by reference to one's knowledge and experience. That is what legal fact finding has involved for centuries, because that is what humans do. The additional component on offer here—once again—is that, after reaching a decision, the fact finder is to put a number on it called a "probability." Doing so is completely epiphenomenal; it would be at most a label attached as an afterthought. In this regard, evidential probabilism is just a specific example of the general problem of the probabilistic approaches discussed above. The problems of probabilism as an explanation of juridical proof that have been exposed and more or less resolved by the turn to explanationism lie in its formalization. Converting "probability" to a hunch based on the evidence available to a fact finder, appraised in light of his or her background knowledge, resolves none of those problems.

In the absence of any plausible example of how evidential probability would make any difference at trial, H&C obviously fail to give any reason to believe that evidential probability is normatively superior to explanationism as an approach to juridical fact finding. On the embrace of explanationism and the rejection of probabilism by the courts, see Allen and Pardo, "Relative plausibility and its critics." Relatedly, why H&C think that adding a completely unnecessary step to decision making (the addition of a meaningless label referring to probability) is going to increase accuracy or otherwise entail a normative improvement is another mystery for which H&C do not provide a word of explanation.
This invocation of "new developments in epistemology" simply restates the problem (what is the manner in which people process, deliberate on the evidence, and decide, and what are the implications of such things for juridical proof?) as though it were the solution, and adds nothing.

Moreover, notwithstanding H&C's claim of ignorance on the part of legal scholars, Williamson's work has been vetted by legal scholars—even to the limited extent it is relevant to the debate between probabilism and explanationism. See Allen and Pardo, "Relative plausibility and its critics," nn. 74, 109, 292, and accompanying text; David Enoch and Talia Fisher, "Sense and sensitivity: epistemic and instrumental approaches to statistical evidence," Stanford Law Review, 67 (2015), 557–622; David Enoch, Levi Spectre, and Talia Fisher, "Statistical evidence, sensitivity, and the legal value of knowledge," Philosophy and Public Affairs, 40 (2012), 197–224; Michael Pardo, "Safety vs. sensitivity: possible worlds and the law of evidence," Legal Theory, 24 (2018), 50–75; Alex Stein, "The new doctrinalism: implications for evidence theory," University of Pennsylvania Law Review, 163 (2015), 2085–107; Smith, "When does evidence suffice for conviction?" Mind, 127 (2018), 1193–218 (recognizing the irrelevance of Williamson's work for legal evidentiary issues). These articles focus more on sensitivity and safety than "evidential probability" because, as so far developed, "evidential probability" has no discernible implications for the problems facing legal scholars.

Even more remarkably, given the claim of ignorance, the central idea of "evidential probabilism" has been developed in considerable detail in another important body of work that goes unnoticed by H&C. The disregard of the legal literature is jarring for authors criticizing that literature. In addition to the previously noted lapses, they make no reference to numerous contributions of Michael Pardo and Alex Stein that engage explicitly and thoroughly with the epistemological problems of constructing legal systems. Dale Nance, in The Burdens of Proof: Discriminatory Power, Weight of Evidence, and Tenacity of Belief (2016), develops a theory of juridical proof that depends in significant part on "epistemic probabilities" and embeds it in the larger epistemological issues that H&C claim to be addressing. Dale A. Nance, The Burdens of Proof: Discriminatory Power, Weight of Evidence, and the Tenacity of Belief (Cambridge: Cambridge University Press, 2016), pp. 43–57. Nance's account may be highly similar to Williamson's in critical respects, and the difficulty in judging how similar they are lies in the difficulty of understanding how Williamson's explanation may be pertinent to legal proof, to which H&C add nothing. Nance does not engage with Williamson's work. That matters, because Nance's account is compatible with, and perhaps an extension of, relative plausibility, which in turn suggests that Williamson's account is as well. Nance says that commonsense, plausible reasoning "serve[s] important functions relative to both the assessment of discriminatory power and the choice of Keynesian weight … [P]lausible reasoning serves as a tool for the analysis of evidence in commonsense terms. Even as litigation comes with ready-made contending hypotheses (C and not-C), those general claims typically will be refined at trial to specific theories of the case, one or more for the claimant instantiating C and one or more for the defendant instantiating not-C.
In deliberation, though, the fact-finder will often find it necessary to consider other alternatives. And as Peirce noted, abduction (or inference to the most plausible explanation) becomes a critical tool by which commonsense reasoning develops such additional hypotheses. An assessment of Keynesian weight must be made relative to the contending hypotheses, and as these hypotheses change, some modification of the practical optimization of Keynesian weight may become necessary"; Nance, The Burdens of Proof, pp. 140–1. Williamson's account thus may be further evidence for the insignificance of conventional probability in understanding juridical proof rather than, as H&C claim, evidence to the contrary.

H&C may be relying on Williamson's work as simply an example of evidential probabilism rather than as an explicit framework, but it is interesting to note that Williamson recently focused on the relationship between probabilism and explanationism, describing it thus: "Inference to the best explanation does not directly rank potential explanations according to their probability. This does not automatically make it inconsistent with a probabilistic epistemology … Inference to the best explanation may be a good heuristic to use when—as often happens—probabilities are hard to estimate, especially the Bayesian prior probabilities of theories. In such cases, inference to the best explanation may be the closest we can get to probabilistic epistemology in practice." Timothy Williamson, "Abductive philosophy," The Philosophical Forum, 47 (2016), 263–80, at p. 267. See also Allen, "The nature of juridical proof." Williamson is suggesting in this passage that, in many instances (juridical proof is an example), virtually all of the work is being done by explanatory rather than probabilistic reasoning.

Somewhat ironically, then, H&C are criticizing legal scholarship for being outdated by reference to somewhat dated formal epistemology that has no capacity, so far as presently can be determined, to explain juridical proof. But their point that legal scholars should attend to recent developments in epistemology is well taken, as we have already seen. There is much recent work that may actually have an important bearing on the nature of juridical proof. One example is epistemic decision theory; see, e.g., Jason Konek and Benjamin A. Levinstein, "The foundations of epistemic decision theory," Mind, 128 (2019), 69–107. Another is Sarah Moss's Probabilistic Knowledge (Oxford: Oxford University Press, 2018), which develops a new understanding of knowledge, forgoing the requirements of certainty latent in such concepts as "true" and replacing them with a probabilistic understanding. Another is John Norton's forthcoming book, The Material Theory of Induction, <https://www.pitt.edu/~jdnorton/homepage/cv.html#material_theory>. Norton's book is rich and complicated, but will not give solace to probabilists. As he says, "Where Bayesians err is in their belief that probabilistic methods are a universal default that can be applied everywhere, automatically. Instead, my view is that probabilistic methods can be applied only in some domain when the background facts of that domain authorize it. We cannot just assume that they apply in some domain. We have a positive obligation to show that they are warranted by background facts in each case." I and many of my colleagues toiling in the legal field would be greatly aided by careful applications of these and other developments in epistemology to juridical proof, especially if those applications were informed by sophisticated understandings of the legal context and literature.

H&C proclaim that "Probabilism Survives." Hedden and Colyvan, "Legal probabilism," p. 466. Regarding the legal system, indeed it does—as a set of platitudes with no explanatory power beyond the obvious that error reduction and allocation are important concerns of the legal system. Probabilism offers little insight into how those concerns are operationalized. H&C deploy probability theory as little more than putting labels with numbers in them on decisions reached for explanatory reasons. Rather than defending the continuing viability of "probabilism," H&C provide more evidence of its having been absorbed by explanationism.
PY - 2020/3/1
Y1 - 2020/3/1
UR - http://www.scopus.com/inward/record.url?scp=85076321715&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85076321715&partnerID=8YFLogxK
U2 - 10.1111/jopp.12210
DO - 10.1111/jopp.12210
M3 - Article
AN - SCOPUS:85076321715
SN - 0963-8016
VL - 28
SP - 117
EP - 128
JO - Journal of Political Philosophy
JF - Journal of Political Philosophy
IS - 1
ER -