The new IOM report, Breast Cancer and the Environment: A Life Course Approach, again emphasizes the difference between how scientific panels go about making causal inferences and the approach too often approved by credulous judges who are insecure about their own ability to think critically and mesmerized by the jargon-laden pronouncements of credentialed experts. Beginning on page 82 under "Hierarchy of Studies," and continuing in "Categories of Evidence," the report does a great job of detailing what counts as evidence and the methods and criteria used by organizations like the International Agency for Research on Cancer, the National Toxicology Program, and the World Cancer Research Fund / American Institute for Cancer Research in collecting, assessing, and weighing evidence when making causal judgments. The report even includes a helpful summary of the classification systems (see Appendix C, "Classification Systems Used in Evidence Reviews," at page 312).
Here are a couple of takeaways:
(1) "The criteria aim to be explicit about the weight, or relative importance, given to studies in humans and in animals or other experimental systems"; and
(2) "Strong and consistent positive epidemiologic evidence in rigorously conducted studies is prima facie evidence that the substance is a risk factor."

You will quickly note, upon reviewing the summary of these systems of causal inference, that none supports anything like the notion embraced by the court in Milward v. Acuity: that an expert weighing a subset of the data (each piece of which is weak, irrelevant, or inconsistent) on the scales of his personal scientific judgment can, by "reasoning to the best explanation," reliably reach a causal inference, especially in the complete absence of any epidemiological evidence to support it. Indeed, the "atomization" of evidence decried by the Milward court, and by those in the "public health movement" who promote mass tort litigation, is exactly what IARC, IOM, NTP, EPA, and WCRF/AICR do: they assess each piece of evidence, they do it transparently, and they do it according to rules laid down before they even go looking for the evidence; then they weigh what's left, again according to weighting systems that are explicit, consistent, and established before the first piece of evidence is examined.
The idea that knowledge comes from scientists taking a "holistic approach to the data" and applying their personal judgment to it is, to be blunt, hooey. That may be a way to arrive at a testable conjecture, but unless the conjecture passes a test of its predictive power (e.g., a rigorous epidemiological study), it remains nothing but a bald personal opinion with no foundation beyond the ipse dixit of the expert who induced it.
David Oliver is managing partner of the Houston office of Vorys, Sater, Seymour and Pease. His practice focuses on civil litigation involving allegations of injuries due to exposure to chemicals or pharmaceuticals; he holds degrees in both chemistry and biology. Read more of David’s work on his blog: Mass Torts: State of the Art. You may contact David through the firm’s website at www.vorys.com.