The Evidence of Evidence-Based Medicine


In recent years the claim that only 20% or less of standard Western medicine is evidence-based has been repeated widely by health professionals and others.1 This assertion is perhaps most often made by proponents of unproven (“alternative” and “complementary”) therapies with the implication that, if true, it might somehow justify the integration of any number of unconventional modalities with a similar dearth of supporting scientific evidence into mainstream medical practice. It should be immediately noted that this line of reasoning is an example of the logical fallacy tu quoque (“you did it too”): One party cannot criticize another because both parties are guilty of the same “sin.” While this argument may be without merit, it is often made and widely held to be valid. Therefore, the authors of this paper have attempted to identify the sources of, and examine the evidence for, the “20% or less” claim. They also document the investigations into real-world use of methods established through clinical trials and “evidence-based” medicine (EBM), and find that such investigations establish the evidence basis for methods in use in modern medicine.


Laments about the state of conventional medicine are nothing new. In 1861, Oliver Wendell Holmes wrote: “I firmly believe that if the whole materia medica as used now, could be sunk to the bottom of the sea, it would be all the better for mankind—and all the worse for the fishes.”2 The claim that “it has been estimated that only 10 to 20% of all procedures currently used in medical practice have been shown to be efficacious by controlled trial” first appeared in print in a 1978 document published by the U.S. Congressional Office of Technology Assessment (OTA)3 and was repeated in 1983.4 The claim stems from the comments of OTA advisory panel member and noted epidemiologist Kerr White. Dr. White based his informal “10 to 20%” estimate on a 1963 paper that reported on two 2-week surveys of the prescribing practices of 19 family doctors in a northern British town (one conducted in December 1960, the other in March 1961).5 Interestingly, the paper was never intended to evaluate the science of medical practice; rather, its purpose was to look toward controlling prescribing costs in terms of standard (i.e., “generic”) versus “proprietary” drugs. The “intent” of each prescription was analyzed according to how specific it was for the condition: intent was “specific” for the condition for which the drug was prescribed only about 10% of the time; “probable” in about 22%; “possible” in 26%; “hopeful” in 28%; “placebo” in 10%; and “not stated” in 3.6%. From these data White estimated that “specific measures” accounted for 10 to 20% of the benefits of patient care; that combined placebo and other nonspecific effects accounted for another 20 to 40%; and that the rest (which he referred to as a “mystery”) accounted for 40 to 70%.6 In 1995, Dr. White stated:

Some 20 years ago, as a member of the original Health Advisory Panel to the U.S. Congressional Office of Technology Assessment I ventured the 10–20% figure again and invited anyone to provide more timely data. No one could. The figure was immortalized in OTA circles and publications for almost a decade. In countless addresses and conferences I often challenged others to provide better evidence but none was forthcoming. So the northern industrial town “arm-chair” assessment persisted.3

Little about these surveys was relevant to medical practice across the board when they were first published nearly four decades ago, and they are almost certainly even less relevant today. Dr. White himself has noted that his assessments were never intended to be applied generally.7 Nevertheless, even more gloomy pronouncements as to the evidential basis for medical practice have subsequently turned up in the medical literature.8,9 In 1991, Dr. David Eddy, at a conference in Manchester, England, claimed that only 15% of medical practice was based on any evidence at all. He apparently based this sweeping conclusion entirely on his studies of treatments for just two specific conditions: arterial blockage in the legs and glaucoma.10 Subsequently, Dr. Eddy’s claim, rather than the much more conservative OTA “armchair estimate,” has been widely cited as a criticism of mainstream medicine.


Regardless of the origin or intent of the original assessments, critics of the “10 to 20%” claims were originally unable to refute them because no solid evidence existed either for or against them. That situation has changed in recent years: a growing body of evidence now exists regarding the extent to which medical practice is evidence-based. Still, to respond fully to either claim, one must ask, “What constitutes acceptable scientific evidence of efficacy, and how might one establish the relative ‘weight’ to be ascribed to different types of evidence?” Various rating systems have been devised, some describing levels of evidence ranging from I to V, with evidence from randomized, controlled trials (RCTs) generally given a rating of level I and the lowest grade generally assigned to interventions performed without substantial evidence. Evidence below level I that is nonetheless considered compelling includes evidence from prospective and/or comparative studies and from follow-up studies and/or retrospective case series.11 One category of evidence that appears to be unique to science-based medicine, and an occasional target of critics of evidence-based practice,12 is that of so-called self-evident interventions. These are treatments for which no compelling evidence from RCTs exists but which are nonetheless counted as evidence-based in discussions of the extent of evidence-based practice. Examples include blood transfusions, restarting the stopped heart of a heart-attack victim, antibiotics for meningitis, and a tourniquet for a gushing wound. Such interventions would not require RCTs to demonstrate proof of efficacy; indeed, such trials would most likely be considered unethical. There appear to be no comparable situations of obvious necessity and benefit among the interventions of “alternative” medicine.
Consequently, it would appear that “compelling evidence” may occasionally be obtained from uncontrolled case series in science-based medicine, but probably not in “alternative” medicine. Evaluations of published studies suggest that Dr. White’s and the OTA’s figures substantially underestimate the extent to which clinical decisions are or could be made on the basis of evidence from randomized trials alone, and that Dr. Eddy’s pronouncements wildly underestimate the extent to which standard medical practice is based on any evidence at all. Contrary to the claims, evidence-based practice appears to be prevalent, and it appears to be widely distributed geographically. The evidence for evidence-based practice includes the findings listed below.

  • 96.7% of anesthetic interventions (32% by RCT, UK)13
  • approximately 77% of dermatologic out-patient therapy (38% by RCT, Denmark)14
  • 64.8% of “major therapeutic interventions” in an internal medicine clinic (57% by RCT, Canada)15
  • 95% of surgical interventions in one practice (24% by RCT, UK)16
  • 77% of pediatric surgical interventions (11% by RCT, UK)17
  • >65% of psychiatric interventions (65% by RCT, UK)18
  • 81% of interventions in general practice (25.5% by RCT, UK)19
  • 82% of general medical interventions (53% by RCT, UK)20
  • 55% of general practice interventions (38% by RCT, Spain)21
  • 78% of laparoscopic procedures (50% by RCT, France)22
  • 45% of primary hematology-oncology interventions (24% by RCT, US)23
  • 84% of internal medicine interventions (50% by RCT, Sweden)24
  • 97% of pediatric surgical interventions (26% by RCT, UK)11
  • 70% of primary therapeutic decisions in a clinical hematology practice (22% by RCT, UK)25
  • 72.5% of interventions in a community pediatric practice (39.9% by RCT, UK)26

Thus, published results show that an average of 37.02% of interventions are supported by RCTs (median = 38%), and that an average of 76% of interventions are supported by some form of compelling evidence (median = 78%). There appear to be some areas of medical practice where interventions are less frequently based on level I evidence than others. Published surveys of ENT surgery,27 burn therapy,28 retinal breaks and lattice degeneration,29 and pediatric surgery30 have concluded that there is not a strong foundation of evidence obtained from RCTs on which to base practice in these areas. However, in the studies of burn therapy and pediatric surgery, it was noted that the number of RCTs has grown dramatically in the past decade. This suggests that those practicing in these fields are aware of the need to generate unbiased data in support of clinical practice and that they support the effort to develop effective practice guidelines. Calls for the evidence-based practice of “complementary” medicine have also been issued,31 and established scientific methodologies have been deemed “quite satisfactory” for addressing the majority of study questions related to “alternative” medicine by the United States Office of Alternative Medicine.32
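The summary figures above are simple unweighted statistics over the fifteen surveys listed. As a quick arithmetic check, they can be recomputed from the figures as printed; the sketch below assumes the “>65%” psychiatric figure is taken at its floor of 65%, which is one plausible reading:

```python
from statistics import mean, median

# (overall % evidence-based, % supported by RCT) for the 15 surveys
# listed above; the ">65%" psychiatric entry is taken as 65 (assumption).
studies = [
    (96.7, 32), (77, 38), (64.8, 57), (95, 24), (77, 11),
    (65, 65), (81, 25.5), (82, 53), (55, 38), (78, 50),
    (45, 24), (84, 50), (97, 26), (70, 22), (72.5, 39.9),
]
overall = [o for o, _ in studies]
rct = [r for _, r in studies]

print(f"RCT support: mean {mean(rct):.2f}%, median {median(rct)}%")
print(f"Any compelling evidence: mean {mean(overall):.1f}%, median {median(overall)}%")
```

Run as written, this reproduces the reported means; small differences in the medians can arise from how the “>65%” entry is handled.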


Basing medical practice on the best available scientific evidence does have its critics. Some, for instance, assert that this philosophy of practice has major limitations when considering the care of individual patients. Others have argued that “science” and “objectivity” are themselves merely arbitrary “social constructs,” and therefore anecdote, testimony, and clinical (personal) experience should be afforded equal weight to ostensibly more objective scientific lines of evidence. Still other critics of EBM note that the data available under its framework may not apply to many treatments offered to patients in clinical practice or to subgroups of various diseases, nor may they be applicable to various types of prophylactic interventions, diagnostic decisions, or psychosocial factors.33 Notwithstanding such criticisms or claims regarding the prevalence of evidence-based medical practice, health professionals must address the essential question: “Does providing evidence-based care improve outcomes for patients?” Unfortunately, no pertinent data are currently available from randomized, controlled trials, most likely because no investigative team or research granting agency has yet overcome the problems of sample size, contamination, blinding, and long-term follow-up that such trials would entail. Moreover, such trials pose serious ethical questions and concerns: For instance, would it be ethical to withhold evidence-based treatment from the control arm? On the other hand, “outcomes research” has documented that patients who receive evidence-based therapies often have better outcomes than those who do not. For example, myocardial infarction survivors prescribed aspirin34 or beta-blockers35 have lower mortality rates than those who are not prescribed those drugs.
Where clinicians make greater use of warfarin and stroke-unit referrals, stroke mortality declines by more than 20%.36 For a negative example, patients who undergo carotid surgery despite failing to meet evidence-based operative criteria are more than 3 times as likely as operated patients who do meet those criteria to suffer major stroke or death in the following month.37


Dr. White has stated that the “10 to 20%” figure was used heuristically to stimulate the search for more accurate information. To some extent, he has succeeded in attaining that goal. However, Dr. White also notes that he had no control over the fact that the OTA used his “arm-chair estimate” in its final report, and that neither he nor the OTA can be blamed for the abuse of the statement. Clearly the intent of the OTA report was to strengthen the scientific basis for medical care, not to promote an “open door policy” for unproven alternative and complementary therapies.38 In 1995, Dr. White stated that he suspected the proportion of interventions based on evidence was higher than 20%.3 Even if Dr. Eddy’s estimates were accurate with regard to the 2 conditions he studied a decade ago, they appear to be clearly inapplicable to many conditions and therapeutic interventions that have been evaluated more recently. Whatever the merits or faults of evidence-based medicine, a growing body of evidence demonstrates that the practice is widespread and becoming more so. More importantly, there is emerging evidence that, when EBM is practiced, patients benefit. Clearly, demanding rigorous evidence in evaluating the effectiveness of medical interventions is a good thing. One may quibble with bits of evidence provided in individual studies: For example, the figures cited above are lower when only the results of RCTs are considered as “evidence,” although they are still higher than the “10 to 20%” figure. In any case, while the evidence for evidence-based medicine may be held, for good reason, to exclude anecdotal and subjective personal experience, it is not restricted to randomized trials and meta-analyses. Rather, it involves tracking down the best objective evidence in order to answer clinically relevant questions.39 Claims that conventional medicine is not widely based on evidence should be rejected, as should logically fallacious arguments based on such claims. 
The evidence fails to support them.


Reprinted with permission from Complementary Therapies in Medicine. 2000; 8; 123–126. © 2000 Harcourt Publishers Ltd.


  1. Heptonstall J. Traditional Chinese medical science. eBMJ. November 11, 1999. Available at:
  2. Holmes OW. In: Strauss MB, ed. Familiar Medical Quotations. Boston, Mass: Little, Brown; 1968: 124.
  3. Congress of the United States, Office of Technology Assessment. Assessing the Efficacy and Safety of Medical Technologies. Washington, DC: US Government Printing Office; 1978.
  4. Congress of the United States, Office of Technology Assessment. The Impact of Randomized Clinical Trials on Health Care Policy and Medical Practice. Washington, DC: US Government Printing Office; 1983.
  5. Forsyth G. An inquiry into the drug bill. Med Care. 1963; 1: 10–16.
  6. White KL. Evidence-based medicine [letter]. Lancet. 1995; 346(8978): 837–838.
  7. Ontario College of Physicians and Surgeons. Report of the Ad Hoc Committee on “Alternative” Medicine. September 1997.
  8. Smith R. Where is the wisdom . . . the poverty of medical evidence. BMJ. 1991; 303: 798–799.
  9. Smith R. The ethics of ignorance. J Med Ethics. 1992; 18: 117–118.
  10. Dubinsky M, Ferguson JH. Analysis of the National Institutes of Health Medicare Coverage Assessment. Int J Technol Assess Health Care. 1990; 6: 480–488.
  11. Baraldini V, Spitz L, Pierro A. Evidence-based operations in paediatric surgery. Pediatr Surg Int. 1998; 13(5–6): 331–335.
  12. Churchill W. Clarifications. eBMJ. November 20, 1999. Available at:
  13. Myles PS, Bain DL, Johnson F, et al. Is anaesthesia evidence-based? A survey of anaesthetic practice. Br J Anaesth. 1999; 82(4): 591–595.
  14. Jemec GB, Thorsteinsdottir H, Wulf HC. Evidence-based dermatologic out-patient treatment. Int J Dermatol. 1998; 37(11): 850–854.
  15. Michaud G, McGowan JL, van der Jagt R, Wells G, Tugwell P. Are therapeutic decisions supported by evidence from health care research? Arch Intern Med. 1998; 158(15): 1665–1668.
  16. Howes N, Chagla L, Thorpe M, et al. Surgical practice is evidence based. Br J Surg. 1997; 84(9): 1220–1223.
  17. Kenney SE, Shankar KR, Rintala KR, et al. Evidence-based surgery: interventions in a regional paediatric surgical unit. Arch Dis Child. 1997; 76(1): 50–53.
  18. Geddes JR, Game D, Jenkins NE, et al. What proportion of primary psychiatric interventions are based on evidence from randomised controlled trials? Qual Health Care. 1996; 5(4): 215–217.
  19. Gill P, Dowell AC, Neal RD, et al. Evidence based general practice: a retrospective study of interventions in one training practice. BMJ. 1996; 312(7034): 819–821.
  20. Ellis J, Mulligan I, Rowe J, et al. Inpatient general medicine is evidence based. A-Team, Nuffield Department of Clinical Medicine. Lancet. 1995; 346(8972): 407–410.
  21. Suarez-Varela MM, Llopis-Gonzalez A, Bell J, et al. Evidence based general practice. Eur J Epidemiol. 1999; 15(9): 815–819.
  22. Slim K, Lescure G, Voitellier M, et al. [Is laparoscopic surgical practice “factual” (evidence based)? Results of a prospective regional survey]. Presse Med. 1998; 27(36): 1829–1833.
  23. Djulbegovic B, Loughran TP Jr, Hornung CA, et al. The quality of medical evidence in hematology-oncology. Am J Med. 1999; 106(2): 198–205.
  24. Nordin-Johansson A, Asplund K. Randomized controlled trials and consensus as a basis for interventions in internal medicine. J Intern Med. 2000; 247(1): 94–104.
  25. Galloway M, Baird G, Lennard A. Haematologists in district general hospitals practise evidence based medicine. Clin Lab Haematol. 1997; 19(4): 243–248.
  26. Rudolf MC, Lyth N, Bundle A, et al. A search for the evidence supporting community paediatric practice. Arch Dis Child. 1999; 80(3): 257–261.
  27. Maran HG, Malony NC, Armstrong MW, et al. Is there an evidence base for practice in ENT surgery? Clin Otolaryngol. 1997; 22(2): 152–157.
  28. Childs C. Is there an evidence-based practice for burns? Burns. 1998; 24(1): 29–33.
  29. Wilkinson CP. Evidence-based analysis of prophylactic treatment of asymptomatic retinal breaks and lattice degeneration. Ophthalmology. 2000; 107(1): 12–15.
  30. Hardin WD Jr, Stylianos S, Lally KP. Evidence-based practice in pediatric surgery. J Pediatr Surg. 1999; 34(5): 908–912.
  31. Lewith GT, Ernst E, Mills S, et al. Complementary medicine must be research led and evidence based. BMJ. 2000; 320(7228): 188.
  32. Levin JS, Glass TA, Kushi LH, et al. Quantitative methods in research on complementary and alternative medicine. A methodological manifesto. NIH Office of Alternative Medicine. Med Care. 1997; 35(11): 1079–1094.
  33. Feinstein AR, Horwitz RI. Problems in the “evidence” of “evidence-based medicine.” Am J Med. 1997; 103(6): 529–535.
  34. Krumholz HM, Radford MJ, Ellerbeck EF, et al. Aspirin for secondary prevention after acute myocardial infarction in the elderly: prescribed use and outcomes. Ann Intern Med. 1996; 124: 292–298.
  35. Krumholz HM, Radford MJ, Wang Y, et al. National use and effectiveness of beta-blockers for the treatment of elderly patients after acute myocardial infarction. National Cooperative Cardiovascular Project. JAMA. 1998; 280: 623–629.
  36. Mitchell JG, Ballard DJ, Whisnant JP, et al. What role do neurologists play in determining the costs and outcomes of stroke patients? Stroke. 1996; 27: 1937–1943.
  37. Wong JH, Findlay JM, Suarez-Almazor ME. Regional performance of carotid endarterectomy appropriateness, outcomes and risk factors for complications. Stroke. 1997; 28: 891–898.
  38. Blame for abuse of OTA reports of dubious estimate: “Only 10–20% of medical procedures are proved.” NCAHF Newsletter. 1996; 19(5).
  39. Sackett DL, Rosenberg WMC, Muir Gray JA, et al. Evidence based medicine: what it is and what it isn’t. BMJ. 1996; 312: 71–72.