Acupuncture is no different from placebo. The refrain is repeated over and over. For me, the question begins with: How do you know you achieved placebo? And were the results such that they could lower the cost of care?
Here is an example of policy development proceeding even when studies showed no difference from placebo, primarily because of a demonstrated lower cost of care. In October 2000, the German Federal Committee of Physicians and Health Insurers recommended that special model projects be developed in order to determine the evidence-based role of acupuncture.1-6 After the analysis, the German health authorities decided to reimburse for acupuncture treatment of chronic low back pain and chronic osteoarthritis of the knee. Given this success, I am concerned about the way the acupuncture and Chinese medical community uses science to create value.
Clearly, medical decisions rely upon evidence, and from this point of view the evidence-based medicine (EBM) movement makes a good deal of sense. The problem is that EBM values certain forms of knowledge over others, with findings from well-designed, randomized, controlled trials rated at the top of the evidence pyramid and qualitative, narrative and case-based methods at the bottom. Understandably, unsystematic clinical observations are also placed at the bottom. This ranking, however, has significant implications for practice models that rely on those forms of knowledge.
From the quantitative worldview of EBM, researchers define validity in terms of the ability of experimenters to achieve repeated results, a definition that deals only with general regularities.7 This focus on repeatable findings leads to a hierarchy in the value of evidence used for decision-making. Consider this set of criteria from Guide to Clinical Preventive Services: An Assessment of the Effectiveness of 169 Interventions:
Level I: Evidence obtained from at least one properly designed randomized controlled trial.
Level II-1: Evidence obtained from well-designed controlled trials without randomization.
Level II-2: Evidence obtained from well-designed cohort or case-control analytic studies, preferably from more than one center or research group.
Level II-3: Evidence obtained from multiple time series with or without the intervention. Dramatic results in uncontrolled trials might also be regarded as this type of evidence.
Level III: Opinions of respected authorities based on clinical experience, descriptive studies or reports of expert committees.
Missing from this list are qualitative methods such as cases, case series, medical anthropology, narrative methods, phenomenology, ethnography, participatory research models and anecdotal reports, as well as a practitioner's personal experiences. While EBM has gained a position in the construction of best practices for accreditors, certification agencies, and medical schools, it remains controversial. Five types of arguments have been made against the EBM movement: it lacks a philosophical basis; the definition of evidence is too narrow; the movement itself is not evidence-based; it has limited usefulness in its application to individual patients; and it threatens the autonomy of the patient-physician relationship.8 By ignoring many factors that physicians actually use in practice, EBM as a form of knowledge building leads to an incomplete (and possibly inaccurate) model of medical epistemology (the study of how we know what we know).9 With appropriate controls and procedures in place, anything outside the boundaries set by the experimenter is of no consequence to the study of the system in question. This attempt to limit variables loses contact with the complexity and whole-systems view that forms the basis of Chinese medical practices.
While science is typically considered objective, it is also subjective in many ways, because quantitative scientific approaches involve subjective decisions regarding the subject, study design, statistical models and data interpretation. Hence, the scientific endeavor is actually embedded in subjectivity, even though it is often thought of and portrayed as being free of it. Further enhancing this perception of objectivity, scientists often report in the third person and present and analyze data against the null hypothesis, which assumes that any observed difference in a set of data is due to chance. It can be difficult for a scientist invested in a particular subject to present a result opposite to the one they are attempting to prove. Given that the null hypothesis is essential to the application of statistics, I would prefer that the researcher's biases, cultural background and other situational influences also be revealed, in order to expose potential conflicts of interest.
The creation of knowledge arises from very specific sources of authority. The assumptions underlying our civilization have been shaped by dominant cultures whose influence traces back thousands of years, and all of these assumptions affect our ability to navigate within, interact with, observe, and inquire into the world today.
Indigenous modes of knowing often clash with modernism and the paradigmatic biases of the dominant culture. The questions about research conducted with indigenous populations include: Who does the research belong to? Who benefits? Who has designed and framed the research? Who compiles the results? Where and how are the results disseminated? When these questions go unasked, the validity of the research itself comes into question. Further questions can be leveled at the researcher in the indigenous domain: Is their spirit clear? Does this person have a good heart? What other issues do they bring? Can they be of assistance to us?10
The creation of knowledge conforms to privilege, since it is the privileged who conduct studies by and for those in positions of power. The situational influence of gender, class, profession or nation cannot be overstated. Such inquiries are often designed in ways that preserve these social structures. The entrenched social and academic structures that define acceptable pathways of knowledge acquisition limit the influence of those outside these systems. The effect is that "suppressed voices" find it difficult, if not impossible, to validate their knowledge and incorporate it into the system.
If we want different results, we need to find ways to liberate ourselves from the constraint of defining the randomized controlled trial as the gold standard for clinical knowledge acquisition. Only then can we build an equitable knowledge base.