When it comes to health care, there is an oft-sung mantra: “We have received little return from the investment of health care dollars!” Nor, the chorus fervently adds, have we cured cancer or heart disease.
One of the most recent singers of this old song is David Brooks, who breathlessly extolled a tender essay by a dying man weighing the end of his life on the scale of human and economic costs.
Then there’s Daniel Callahan, a proponent of the view that “we have spent too much for so little.” Callahan has argued that it is against the natural order of things to extend marginal lives in pallid substitution for actually finding cures.
These new-age philosophers’ aggregate view is that we should suck it up and die when we are still fiscally neutral — for the sake of the economy and nature. This colloquy is a good example of observations being put in service of surprisingly shallow conclusions.
But to understand the real reason why we have not achieved our health care ambitions, we should ask what many editorials scrupulously avoid: why was American medicine so good for so long? For much of the 20th century, remarkable medical progress was the byproduct of a deliberate alignment between patient care and the conduct of research. This linkage between clinical observation and discovery is largely gone, replaced by a flawed assumption that basic science experiments, conceived far from the bedside, can be retrofitted into practice and improve disease outcomes.
If we once had it right, where did we go wrong?
A little history is helpful. The Flexner Report, written in 1910, decried the state of medical practice as an often casual trade. In response, medical schools connected, for the first time, the methods of science to physician behavior. Doctors became both observers and thinkers, with progressively refined skills in their natural biologic laboratory — i.e., the patient. Medical training rewarded reflection, and “Why is what I am seeing happening?” became the essential question. Critical thinking by doctors was highly valued, and “discovery” followed the framing of a scientific question often based on a bedside observation.
These clinician-scientists used tools vastly different from what we find in the laboratories of today. Physiology (the study of how living systems function) and pharmacology (the study of drug action) were the available disciplines used to simulate human disease in the laboratory animal. It was a closed loop, with the doctor, acting as clinician, asking the question and the same doctor, as scientist, answering it.
Yet, over time, there was an increasing recognition that root causes resided at a cellular or molecular level. Clinical observation, in and of itself, lost importance, and the basis of discovery shifted from the visible to the invisible. The arena for questioning also moved from the medical school to the graduate school, which trained Ph.D.s as “pure” scientists. The M.D., with rare exception, became the manager of disease, and the Ph.D. the student of its causes.
We have created two very different and largely incongruent systems: medical care, a service industry, and the medical sciences, which focus on and reward laboratory experiments that may or may not translate easily into improved patient care.
To be sure, the service industry does brilliantly when facing human engineering problems — basically, repairing things that are broken. Such technical solutions do not depend on absolute understanding, since you can stent a coronary artery without knowing every gene related to heart disease. But when the scientists and the managers are separated to such a degree as they are today, medical progress on the whole must necessarily suffer.
The general public’s wish, per David Brooks, and one that borders on demand, is for our most frightening diseases to be defeated. Short of that, the returns from our growing investment seem meager. In truth, they are very substantial, but we will not be in a position to cure until the two vital parts of medicine — i.e., patient care and scientific inquiry — are better reconciled. This does not mean making every doctor a scientist, but it does mean seeking ways to ensure that clinical medicine and basic science are transparent to each other, share some mutual goals, and shape each other’s interests.
As Mr. Brooks’ article shows, many have conflated the process of delivering health services with the likelihood of discovering something important about disease. Our intense focus on the delivery system has again moved medicine closer to being a trade. Striving for the lowest price hardly encourages innovative ideas. Diseases do not care what politics are in fashion and do not bend to any public policy.
If we really want progress and not simply cost control, we have to recognize that clinical medicine is not just “service.” On the contrary, when in concert with basic scientists, clinical medicine provides the best opportunity for making advances towards the cure or prevention of disease.
~ The Author ~
Joel Levine is a professor of medicine and the founding director of the Colon Cancer Prevention Program at the University of Connecticut Health Center.
Written by Joel Levine, MD, and published on AmericanThinker.com, September 20, 2011.