Thus far, we have examined the relationship between scientific ideology and the neglect of major ethical dimensions of science, largely caused by the component of scientific ideology that declares science to be “value free” and “ethics free”. But, while the explicit denial of values is certainly the most obvious cause of ethical neglect, we should not underestimate the more subtly corrosive influence of the second component of scientific ideology we have delineated: the denial of the reality or knowability of subjective experiences in people and animals.
Obviously, concern about how a person or animal feels—in pain, fearful, threatened, stressed—looms large in the tissue of ethical deliberation. If such feelings and experiences are treated as scientifically unreal, or at least as scientifically unknowable, what we may term a major call to ethical deliberation and ethical thought is thereby eliminated. Insofar as modern science tends to bracket subjectivity as outside its purview, the tendency to ignore ethics is potentiated. For example, in our discussion of animal research we alluded to the absence of pain control in animal research until it was mandated by federal legislation.
While this is certainly a function of science’s failure to recognize ethical questions in science, society in general, except for issues of overt cruelty, also historically neglected ethical questions about animals. Ordinary people, however, were comfortable attributing felt pain to animals (a matter of ordinary common sense) and adjusted their behavior accordingly, even in the absence of an explicit ethic for animals being prevalent in society. Scientists, by contrast, having been trained out of ordinary common sense regarding animal subjective experience, were not moved by what non-scientists saw as plainly a matter of pain. Hence the response of one of my veterinarian colleagues to my concern about howling and whining in his experimental surgery dogs: “Oh, that’s not pain; it’s the after-effects of anesthesia.”
In other words, the denial of the reality (or at least the scientific knowability) of pain in animals provided yet another vector for ignoring ethics, since ethical concern is so closely linked to recognizing mental states. We shall shortly document a surprisingly similar problem in human medicine.
It is certainly the case that modern science (i.e., the science beginning with Galileo, Descartes, Newton, etc.) began with preconceptions uncongenial to taking subjective experience as part of scientific reality. Medieval science, which was at root Aristotelian, set itself the task of explaining the world of sense experience and common sense: a world of qualitative differences to which sense experience provided largely accurate access and which, when it failed, could be corrected by additional sense experience. As I have explained elsewhere, Aristotelian science took the position that the world of ordinary experience was the “real world”; that “what you see is what you get.” Indeed, that in my view is one reason the Aristotelian world-view lasted so long: it was based in, and congenial to, ordinary experience.
This is, of course, not the case with the new science. Everyone who has taken introductory philosophy recalls Descartes’ sustained attack on the senses and on common sense, which was intended to undercut the old world view and prepare people to accept that reality is not as it appears to be, but as reason and mathematical physics tell us it is. Newton completed Descartes’ program, but it retained the same logic.
Thus, as we remarked at the beginning of this book, physical science became the paradigm case for all science, with “objectivity” the primary mantra in all fields. Even the social sciences strived to be “objective.” Subjective experience was strongly disvalued (even though science was said to be based in “experience”). By the 1920s, as I have recounted in detail elsewhere, subjective experience had been banished from legitimate study all across science, with J.B. Watson and Behaviorism finally eliminating it even from psychology: Watson skillfully sold the idea that psychology, in order to achieve parity with the “real” and successful sciences, needed to become the science of rat behavior and learning. Indeed, Watson came perilously close to affirming that “we don’t have thoughts, we only think we do!”
As long as we are discussing the Scientific Revolution, it is important to note, as we briefly mentioned in our first chapter, that the Scientific Revolution itself presents us with a superb example of a change in values. Consider the Aristotelian/common sense world view. Can one imagine a crucial experiment that would falsify the claim that the world is best understood in terms of adherence to sense experience, teleological explanations, and qualitative differences, and prove that such explanations must be abandoned in favor of mechanistic, mathematical, quantitative explanations that ignore qualitative distinctions? Since experiments set up in the Aristotelian paradigm would necessarily be qualitative, and since the paradigm determines what counts as relevant data, how could such an experiment ever disprove the paradigm itself? (The same, of course, holds equally true for overturning an extant mechanistic paradigm in favor of a qualitative one!) Thus the Aristotelian/common sense world view was not falsified or disproved; rather, it was set aside by virtue of the rise of new values that clashed with it: for example, the belief that God is a mathematician, and that under qualitative diversity there must exist quantitative uniformity. The Aristotelian approach was more disapproved than disproved!
As mechanism became the regnant conceptual paradigm for physics, its dominance was gradually replicated in the other sciences—chemistry, biology, geology, etc. This ideal was enunciated in twentieth-century positivism, which affirmed that psychology would be reduced to neurophysiology, neurophysiology to biochemistry, biochemistry to chemistry, and chemistry to physics. For those less radical, the move was nonetheless to eliminate the subjective from science, as Watson did in turning psychology to the study of overt behavior. It is again noteworthy that this transformation was not necessitated by experimental evidence or by logical analysis overturning the coherence of studying subjective states. Indeed, as I have shown elsewhere, the alleged historical inevitability of Behaviorism, as reconstructed in histories of psychology, will not stand up to rational scrutiny. None of the major figures in psychology prior to Behaviorism disavowed consciousness. In fact, Watson “sold” Behaviorism through rhetoric, arguing that only by turning to the examination of overt behavior could psychology become analogous to physics and gain the ability to manipulate behavior socially, eliminating criminality and other socially deviant behavior, which, in the end, is learned behavior.
The same pattern of what we may call “physicalization”—the elimination of the subjective as irrelevant to science—took place in medicine. Particularly with the rise of molecular biology and sophisticated biochemistry, disease was increasingly seen as a defect in the machine, and subjective states as, to use Ryle’s apt phrase, “the ghost in the machine.” Even psychiatry, by the end of the twentieth century, had come to see mental illness not as “mental” or “behavioral”, but as biochemical: insufficiencies or excesses of certain chemicals. The management of such diseases became a matter of balancing an individual’s chemistry, not of analysis or of individual or group therapy.
Traditionally, before the physicalistic turn, medicine was, and aspired to be, a combination of science and art. The science component came, of course, from its attempt to develop generalizable, lawlike knowledge that would remain invariant across space and time. Such knowledge was sought regarding the workings of the body, the nature of disease, and valid therapeutic regimens, though medicine often fell short of the mark in all of these areas. The element of art was patent in medicine. Art deals with the individual, the unique, with the domain of proper names; with the person, not merely the body; with that which does not lend itself to generalization; with the subjective, psychological aspects of a person as well as with the observable. A physician was thus expected to be both lawlike and intuitive, the latter not in any mystical way, but rather in a manner focused on this particular individual and his or her subjectivity and felt experience. And in understanding the individual—by definition unique—all information, be it first-person reports or objective measurements, was relevant.
In some ways, the physicalization of medicine was a boon to sick people—there now existed scientific, evidence-based ways to develop and test drugs and other therapies for safety and efficacy. But in other ways it was a detriment. In the first place, how the patient felt became significantly subordinated to how the patient objectively “was.” Medical success came to be measured by how long the patient lived: alive versus dead, and length of survival, were objective parameters that could be quantified.
Cancer medicine provides an excellent example of this view. Oncology was directed at eliminating the tumor and buying a measurable increment in life span, or time before death. Quality of life, the suffering attendant on chemotherapy or radiation, loss of dignity in the course of treatment, and the psychological and economic toll on the family were not measures that scientific medicine was wont to adopt. “Buying extra time” was the goal. And yet, as numerous authorities have told us, patient concern is primarily about suffering, not about death per se.
In general, people who seek voluntary euthanasia do so because they fear pain, loss of dignity (e.g., of the sort that comes from incontinence), helplessness, dependence, and stress on the family. Obviously, they fear such experiences more than they fear death. Yet scientific medicine does not worry about such “hindrances” to prolonging life. In particular, and crucial to our argument in this chapter, felt pain becomes not fully medically real, since it is not observable, objective, or mechanistically definable. In that regard, I vividly recall what one nursing dean told me: “The difference between nurses and doctors is that we worry about care, they worry about cure.” In turn, recall that the institution that has most concerned itself with, and done the most for, the terminally ill is hospice, and hospice was founded and is dominated by nurses, not by physicians!
In 1973, psychiatrists R.M. Marks and E.J. Sachar published a seminal article on pain control in which they demonstrated that almost three out of four cancer patients studied in two major New York hospitals suffered (unnecessary) moderate to severe pain because of undermedication with readily available narcotic analgesics. The authors were psychiatrists brought in to consult on patients putatively having a marked emotional reaction to their disease. On examination, they determined that the problem was not psychiatric at all: it was undertreatment of pain that was producing the emotional responses! Though this article received a great deal of attention, the disgraceful state of affairs persisted, as confirmed by other studies and by an extraordinary editorial in Pain fourteen years later by John Liebeskind and Ronald Melzack—two of the world’s most eminent pain researchers:
We are appalled by the needless pain that plagues the people of the world—in rich and poor nations alike. By any reasonable code, freedom from pain should be a basic human right, limited only by our knowledge to achieve it.
Cancer pain can be virtually abolished in 80-90% of patients by the intelligent use of drugs, yet millions of people suffer daily from cancer pain without receiving adequate treatment. We have the techniques to alleviate many acute and chronic pain conditions, including severe burn pain, labor pain, and postsurgical pain, as well as pains of myofascial and neuropathic origin; but worldwide, these pains are often mismanaged or ignored.
We are appalled, too, by the fact that pain is most poorly managed in those most defenseless against it—the young and the elderly. Children often receive little or no treatment, even for extremely severe pain, because of the myth that they are less sensitive to pain than adults and more easily addicted to pain medication. Pain in the elderly is often dismissed as something to be expected and hence tolerated.
All this needless pain and suffering impoverishes the quality of life of those afflicted and their families; it may even shorten life by impairing recovery from surgery or disease. People suffering severe or unrelenting pain become depressed. They may lose the will to live and fail to take normal health-preserving measures; some commit suicide.
Part of the problem lies with health professionals who fail to administer sufficient doses of opiate drugs for pain of acute or cancerous origin. They may also be unaware of, or unskilled in using, many useful therapies and unable to select the most effective ones for specific pain conditions. Failure to understand the inevitable interplay between psychological and somatic aspects of pain sometimes causes needed treatment to be withheld because the pain is viewed as merely ‘psychological.’ [emphasis mine]

The final line of this editorial eloquently buttresses the account we have given of the capture of medicine by a mechanistic and physicalistic ideology that denies reality to subjective experience. Also highly relevant to our subsequent discussion is the strong claim that pain is most egregiously ignored in the young and the elderly, i.e., those most vulnerable and defenseless, a point we will return to shortly.
The neglect of pain just detailed is further documented in a 1991 paper by Ferrell and Rhiner in the Journal of Clinical Ethics. According to the authors, although pain can be controlled effectively in 90% of cancer patients, it is in fact not controlled in 80% of such patients. A 1999 article in Nursing Standard shows that the problem Marks and Sachar identified persisted to the end of the century; the author affirms that “more recent studies have shown that there has been little improvement over the years.” Supporting the points we made earlier, the author is a nurse, not a doctor.
Since pain was seen as medically unreal and subjective, the control of pain was historically determined by strange ideological dicta, even in the nineteenth century after the discovery of anesthesia. Historian Martin Pernick has shown this point eloquently by comparing hospital records of anesthetic use with ideological pronouncements, and finding very high correlations. For example, although affluent white women were generally the class receiving the most anesthesia for most medical procedures, this was not true for childbirth, because it was believed that childbirth pain was Divine punishment for Eve’s transgression, and also that women would not bond with the child unless they felt pain. Farmers, sailors, and other members of “macho” professions received very little anesthesia, as did foreigners. Black women, even when being used for painful experiments, received no anesthesia at all. Limb amputation was classified as “minor surgery.” Children received more pain control because of their “innocence” (a pattern which, as we shall see, has been reversed under current ideology). Worries were even expressed that anesthesia gave the doctor too much (sexual) control over the patient. The key point is that pain control even then was more a valuational and ideological decision than a strictly scientific medical one.
Thus, even with regard to anesthesia, precedent existed for odd, arbitrary, ideologically dictated use. Inevitably, given the tendency to see felt pain as scientifically less than real, and in any case unverifiable, and given further the ethics-free ideology we have discussed, a morally based dispensation of pain control was unlikely to be regnant. Indeed, we have already quoted Liebeskind and Melzack on the tendency for pain management to be minimal in the “defenseless.”
There is ample evidence for this claim. In 1994, a paper appeared in the New England Journal of Medicine demonstrating that infants and children (who are of course powerless or defenseless in the above sense) receive less analgesia than do adults undergoing the same procedures. But there is a far more egregious example of the same point in the surgical treatment of neonates—an example that never fails to elicit gasps of horror from audiences when I recount it. This was the practice, which continued until the late 1980s, of performing open-heart surgery on neonates without anesthesia, using only “muscle relaxants” or paralytic drugs such as succinylcholine and pancuronium bromide. Postsurgically, no analgesia was given.
Let us pause for a moment to explain some key concepts. Anesthesia means, literally, “without feeling.” We are all familiar with general anesthetics, which put a patient to sleep for surgery; with general anesthesia, a person should feel no pain during a procedure. Similarly, local anesthetics, such as novocaine for a dental procedure, remove the feeling of pain from a particular area while procedures such as filling a tooth or sewing up a cut are performed; the patient is conscious but does not feel pain. Though there are many qualifications to this rough-and-ready definition, they do not interfere with our point here.
Muscle relaxants or paralytics block the transmission of nerve impulses across synapses and thus produce flaccid paralysis, but not anesthesia. In other words, one can feel pain fully but cannot move, which may indeed make pain worse, since pain is augmented by fear. First-person reports by knowledgeable physician-researchers who took paralytic drugs (which paralyze the respiratory muscles, so that the patient is incapable of breathing on his or her own) recount the terrifying nature of the use of paralytics in conscious humans aware of what is happening. Analgesics are drugs that attenuate pain or raise patients’ ability to tolerate pain; examples are aspirin and Tylenol for headaches, as well as morphine, Demerol, and Vicodin. Thus babies were receiving major open-heart surgery using only paralytic drugs, and were experiencing countless procedures ranging from circumcision and venipuncture to frequent heel-sticks with no drugs for pain alleviation at all—neither anesthetics nor analgesics.
The public became informed about the open-heart surgery in 1985, when a parent whose own child had died undergoing such surgery complained to the medical community, was essentially ignored, and went public, supported by some operating-room nurses who felt strongly that the babies experienced pain. The resulting public outcry caused the medical community to reexamine the practice and eventually to abolish it.
The reasons anesthesia was withheld from neonates were multiple and familiarly ideological. First, the medical community believed that pain is “subjective” and thus not medically real. Second, since babies do not remember pain, pain was held not to matter. Third, it was argued and widely accepted that the neonatal cortex and other parts of the nervous system were insufficiently developed to experience pain; for example, it was said that babies’ nerves were insufficiently myelinated for them to feel pain. Fourth, since all anesthesia is selective poisoning, it was argued that anesthesia was dangerous. Many of the claims on which these objections to anesthesia were based were deftly handled in a classic paper by Anand and Hickey entitled “Pain and Its Effects in the Human Neonate and Fetus.”
To the first claim, that pain is (merely) subjective, the reply is simple: first, that is equally true for adults; second, what is subjective is very real to the experiencer. (The essence of pain is that it hurts.) To the claim that forgotten pain doesn’t matter, the simple response is that, once experienced, pain is biologically active even if forgotten: it retards healing and is immunosuppressive. To this day, painful procedures like bronchoscopy and colonoscopy are done under amnesic drugs in adults, who may feel much pain during the procedure but do not remember it because of the drug; failure to remember does not justify the infliction of pain. Furthermore, babies give evidence of memory when brought back to rooms in which they underwent surgery.
Third, Anand and Hickey convincingly debunk the claim that neonates—and even preterm babies—do not feel pain. There are convincing physiological arguments that both myelination and cortical development in neonates suffice to attribute pain to infants. Behavioral changes also buttress this point.
Fourth, all anesthesia is dangerous, particularly when administered to sick people! The key point is that adequate anesthesia regimens exist to tilt the cost-benefit ratio in favor of using anesthesia. In a later paper (1992), Anand and Hickey showed that neonates given high doses of anesthesia and analgesia for surgery fared better in terms of morbidity and mortality than children treated with light anesthesia. They demonstrated that when infants undergoing open heart surgery were deeply anesthetized and given high doses of opiates for 24 hours postoperatively, they had a significantly better recovery and significantly fewer postoperative deaths than a group receiving a lighter anesthetic regimen (halothane and morphine) followed postoperatively by intermittent morphine and diazepam for analgesia. The group that received deep anesthesia and profound analgesia “had a decreased incidence of sepsis, metabolic acidosis, and disseminated intravascular coagulation and fewer postoperative deaths (none of the 30 given sufentanil versus 4 of 15 given halothane plus morphine).”
The conclusion of Anand and Hickey’s 1987 paper is worth quoting in its entirety:
Numerous lines of evidence suggest that even in the human fetus, pain pathways as well as cortical and subcortical centers necessary for pain perception are well developed late in gestation, and the neurochemical systems now known to be associated with pain transmission and modulation are intact and functional. Physiologic responses to painful stimuli have been well documented in neonates of various gestational ages and are reflected in hormonal, metabolic, and cardiorespiratory changes similar to but greater than those observed in adult subjects. [Emphasis mine] Other responses in newborn infants are suggestive of integrated emotional and behavioral responses to pain and are retained in memory long enough to modify subsequent behavior patterns.
None of the data cited herein tell us whether neonatal nociceptive activity and associated responses are experienced subjectively by the neonate as pain similar to that experienced by older children and adults. However, the evidence does show that marked nociceptive activity clearly constitutes a physiologic and perhaps even a psychological form of stress in premature or full-term neonates. Attenuation of the deleterious effects of pathologic neonatal stress responses by the use of various anesthetic techniques has now been demonstrated….The evidence summarized in this paper provides a physiologic rationale for evaluating the risks of sedation, analgesia, local anesthesia, or general anesthesia during invasive procedures in neonates and young infants. Like persons caring for patients of other ages, those caring for neonates must evaluate the risks and benefits of using analgesic and anesthetic techniques in individual patients. However, in decisions about the use of these techniques, current knowledge suggests that humane considerations should apply as forcefully to the care of neonates and young, nonverbal infants as they do to children and adults in similar painful and stressful situations.

It is interesting to note that, as in the case of pain in animals, the scientific “reappropriation of common sense” about infant pain occurred only at the instigation of, and subsequent to, public moral outrage about standard practice.
In a powerful and sensitive 1994 paper in the New England Journal of Medicine, Walco, Cassidy, and Schechter review some of the major arguments that led to withholding pain control from children and infants, echoing points we have seen made in Anand and Hickey. These include the subjectivity of pain; the belief that children are not reliable reporters of pain; a failure to recognize individual differences in children (despite solid scientific evidence to the contrary); misinformation about the neurologic capacity to feel pain; and the “no memory” argument. Recent evidence indicates that this last point is particularly egregious: not only does unrelieved pain disturb eating, sleeping, and arousal in the neonate, but “infants retain a memory of previous experience, and their response to a subsequent painful experience is altered,” and failure to control pain in infants leads to aberrant nerve growth, causing additional pain later in life.
Walco et al. also raise and refute the claim that opioid analgesics cause respiratory depression or arrest. They point out that “the risk of narcotic induced respiratory depression in adults is about 0.09 percent, whereas in children it ranges between 0 percent and 1.3 percent.” In most cases, the problem is solved by dose reduction, and opiate overdose can in any case be reversed. They also indicate that fully 39% of physicians worry about creating addicts by the use of opioids, yet this concern is baseless, there being “virtually no risk of addiction associated with the administration of narcotics.” Another argument affirms that masking pain masks symptoms (a very common reason for not using analgesia in veterinary medicine), an absurd claim regarding major postoperative surgical pain. Yet another affirms that “pain builds character,” again an absurd argument in the case of an infant or a suffering child. As Walco et al. declare, “If there is a therapeutic benefit from a child’s pain, one must be exquisitely economical with it.”
The conclusion of the Walco paper is as morally sensitive and powerful as the rest:
There are now published guidelines for the management of pain in children, which are based on recent data. However, guidelines and continuing medical education do not necessarily alter physicians’ behavior. Specific administrative interventions are required. For example, hospitals may include standards for the assessment and management of pain as part of their quality-assurance programs. The Joint Commission on Accreditation of Healthcare Organizations has established standards for pain management. To meet such standards, multidisciplinary teams must develop specific treatment protocols with the goal of reducing children’s pain and distress. In addition, pressure from parents and the legal community is likely to affect clinical practice.
All health professionals should provide care that reflects the technological growth of the field. The assessment and treatment of pain in children are important parts of pediatric practice, and failure to provide adequate control of pain amounts to substandard and unethical medical practice.

Many of the points made in the Walco paper have direct implications for other areas where pain is neglected. For example, large portions of the medical community have steadfastly opposed the use of narcotics in terminally ill patients on the dubious grounds that such patients may become addicted. The first response, of course, is that for people with a short time to live, “so what if they become addicted—these drugs are cheap!” In any case, the medical community’s ignorance in this area is appalling. Again from Walco:
It is essential to distinguish between physical dependence (a physiologically determined state in which symptoms of withdrawal would occur if the medication were not administered) and addiction (a psychological obsession with the drug). Addiction to narcotics is rare among adults treated for disease-related pain and appears to depend more on psychosocial factors than on the disease or medically prescribed administration of narcotics. Studies of children treated for pain associated with sickle cell disease or postoperative recovery have found virtually no risk of addiction associated with the administration of narcotics. There are no known physiologic or psychological characteristics of children that make them more vulnerable to addiction than adults.

We also find large numbers of physicians vehemently opposed to medical marijuana! It appears that such physicians have gullibly bought into simplistic government propaganda about drugs and addiction: “one shot and you’re hooked.” In fact, there were many regular heroin users among soldiers in Vietnam who, upon returning home and no longer finding themselves in stressful situations, gave up the drug and were not addicted! Again we see ideology trump both science and reason—in this case, the ideology underlying U.S. drug policy.
Another glaring example of medicine’s ignoring of subjective states can be found in the history of the drug ketamine. This illustration is particularly valuable in that it demonstrates the cavalier attitude that historically obtained (and indeed still obtains) with regard to negative subjective experiences in humans and animals.
Ketamine is a cousin of phencyclidine. Phencyclidine (also known as PCP) was developed in the 1950s but was found to be very dangerous, producing hallucinations, violent behavior, confusion, and delusions, and carrying a high potential for abuse. Various derivatives of PCP were tried until 1965, when ketamine was found to be the most promising. In 1970, it was released for clinical use in humans in the U.S.
Ketamine was heralded as the “ideal” anesthetic, since overdose was virtually impossible, and it did not cause respiratory depression. Furthermore, it could be administered via intravenous, intramuscular, oral, rectal, or nasal routes. Ketamine is profoundly analgesic (pain relieving) for somatic or body pain, though it is of no use for visceral (gut) pain. It has been particularly useful in human medicine for treating burn patients and changing dressings.
In the 1980s, while researching a paper on pain, I looked at ketamine in some detail. In the first place, I found that it was used very frequently in research as a sole surgical anesthetic in small rodents and other animals, and in veterinary practice as a standard “anesthetic” for spays. Since it is emphatically not viscerally analgesic, this meant that in such procedures it was in effect being used as a restraint drug. Under ketamine, animals are “dissociated”—they experience a strong feeling of dissociation from the environment and are immobilized in terms of voluntary movement. When I watched a visceral surgery on a cat done under ketamine, I could see obvious signs of pain when the viscera were cut or manipulated. In essence, this means that when ketamine alone was used for visceral procedures, the animals felt pain but were immobilized.
In human medicine, ketamine was used for a wide variety of somatic procedures, such as burn-dressing changes and plastic surgery. But by 1973 the medical community had become aware that ketamine was capable of engendering significantly bad “trips” in a certain percentage of patients, though many patients experienced pleasant hallucinations. A watershed contributing to this awareness was a letter published in Anesthesiology in 1973 by Robert Johnstone, an M.D. anesthesiologist, graphically describing his experience under ketamine as a research subject. It is important to stress that Dr. Johnstone had taken several different narcotics and sedatives before the ketamine experience and had had no problems. The ketamine experience, however, was quite different:
I have given ketamine anesthesia and observed untoward psychic reactions, but was not concerned about this possibility when the study began. After my experience, I dropped out of the study, which called for two more exposures of ketamine. In the several weeks since my ketamine trip, I have experienced no flashbacks or bad dreams. Still I am afraid of ketamine. I doubt I will ever take it again because I fear permanent psychologic damage. Nor will I give ketamine to a patient as his sole anesthetic agent.

Here is Johnstone’s description of what occurred:
My first memory is of colors. I saw red everywhere, then a yellow square on the left grew and crowded out the red. My vision faded, to be replaced by a black and white checkerboard which zoomed to and from me. More patterns appeared and faded, always in focus, with distinct edges and bright colors.
Gradually I realized my mind existed and could think. I wondered, “What am I?” and “Where am I?” I had no consciousness of existing in a body; I was a mind suspended in space. At times I was at the center of the earth in Ohio (my former home), on a spaceship or in a small brightly-colored room without doors or window. I had no control over where my mind floated. Periods of thinking alternated with pure color hallucinations.
Then I remembered the drug study and reasoned something had gone wrong. I remembered a story about a man who was awake during a resuscitation and lived to describe his experience. “Am I dying or already dead?” I was not afraid, I was more curious. “This is death. I am a soul, and I am going to wherever souls go.” During this period I was observed to sit up, stare and then lie down.
“Don’t leak around the mouthpiece!” were the first real sounds I heard. I couldn’t respond because I didn’t have a body. Thus began my cycling into and out of awareness—a frightening experience. I perceived the laboratory as the intensive care unit; this meant something had gone wrong. I wanted to know how bad things were. I now realized I wasn’t thinking properly. I recognized voices, then I recognized people. I saw some people who weren’t really there. I heard people talking, but could not understand them. The only sentence I remember is “Are you all right?” Observers reported a panicked look and defensive thrashing of my arms. I screamed “They’re after me!” and “They’re going to get me!” I don’t recall this or remember the reassurances given me.
I then became aware of my body. My right arm seemed withered and my left very long. I could not focus my eyes. Observers reported marked nystagmus. I recognized the ceiling, but thought it was covered with worms (apparently cued by the irregular depressions in the soundproof blocks). I desperately wanted to know what was reality and to be part of it. I seemed to be thinking at a normal rate, but couldn’t determine my circumstances. I couldn’t speak or communicate, but once, recognizing a friend next to me, I hugged him until I faded back to abstractness.
The investigators gave me diazepam, 20 mg, and thiopental, 150 mg, intravenously because I was obviously anxious, and I fell asleep. When I awoke it was five hours since I had received ketamine. I promptly vomited bilious liquid. Although I could focus accurately, I walked unsteadily to the bathroom. I assured everyone “I’m OK now.” Suddenly I cried with tears for no reason. I knew I was crying but could not control myself. I fell asleep again for several hours. When I awoke I talked rationally, was emotionally stable and felt hungry. The next day I had a headache and felt weak, similar to the hangover from alcohol, but functioned normally.

Today, of course, ketamine (known by the street name “special K”) is classified as a Schedule III drug, not only because it is widely abused, but because it has become a rape drug, in virtue of the immobility and “paralysis of will” it produces. The literature since 1973 contains countless vivid depictions of bad ketamine trips. An additional troubling dimension of ketamine use also became known at this time—the tendency of ketamine to produce unpredictable “flashbacks,” much in the manner of LSD.
When researching all this in 1985, my main interest was ketamine’s use in animals. I therefore approached some world-renowned colleagues in veterinary anesthesia, who confirmed, first of all, its misuse for visceral surgery. I then asked about “bad trips.” There was no literature on this, I was told, but anecdotally, such occurrences were obvious. As one colleague put it, “Most animals (cats) see little pink mice; but some see giant, ferocious pink rats.” Despite this observation, I have never seen any discussion in the veterinary literature of “bad trips.” Similarly, there is no literature on deviant behavior indicating possible flashbacks in animals, but I have been told of owners reporting complete personality changes in animals after ketamine dosing, one woman claiming that the hospital had given her back the wrong animal! The failure of veterinary medicine even to discuss such potential problems eloquently attests to the perceived irrelevance of bad subjective animal experiences to scientific veterinary medicine.
Continuing my research on ketamine in 1985-86, I was curious about how ketamine use had changed since the 1973 revelations of bad trips and flashbacks. Much to my amazement, I now found that ketamine was largely being used “on the very young (children) and the very old (the elderly).”
For the next few months I searched the anesthesia literature, journals and textbooks, to find out what unique physiological traits were common to the very young and the very old that made ketamine a viable drug at these extremes, but not for people in the middle. I got nowhere. Finally, by sheer coincidence, I happened to be at a party with a human anesthesiologist and asked him about the differing physiologies. He burst out laughing! “Physiology?” he intoned. “The use has nothing to do with physiology. It’s just that the old and the young can’t sue and have no power!” In other words, their bad subjective experiences don’t matter!
This was confirmed for me by one of my students, who had had a rare disease since birth that was treated at a major research center. He told me that procedures were done under ketamine, which he loathed in virtue of its “bad trips,” until he turned 16, at which time he was told, “Ketamine won’t work anymore.”
If there ever was a beautiful illustration of the ideological, amoral, cynical denial of the medical relevance of subjective experience in human and veterinary medicine, it is the above account of ketamine. Unfortunately, there is more to relate on this issue. We will now discuss the International Association for the Study of Pain (IASP) definition of pain, which was widely disseminated until finally being revised in 2001 to mitigate some of the absurdity we shall discuss.
IASP is the world’s largest and most influential organization devoted to the study of pain. Yet, as we shall shortly detail, its official definition of pain entailed that infants, animals and non-linguistic humans did not feel pain! In 1998, I was asked to criticize the official definition, which I felt was morally outrageous in excluding such beings from feeling pain and in reinforcing the ideological denial of subjective experience to a large number of beings to whom we have moral obligations. Allowing such a definition to stand as the scientific community’s position on felt pain caused both moral mischief and, ultimately, a loss of scientific credibility. The discussion that follows is drawn from my remarks at the IASP convention of 1998, and my subsequent essay version published in Pain Forum.
It is a major irony that although the definition of pain adopted by the IASP was cast into its current form for laudable moral reasons, it has given succor to neo-Cartesian tendencies in science and medicine, and in fact has the potential for supporting morally problematic behavior. Dr. Harold Merskey, a principal architect of the definition, explained at the 1998 American Pain Society meeting in San Diego that the initial definition of pain as “an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage” was later modified in a note to allow for the reality of pain in adult humans where there was no organic cause for the pain and no evident tissue damage. The note affirmed that
Pain is always subjective. Each individual learns the application of the word through experiences related to injury in early life. It continues:
Many people report pain in the absence of tissue damage or any likely pathophysiological cause: Usually this happens for psychological reasons. There is usually no way to distinguish their experience from that due to tissue damage if we take the subjective report. If they regard their experience as pain and if they report it in the same ways as pain caused by tissue damage, it should be accepted as pain.

In other words, linguistic self-reports of pain should be accepted as proof of the existence of genuine pain in linguistically competent beings, a move designed to encourage medical attention to pain even in the absence of a proximate stimulus involving tissue damage. This was clearly a praiseworthy, morally motivated move, which also spurred research into areas such as chronic pain that might have been ignored in the absence of a definition stressing the subjective side of pain and its linguistic articulation.
Unfortunately, however, the definition’s emphasis on the connection between pain and full linguistic competence has led to a neo-Cartesian tendency to make such linguistic competence a necessary and sufficient condition for attributing pain to a being. (Descartes had famously argued that only creatures with language could be said to possess mind.) “Mere” behavior does not license the confident or certain attribution of pain to an organism, because only words describe the subjective. As Merskey says: “The behavior mentioned in the definition is behavior that describes the subjective state and that is how matters should remain.” Merskey also stated:
The very words “pain behavior” are often employed as a means to distinguish between external responses and the subjective condition. I am in sympathy with Anand and Craig in their wish to recognize that such types of behaviour are likely to indicate the presence of a subjective experience, but the behavior cannot be incorporated sensibly in the definition of a subjective event.

Despite Merskey’s own professed belief that “there is an almost overwhelming probability that some speechless organisms suffer pain, including neonates, infants and adults with dementia” (he does not mention animals), he nonetheless classifies such pain as “probable” or “inferred,” in contradistinction to the certainty accompanying claims by linguistic beings. As Anand and Craig have argued, this ultimately draws a major ontological and epistemological gulf between linguistic and nonlinguistic beings in relation to the presence and certainty of experienced pain. This, in turn, helps to justify the well-documented tendency of researchers and clinicians to undertreat, or fail to treat altogether, pain in neonates, infants, young children, and animals, all of whom lack full linguistic ability.
It is thus disturbing to find a neo-Cartesian element infiltrating these recent discussions of pain, suggesting that only linguistic beings are capable of experiencing pain as something of which they are aware, and that only verbal reports allow us to “really” know that a being is in pain. Aside from the ethical damage that such a view can create by implying that animals, neonates, and prelinguistic infants do not “really feel pain”, promulgation of this view is dangerous to the methodological assumptions underlying science, as well as to scientific credibility in society in general.
To illustrate the methodological pitfalls inherent in such a view, consider the thesis once raised by Bertrand Russell: “How do we know that the world was not in fact created 10 seconds ago, complete with fossils, etc. and us with all of our memories?” Or better yet, consider the following critique of the very possibility of science: “Look here. Science claims to give us explanations of phenomena that take place in the physical world we all share. Yet, in point of fact, our only access to the real physical world is through our experiences, our perceptions, which are totally