One of the interesting issues I think we should come back to next time is risk. Rollin mentions several times that researchers need to communicate the risks associated with a certain line of research and discuss whether those risks are offset, to some extent, by its benefits. This plausible point raises a host of questions, though. There are at least two components to risk: probability and outcome. I was recently a subject in an experiment involving single malt scotch (seriously!). I was asked to report on my subjective taste experiences before and after being instructed on how different aspects of those experiences were conceived. I signed a waiver acknowledging that, since I'd be drinking several measures of 100–110 proof liquor, there was a significant risk that I'd get a headache later (yeah. . . .). For the good of science, I went forward.
Let's say there was a 50% chance of getting a headache. I don't like headaches. . . . I'd rather not get them. But they're not THAT bad. If she had told me there was a 20% chance that my thumbnails might fall off, I'd probably think twice (or ask what KIND of scotch it was; hey, they'd regrow!). Suppose I valued getting a headache at -20 "happiness units" (HUs) and having my thumbnails fall off at -50 HUs. Multiplying chance by outcome gives something like the "expected value" of taking each risk: -10 HUs in both cases. Supposing that I value the contribution to human knowledge (or just like single malt scotch), this negative expectation might be offset by a greater positive one: say, the 80% chance that this researcher has chosen some quite lovely malts whose enjoyment would give me 50 HUs (x 80% = +40 HUs). Overall, then, I expect to be 30 "units" happier. . . . I agree.
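If you want the arithmetic spelled out, here's a minimal sketch in Python. The HU numbers are just the made-up ones above, and the little expected_value helper is purely illustrative:

```python
# Expected value = sum over outcomes of (probability x value).
# The happiness-unit (HU) figures are the made-up ones from the post.

def expected_value(outcomes):
    """outcomes: iterable of (probability, value_in_HUs) pairs."""
    return sum(p * v for p, v in outcomes)

headache   = expected_value([(0.5, -20)])   # -10 HUs
thumbnails = expected_value([(0.2, -50)])   # -10 HUs
malts      = expected_value([(0.8, +50)])   # +40 HUs

print(headache, thumbnails, malts)          # -10.0 -10.0 40.0
print(headache + malts)                     # 30.0 -- expect to be 30 "units" happier
```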
You probably see where this is going. What the hell are Happiness Units? Do my happiness units and yours compare? Do they even compare within a single individual? What happens if an experimenter encounters a volunteer with extremely odd HU assignments (risk tolerances)? For example, suppose that a volunteer agrees to undergo an experiment that carries a 50% chance of total paralysis on the grounds that he's not a very active person. Should we even accept him as a volunteer? Can we rationally evaluate whether different HU assignments are rational?
There's another interesting problem, which students from the Philosophy of Biology may remember, connected to the St. Petersburg Paradox. How much would you be willing to pay to enter the following contest/bet: I flip a coin until it comes up tails. If I get n heads before then, I give you $2^n. So if I flip three heads before finally getting a tails, I give you $8. Since the probability that I'd do that is .5^3 (= .125), that outcome contributes 8 x .125 = $1 to your expected winnings. But in computing the expected value of the whole bet, since it's possible that I keep flipping heads arbitrarily many times, the expected payoff is arbitrarily large. If I were to flip a mere 10 heads in a row, I'd have to pay you over a thousand dollars! In fact, the expected value of the gamble is infinite! Thus on a simpleminded approach to rational decisionmaking, you should be rationally permitted to spend any finite amount of money to buy in. It's worth your entire life savings to take a chance on this bet!
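Here's a quick sketch of why the expectation blows up, assuming (as above) that a run of n heads pays $2^n and occurs with probability .5^n; the helper function is just for illustration:

```python
# Each possible run of n heads contributes (2**n) * (0.5**n) = $1 to the
# expected payoff, so the sum over all n never converges.

def partial_expected_value(max_heads):
    """Expected payoff if the game were capped at max_heads heads."""
    return sum((2 ** n) * (0.5 ** n) for n in range(1, max_heads + 1))

for cap in (10, 100, 1000):
    print(cap, partial_expected_value(cap))   # 10.0, 100.0, 1000.0 -- no upper limit
```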
Of course, this just goes to show that the simpleminded approach is simpleminded. There's no way you should spend very much to buy into this bet. I'd be hard-pressed to spend more than $10, say. Perhaps this shows that we're less interested in expected value than in what probably will happen (however that should be interpreted).
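One way to see the gap between the expected value and "what probably will happen" is just to simulate the bet. This is only an illustrative Monte Carlo sketch; I'm assuming a run of n heads pays $2^n and that a tail on the very first flip pays nothing:

```python
import random

def play_once():
    """Flip until tails; a run of n heads pays $2**n (nothing for zero heads)."""
    heads = 0
    while random.random() < 0.5:   # heads with probability 1/2
        heads += 1
    return 2 ** heads if heads > 0 else 0

payoffs = sorted(play_once() for _ in range(100_000))
small = sum(p <= 10 for p in payoffs) / len(payoffs)
print(f"games paying $10 or less: {small:.1%}")                         # typically ~94%
print(f"99th-percentile payoff: ${payoffs[int(0.99 * len(payoffs))]}")  # usually around $64
```

Despite the infinite expectation, almost every play of the game pays out pocket change, which is presumably why $10 feels like plenty to wager.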
There's another side to this sort of phenomenon. Some physicists worried that there was a small but non-zero chance that when we fired up the Large Hadron Collider (at CERN), we'd produce a black hole that would destroy the earth. That'd be bad. How many HUs should we assign to this outcome? I recall reading in Richard Posner's book that a reasonable monetary assignment would be -$600 trillion. I find that hilarious. Why that number? Does it even make sense to think of a dollar amount? Whatever. But if the outcome is bad enough, then even a very small chance of it yields an expected cost that outweighs the expected benefit, and we should shut down the experiment. (This might remind you of Pascal's Wager.)
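Just to see how the arithmetic pushes toward that conclusion, take Posner's (recalled) -$600 trillion figure and multiply it by a few tiny probabilities. The probabilities below are arbitrary stand-ins, not actual estimates of LHC risk:

```python
# Expected cost of catastrophe = probability x cost.
# The $600 trillion figure is the one recalled from Posner's book; the
# probabilities are arbitrary stand-ins, not real risk estimates.

CATASTROPHE_COST = 600e12   # dollars

for p in (1e-6, 1e-9, 1e-12):
    print(f"p = {p:.0e}: expected cost = ${p * CATASTROPHE_COST:,.0f}")
# p = 1e-06: expected cost = $600,000,000
# p = 1e-09: expected cost = $600,000
# p = 1e-12: expected cost = $600
```

So whether the expected cost swamps the benefit depends entirely on how small you think the chance really is, and on whether a dollar figure for the end of the world means anything at all.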
Anyway, these sorts of questions are clearly relevant to our thinking about using humans (or animals) as experimental subjects.
Going into any situation can involve some risk. In fact, most people could think up risks for almost anything, and then you would have that "chance" of being exposed, etc. For example, when you eat food, you are taking the risk of getting sick from salmonella. But we as humans will still eat food, because not only do we like the taste of a wonderful hamburger, we need the food to survive. When it comes to research, isn't it the same type of situation? We are doing the research in order for us to survive; we are not just doing it for the "heck of it." It is important to think about risks in research, especially if you are the one being researched on, but when it comes to animals, most of the time it is for the human species to survive. Why not take the risk if it will benefit us in the long run?
This is a general reply to Matthew's post:
When assessing risks and benefits of the outcomes of particular situations, we need to be very careful to consider not only risks but also the ethical implications inherent in our actions. The ends do NOT always justify the means, and this is why we need to consider both the utilitarian and the deontological ethical aspects of scientific research. Yes, using animals in research has certainly led to great achievements in medicine and our understanding of human physiology (utilitarian aspect: good consequences outnumber bad--maybe). But isn't there something inherently wrong with paralyzing animals and operating on them without any anesthetic just so we can get a better understanding of how their organs work (deontological: moral duty to not harm)? This is only an example, and perhaps I'm wrongly tugging at your heartstrings here, but I'm merely suggesting that risk-benefit analysis is not enough. Indeed, this is what has gotten science into a lot of trouble in recent decades.
How about looking at risk from another angle--what are the ethical risks associated with a particular action? When assessing these sorts of risks, we might ask ourselves (as scientists), "How might I design this experiment to preserve the maximum health and dignity of my subject(s)?" or "What might be the ethical implications of my research that could influence future experiments?" I think when we ask these sorts of "risk" questions, we really force ourselves to analyze the long-term outcome, not simply the short-term question of whether some action will produce a benefit for human "survival."