One of the interesting issues that I think we should come back to next time is the issue of risk. Rollin mentions a number of times the need for researchers to communicate the risks associated with a certain line of research and discuss whether these risks are balanced off, to a certain extent, by its benefits. This plausible point raises a host of questions, though. There are at least two components to risk: probability and outcome. I was recently a subject in an experiment involving single malt scotch (seriously!). I was asked to report on my subjective taste experiences before and after being instructed on how different aspects of these experiences were conceived. I signed a waiver acknowledging that, as I was to be drinking several measures of 100–110 proof liquor, there was a significant risk that I'd get a headache later (yeah. . . .). For the good of science I went forward.
Let's say that there was a 50% chance of getting a headache. I don't like headaches. . . . I'd rather not get them. But they're not THAT bad. If she had told me that there's a 20% chance that my thumbnails might fall off, I'd probably think twice (or ask what KIND of scotch it was — hey, they'd regrow!). Suppose I valued getting a headache at -20 "happiness units" (HUs) and having my thumbnails fall off at -50 HUs. By multiplying chance and outcome, we get something like the "expected value" of taking that risk: -10 HUs in each case. Supposing that I value the contribution to human knowledge (or just like single malt scotch), this expectation might be balanced off by a greater positive expectation — e.g., the 80% chance that this researcher has chosen some quite lovely malts whose enjoyment would give me 50 HUs (× 80% = +40 HUs). Overall, then, I expect to be 30 "units" happier. . . . I agree.
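(In case the arithmetic is easier to read as code: here's a minimal sketch in Python of the expected-value calculation, using the made-up probabilities and HU numbers from the paragraph above and nothing more principled than that.)

```python
# A minimal sketch of the expected-value arithmetic above. The probabilities
# and "happiness unit" (HU) values are the made-up numbers from the post.

def expected_value(outcomes):
    """Sum of probability * value over (probability, value_in_HUs) pairs."""
    return sum(p * v for p, v in outcomes)

headache_risk  = [(0.5, -20)]   # 50% chance of a -20 HU headache
thumbnail_risk = [(0.2, -50)]   # 20% chance of losing my thumbnails (-50 HU)
lovely_malts   = [(0.8, +50)]   # 80% chance the malts are lovely (+50 HU)

print(expected_value(headache_risk))                  # -10.0
print(expected_value(thumbnail_risk))                 # -10.0
print(expected_value(lovely_malts))                   # 40.0
print(expected_value(headache_risk + lovely_malts))   # 30.0 -- why I agreed
```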
You probably see where this is going. What the hell are Happiness Units? Do my happiness units and yours compare? Do they even compare within an individual? What happens if an experimenter encounters a volunteer with extremely odd HU assignments (risk tolerances)? For example, suppose that a volunteer agrees to undergo an experiment that carries a 50% chance of total paralysis on the grounds that he's not a very active person. Should we even accept him as a volunteer? Can we rationally evaluate whether different HU assignments are rational?
There's another interesting problem, which students from the Philosophy of Biology course may remember, connected to the St. Petersburg Paradox. How much would you be willing to pay to enter the following contest/bet: I flip a coin until it comes up tails, and if I get n heads before then, I give you $2^n. So if I flip three heads before finally getting a tails, I give you $8. The probability of that particular outcome (three heads, then the tail) is (.5)^4 = .0625, so it contributes 50 cents to your expected payoff — and every other possible run of heads contributes another 50 cents. Since it's possible that I keep flipping heads arbitrarily many times, there are infinitely many such contributions. If I were to flip a mere 10 heads in a row, I'd have to pay you over a thousand dollars! In fact, the expected value of the gamble is infinite! Thus, on a simpleminded approach to rational decision-making, you should be rationally permitted to spend any finite amount of money to buy in. It's worth your entire life savings to take a chance on this bet!
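(If you'd like to see how this plays out in practice, here's a minimal sketch in Python that simulates the bet under the payoff rule described above. The theoretical expectation is infinite, but the average over even a large number of simulated plays stays modest — which is part of what the next paragraph is getting at.)

```python
# A rough sketch of the St. Petersburg bet as described above: flip a fair
# coin until tails, and pay $2^n where n is the number of heads.

import random

def play_once():
    """Flip until tails; return the payoff in dollars."""
    heads = 0
    while random.random() < 0.5:   # each flip comes up heads with prob. 1/2
        heads += 1
    return 2 ** heads              # $2^n for n heads ($1 if the first flip is tails)

trials = 100_000
payoffs = [play_once() for _ in range(trials)]
print("average payoff:", sum(payoffs) / trials)
print("largest single payoff:", max(payoffs))

# A typical run averages somewhere on the order of $10, even though the
# expected value of the bet is unbounded; a single lucky streak can drag
# the average up dramatically.
```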
Of course, this just goes to show that the simpleminded approach is simpleminded. There's no way you should spend very much to buy into this bet. I'd be hard-pressed to spend more than $10, say. Perhaps this shows that we're less interested in expected value than in what probably will happen (however that should be interpreted).
There's another side of this sort of phenomenon. Some physicists worried that there was a small but non-zero chance that when we fired up the Large Hadron Collider (at CERN), we'd produce a black hole that would destroy the earth. That'd be bad. How many HUs should we assign this outcome? I recall reading in Richard Posner's book that a reasonable monetary assignment would be -$600 trillion. I find that hilarious. Why that number? Does it even make sense to think of a dollar amount? Whatever. But if it's bad enough, then no matter how small the chance is, the expected cost will outweigh the benefit and we should shut down the experiment. (This might remind you of Pascal's Wager.)
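(For what it's worth, the "no matter how small the chance" move only goes through literally if the disvalue is unbounded — that's the Pascal's Wager structure. With a finite figure like Posner's, there's a break-even probability below which the expected cost stops outweighing the benefit. A rough sketch of that arithmetic, with a purely hypothetical benefit number I've made up for illustration:)

```python
# Break-even arithmetic: with a finite catastrophe cost, the expected cost
# p * |cost| outweighs the benefit only when p exceeds benefit / |cost|.
# The benefit figure below is purely hypothetical.

catastrophe_cost = 600e12      # Posner's $600 trillion, taken as a magnitude
assumed_benefit  = 10e9        # hypothetical: $10 billion of scientific value

break_even_p = assumed_benefit / catastrophe_cost
print(f"break-even probability: {break_even_p:.2e}")   # ~1.67e-05

# Above this probability, the simpleminded calculus says shut the experiment
# down; below it, it says go ahead. Only an unbounded disvalue makes
# "no matter how small the chance" literally true.
```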
Anyway, these sorts of questions are clearly relevant to our thinking about using humans (or animals) as experimental subjects.
Saturday, January 31
Wednesday, January 28
NYT: "Elevating Science, Elevating Democracy"
Posted by
Roxy Hickey
at
10:56 AM
Intriguing article from the New York Times yesterday: "Elevating Science, Elevating Democracy" by Dennis Overbye. More evidence that the social psyche is creeping into the practice of science...
Seed magazine...heard of it?
Posted by
Roxy Hickey
at
12:01 AM
I was wondering if anyone is familiar with Seed magazine (subheading: "Science is Culture"). It features articles and reviews on current topics in science from a social (and sometimes ethical) perspective. I guess it's kind of a layperson's version of Nature or Science, although I've only read a few issues at this point so I may be wrong in drawing this comparison. On the whole, the topics and viewpoints featured seem to be pretty substantive, and there's certainly a lot of material relevant to our class.
An interesting theme I've noticed in many of the articles I've read so far is this notion that science "must progress" or "must overcome some barriers"... obviously different scientists have different ideas of what these barriers are and whether we should overcome them at all. This does, however, seem to be a pervasive "ideology" (what a loaded term!) in the scientific realm and thus deserves some comment. It occurred to me that perhaps this barrier is what Rollin claims is the failure of science to consider ethics ("Ethics2", that is). Following from that, perhaps many scientists view the non-scientific community as a "barrier" because they feel like they're just not getting their message across. Evidently scientific progress is a two-way street. What are your thoughts on this?
On another note, and what triggered this late-night post in the first place: for those of you who took Matthew's Philosophy of Science class last semester, you'll remember Steven Shapin, author of The Scientific Revolution. He wrote a short article for Seed's December 2008 issue entitled "The Scientist in 2008." He discusses historical and modern perspectives of how society views science/scientists and what the role of science/scientists is or ought to be, and (most importantly, I think) he also suggests that an "adjustment of the boundaries between the natural and social sciences" is imminent as we enter the 21st century with "new scientific agendas and new conceptions of what it is to be a scientist." I guess I'm bringing this up mainly because I'm happy to see that a consideration of ethics and social values (whatever the hell that means) is starting to make its way into the mainstream dialogue of science.
I guess one thing I find a little disturbing about the magazine is that Craig Venter (founder of Celera Genomics, credited with sequencing the first entire human genome) has appeared on the covers of all three issues I've read so far (not as an author, but as a featured scientist). For those of you not familiar with Venter, his team used a "shotgun sequencing" method to determine the sequence of the human genome years ahead of the set goal, and he stepped on a lot of other people working on the Human Genome Project in doing so (sorry--that's a personal value judgment, but in my opinion he played a little dirty).
Anyway, that's my spiel for now. I hope it was coherent enough for you to garner something from it. I welcome your responses and comments.
Tuesday, January 27
Topics/Reading for 2/4
Posted by
Matthew
at
9:23 PM
As I mentioned in class, I want to continue talking about Chapter 4 of Rollin next week alongside our discussion of the use of animals in scientific research: Chapter 5 of Rollin and this PDF from Webster's book (NB: this link will only work from within blackboard — don't email me to tell me that the link is broken; it's also in the Supplemental Readings folder in blackboard).
One other note about this blog. It was pretty quiet last week. That's understandable, of course, since our first full meeting was just today. I encourage you to start posting your reactions, questions, other material you think the rest of us should read/see, whatever. Don't worry about being off base in these first few posts: we'll find a groove sooner or later.
Friday, January 23
The Tuskegee Study
Posted by
Matthew
at
10:32 PM
There are a number of resources on the infamous Tuskegee study on the web — Rollin mentions some on p. 81 of his book (which I hope you've all managed to buy by now). An excellent starting point is the CDC Page on Tuskegee. I've also posted an article on Tuskegee by Gregory Pence from his Classic Cases in Medical Ethics on our blackboard site which you should have access to shortly (if you're registered for the course). If you're reading this from within blackboard, this link should take you directly to it.
Wednesday, January 21
Consequentialist Ethics?
Posted by
B.Lungsi Sharma
at
2:11 PM
Party alliances aside: I came across this video a few days back and found the scenario the senator put to the attorney general nominee interesting. After yesterday's class, I'm guessing this is an example of consequentialism.
Tuesday, January 20
topical news ticker
Posted by
Matthew
at
10:48 PM
You may have noticed the news stories running on the right side of the page. This is a Blogger feature that I'm trying out: I can enter a series of search terms for which Google will offer news stories. I just dashed in a few. I'll take requests in the comments.
Arianna Huffington on blogging
Posted by
Matthew
at
10:22 PM
This frightens me a little. . . . No, make that a lot.
As you begin to blog, people, don't listen to her. There's no reason to share your unvarnished dreck. Note, though, that uncertain ≠ unvarnished. . . .
(Embedded video: The Daily Show with Jon Stewart.)