This is my view on the link in More Peer Review Sketchiness. I was about to put this in the comments, but it was running a little too long ;-). Here goes.....
Scott Jaschik's article on Lamont's research in "How Professors Think" is interesting and very informative for a student like me who intends to pursue research. I have always been skeptical of the big wigs and the whole process of peer review, though I have never successfully thought of an alternative approach. This article made me even more skeptical, and hence the urgent need to either reform or debunk our current approach of rewarding excellence by peer review.
I agree with Lamont when she states that people (reviewing professors, students/applicants, laypeople) should not pretend that the current criteria in peer review (originality, feasibility, social & intellectual significance) equate to a scientific measure of excellence, and hence that other criteria should also be used. Though understandable, Lamont's finding that different disciplines have extremely different approaches to decision-making challenges the current foundation of peer review, because it tells us there are no common fundamental criteria (fundamental because other criteria may be added or be insignificant depending on the discipline).
The findings and quotes from professors as peer reviewers, particularly those related to "luck of timing" and the "power of personal/professional interests," clearly show a lack of professionalism. It is also interesting that some peer reviewers judge the morality and character of the applicant just by reading the paper under review and rank them as either courageous risk-takers or lazy conformists. I wonder if these reviewers consider themselves any different from the draconian institutions (the church) that banned Galileo's research. I guess Galileo was too courageous, and since history tells us he was correct, we should therefore support only such risk-takers. But on what grounds are they defining risk-takers and lazy conformists? The extremes are easy to pick out, but most proposals are not extremes.
It is also interesting that most review panels go for middle-of-the-pack proposals, which have flaws and hence, I guess, are easier to review, since reviewers can give a good (lengthy) account of the pros & cons, whereas for proposals outside the pack these reviewers are too lazy, busy, or incompetent to review/test theories that are not in the common textbooks. This criterion seems to contradict the one where reviewers supposedly support risk-takers. I guess this is where luck comes in (for the applicant), which is different from the luck of timing.
These findings therefore give me the impression that our current peer review process is far from perfect and far too subjective, depending on the reviewer, his/her character, preferences, and mood on that particular day/time of review. Though not analogous in the strict sense, I think peer review should be like taking a test where you get graded objectively (though there are accounts of subjectivity there too). The grader/teacher has a sense of responsibility because he/she knows the grade will affect the student's career, unlike our current peer reviewers, who seem (from Lamont's findings) not to take such considerations seriously. Beyond whatever rules, changes, and codes of conduct for peer reviewing might be brought in, if this sense of responsibility (to their particular discipline as well as to the applicant) is taken seriously by reviewers, I think it will also motivate (or force) researchers/applicants to do better, more sincere research, and hence less crappy research.
Tuesday, March 10
In psychology it is not uncommon to create objective measures based on the subjective impressions of subject-matter experts. A good example is music criticism. Playing music well isn't merely a matter of hitting all the right notes with the right timing; to some degree we find that kind of playing rigid and dissatisfying. Musicians, however, can pick up on subtleties that average listeners cannot. Of course, the problem with a system like this is wide scoring variability. Maybe part of the problem of unwarranted subjectivity in peer review could be resolved by having more reviewers to wash out the "noise".
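That "wash out the noise" intuition can be made concrete with a quick simulation. Here is a minimal sketch, assuming each reviewer's score is just the proposal's true quality plus independent random noise; the 10-point scale, noise level, and panel sizes are made-up numbers for illustration, not anything from Lamont's study:

```python
# Minimal sketch: does a bigger panel really wash out reviewer noise?
# Assumption: score = true quality + independent Gaussian noise per reviewer.
import random
import statistics

def panel_score(true_quality, n_reviewers, noise_sd=1.5):
    """Average the scores of n_reviewers, each of whom sees the true
    quality plus their own idiosyncratic (random) bias."""
    scores = [true_quality + random.gauss(0, noise_sd) for _ in range(n_reviewers)]
    return statistics.mean(scores)

random.seed(42)
TRUE_QUALITY = 7.0  # hypothetical "objective" merit on a 10-point scale

for n in (1, 3, 9, 27):
    # Simulate many panels of size n and measure how much the verdict wanders.
    verdicts = [panel_score(TRUE_QUALITY, n) for _ in range(10_000)]
    print(f"{n:2d} reviewers: verdict spread (sd) = {statistics.stdev(verdicts):.2f}")
```

Under these assumptions the spread of the panel's verdict shrinks roughly as 1/√n, so adding reviewers does damp each individual's idiosyncratic noise. What it would not fix is any bias the reviewers share, like the discipline-wide habits Lamont describes, since a shared bias doesn't average out.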