A BBC journalist writes about the dangers and consequences of scientists who fabricate their results.
I've written about a similar article and topic here, discussing the shortcomings of peer review, which ultimately reflect on scientific research as a self-regulated industry.
Jardine, the BBC writer, suggests the pressure to fabricate could be attributed to the competition for funding, especially when it comes to finding "big" breakthroughs and making spectacular discoveries. As a grad student, I can cite another potential source.
It's not just that funding issues put pressure on researchers to produce results. Professors put pressure on their grad students to produce results, in the form of "If you don't publish, I'm taking away your funding and you'll never get your Ph.D." Now, I imagine that most professors can spot any attempts to "enhance" results, but for all of the thousands of grad students across the sciences performing research, I find it perfectly believable that some students have published erroneous results. Whether that means data is exaggerated or completely fabricated, I'm certain it happens at a rate most would rather not think about.
As far as a solution goes, I have none. Peer review is vulnerable to problems, but the truth is that no other system exists to prevent bad science from being published or funded. As long as there is money in science (whether from funding, fame, or success) there will be people who play fast and loose with their results.
8 comments:
As a scientist, you should know that the peer-review system works just fine because anything that is published contains an experimental procedure that will be reproduced by other research groups. If it is not reproducible, you hear about it, and retractions are made.
Not everything is reproducible. That is to say, some stuff passes under the radar too quietly. In other cases, you might be dealing with techniques or equipment so expensive or complicated that it might be a long time before anyone could even attempt to reproduce your experiment, much less verify your data.
Take the Korean stem-cell researcher, Hwang Woo-suk. If I recall correctly, the inability to replicate his results was not the catalyst that exposed him as a fraud. I can't recall whether the authorship issue was what set things off or just a side effect, but several scientists credited with authorship on many of his papers turned out to have had very little, if anything, to do with the projects.
The point is, the holes in the system are there. To say that it is without problems is to turn a blind eye, a state of denial. How often do the problems occur? I suppose we can only guess. More often than we'd hope is my best guess.
How often do you see this sort of lapse in ethics in your own research group? I have never seen it in any of mine.
I'm not sure, exactly. When presenting results to our advisor as an "I need to know you're being productive" kind of thing, I wouldn't put it past anybody to stretch the truth just a little. I doubt that anybody in my research group has gone so far as to fabricate data (misinterpret, maybe), but then I probably wouldn't know it if I saw it.
Unfortunately, just a glimpse at my research group doesn't tell the whole story. It's too small a sample.
"Not everything is reproducible. That is to say, some stuff passes under the radar too quietly. In other cases, you might be dealing with techniques or equipment so expensive or complicated that it might be a long time before anyone could even attempt to reproduce your experiment, much less verify your data."

While reproducibility by others is ideal (and inevitable), submitting something that one has not been able to reproduce personally is bad science.
"If I recall correctly, inability to replicate his results was not the catalyst that exposed him as a fraud. I can't recall if it was what set things off or a side effect, but several scientists credited with authorship on many of his papers turned out to have had very little, if anything, to do with the projects."
A scientist who had collaborated on the June 2005 report requested in December 2005 that his name be removed, after he found out from one of his students that parts of it may have been faked. The only other paper found to contain the same data-faking was published in 2004, also in Science.
I probably could have looked up some of that info. Like I said, I was going off of memory.
But in any case, the fact that the papers were published at all means that they weren't vetted carefully enough. Could it be because they were on a politically charged topic? Is it significant that Science seems to have chosen sides on that topic?
To me, the questions are rhetorical. But that's just me.
no. those questions are not rhetorical. they're stupid. i'm sorry.
i'd bet that science just thought it was a really interesting discovery, not "oh! let's take sides on this issue!" in fact, they would be taking sides on the issue if they DIDN'T publish the paper. as a scientific journal, they have the responsibility to publish new discoveries that are, to the best of their knowledge, well researched.
it's not a big conspiracy. journals publish terrible papers ALL THE TIME.
that's why people review papers.
Considering the number of papers submitted to journals on a yearly basis, the number I've read that contain bad science (not purposefully fraudulent, in most cases, but usually more along the lines of misinterpreting NMR data or forgetting to consider another mechanism of action), and the fact that Hwang Woo-Suk's results, had they been real, would have been pretty amazing, I'd be hard pressed to say that it's some conspiracy to push an agenda, though it is clear that decisions aren't made in a vacuum.
If Science gets a paper that appears to have all the relevant data and is as revolutionary as this would have been (or as, say, figuring out on-water reactions), they'll probably publish it. When they found out later that the researchers had tampered with the data, they issued a retraction, which would have happened anyway once someone tried to replicate the work and failed. That, too, is the peer-review process working; not all peer review takes place before publication.
The conjecture that this was published not because it was, as presented, interesting science with real significance for his field of research, but because the journal was blindly pursuing an agenda, is kind of paranoid.
Does it follow that Victor Ninov's research was also published only to further an agenda?