Peer Reviewed Journals

Hotwife
09-02-2009, 15:49
Let's talk about The Lancet.

http://www.timesonline.co.uk/tol/life_and_style/health/article5683671.ece

Or the other study, about casualties in Iraq:

Later studies produced numbers of less than a quarter of Lancet’s totals, and the authors of the study were largely discredited. Now a research association has publicly rebuked Gilbert Burnham for not disclosing his methodology, and Burnham may face action from his employer, Johns Hopkins...
Burnham won’t reveal how he arrived at those numbers, which makes his research completely useless. Scientific studies have to reveal their entire methodology in order for others to attempt to duplicate the study and its results. Without duplication, results cannot be confirmed, and most scientists reject them — unless they serve political rather than scientific ends.

Neither is a shining example of how peer-reviewed science should be conducted - in the open, with real data that can be verified by a third party at any time.

It's pretty sad when a journal with the reputation of The Lancet decides that sensational results and politics are more important than science.
Damor
09-02-2009, 17:55
So, you hold that rather than reviewing the article and checking whether it satisfies criteria of falsifiability etc., the peers should try to replicate the experiments and/or check the source of the data themselves before giving the article their OK?
Please tell me that's a gross exaggeration of your actual position. Peer review worked in these cases, precisely because both articles could be discredited later. Peer review is not about getting only the truth published. It's about getting articles published that have all the appearance of being scientific, so that other research groups can do the replication, the critiquing and, if need be, the discrediting. That some people fudge the data and that it slips through is not a failing of peer review.
Eofaerwic
09-02-2009, 18:34
Peer reviewers should not have to double-check the data; that is unrealistic, especially in studies where data protection may be an issue (a lot of psychological studies will have confidentiality issues). Therefore, yes, they will always be at risk of people fudging or fabricating data. It's why universities and research institutions come down so hard on those who do: they undermine the very fabric of the academic community, which (like all professions) is built on a level of trust.

However, certainly in psychology, the rules are:
- Your methodology should be reported in such a way that your study can be fully replicated if need be (so researchers can build on, disprove, reinterpret or critique said study)
- You have to keep the original data, and it should be verifiable by external authorities (within the limits of reasonableness) in case questions do arise about the authenticity of what you have reported. I believe it's 5 or 7 years you have to keep the data.

Failing on number one *should* mean you don't get published; failing on number two without good reason is a serious breach of academic conduct.
Vetalia
09-02-2009, 23:16
Peer review isn't perfect review... scientists are human and vulnerable to human mistakes as a consequence. It's a testament to the effectiveness of this system that the fraud was caught and the perpetrators appropriately punished for their intellectual dishonesty.
Dimesa
10-02-2009, 07:01
http://i41.tinypic.com/34nfbz5.gif
greed and death
10-02-2009, 09:55
So, you hold that rather than reviewing the article and checking whether it satisfies criteria of falsifiability etc., the peers should try to replicate the experiments and/or check the source of the data themselves before giving the article their OK?
Please tell me that's a gross exaggeration of your actual position. Peer review worked in these cases, precisely because both articles could be discredited later. Peer review is not about getting only the truth published. It's about getting articles published that have all the appearance of being scientific, so that other research groups can do the replication, the critiquing and, if need be, the discrediting. That some people fudge the data and that it slips through is not a failing of peer review.

To get published in a peer-reviewed journal you have to reveal your methodology and sources (anonymous sources in the social sciences are fine, but need to be listed as such). The author of the study in question has done neither. In the academic world this means either plagiarism or pulling numbers out of the air.
Damor
10-02-2009, 10:53
To get published in a peer-reviewed journal you have to reveal your methodology and sources (anonymous sources in the social sciences are fine, but need to be listed as such). The author of the study in question has done neither.
Have you read it?
If that were true, it certainly would have come up in the last ten years of discussion. Unlike the data, it isn't something you need to look for, since it's right there in the article. The thing I've mostly heard about the study is that it's OK in itself, but further research has shown its conclusions don't hold true. Which is hardly a rare thing in science.
There's no reason to vilify the author, other than the fact that he still persists in claiming there's a link between autism and MMR despite all the evidence to the contrary.

In the academic world this means either plagiarism or pulling numbers out of the air.
You can fabricate methodology and sources almost as easily as data.


http://briandeer.com/mmr/lancet-paper.htm
Methodology and sources (as far as privacy laws allow) seem to be there, imo.
Rotovia-
10-02-2009, 11:05
Peer review unfortunately begins from the assumption that people aren't lazy, self-obsessed and greedy. It also assumes that serious academic research isn't attached to a lolcats forward, to be assessed by post-grads living on the brink of poverty who are too busy slapping together anything that can guarantee their supervisor bigger grants to notice that it is nothing more than the first five pages of an unfinished thesis they found, plus vague promises of future applications.
The Archregimancy
10-02-2009, 11:19
I've been a peer reviewer, a contributor to, and editor of academic journals, and currently edit a professional society newsletter.

There are at least three separate arguments in this specific discussion.

1) Was the original paper in The Lancet on Iraq casualties adequately reviewed, and did it meet publication standard?

2) If it didn't meet publication standard, does that automatically invalidate the findings of the paper regarding deaths in Iraq?

3) Is peer-reviewing a good system, or does reviewer bias undermine it?

To which the quick answers are no, no, and both - it's a good system but not immune to bias.

It was sloppy of the original authors not to disclose their methodology (reviewers aren't expected to follow that methodology themselves, but the methodology has to be clear so that readers can replicate it), and it was sloppy of The Lancet to allow the publication of a paper on a politically controversial topic (casualties in Iraq) where no methodology was included, but this doesn't by itself invalidate the conclusions of that paper. Only the publication of a second paper proving the conclusions were false, or of a second paper by the original authors reaching the same conclusion with firmer methodology, can settle the matter.

The final verdict here should be that the lack of methodology means the original argument is not proven, not that the original argument is demonstrably false.


And peer reviewers aren't perfect. I still cringe about how unfair one of my early reviews was a decade ago, and I've been in receipt of peer reviews that reveal that the reviewer in question simply didn't understand the paper [while the other two reviewers did]. We're human - which is why you should always ask for at least three reviews for a journal paper, just in case.
Rambhutan
10-02-2009, 11:21
I've been a peer reviewer, a contributor to, and editor of academic journals, and currently edit a professional society newsletter.

There are at least three separate arguments in this specific discussion.

1) Was the original paper in The Lancet adequately reviewed, and did it meet publication standard?

2) If it didn't meet publication standard, does that automatically invalidate the findings of the paper regarding deaths in Iraq?

3) Is peer-reviewing a good system, or does reviewer bias undermine it?

To which the quick answers are no, no, and both - it's a good system but not immune to bias.

It was sloppy of the original authors not to disclose their methodology (reviewers aren't expected to follow that methodology themselves, but the methodology has to be clear so that readers can replicate it), and it was sloppy of The Lancet to allow the publication of a paper on a politically controversial topic where no methodology was included, but this doesn't by itself invalidate the conclusions of that paper. Only the publication of a second paper proving the conclusions were false, or of a second paper by the original authors reaching the same conclusion with firmer methodology, can settle the matter.

The final verdict here should be that the lack of methodology means the original argument is not proven, not that the original argument is demonstrably false.


And peer reviewers aren't perfect. I still cringe about how unfair one of my early reviews was a decade ago, and I've been in receipt of peer reviews that reveal that the reviewer in question simply didn't understand the paper [while the other two reviewers did]. We're human - which is why you should always ask for at least three reviews for a journal paper, just in case.


This is the definitive answer as far as I am concerned.
Damor
10-02-2009, 11:36
and it was sloppy of The Lancet to allow the publication of a paper on a politically controversial topic
It wasn't politically controversial at the time, nor should the paper have contributed to that. It even literally says that it found no link between MMR and the syndrome it describes, only that the parents of the children had linked the two together.
Yet the media frenzy that started after its publication cherry-picked from the article to give an entirely different picture.

where no methodology was included
What do you mean, it isn't included? It seems to be there when I look at it. Is it too short, incomplete, or is something else wrong with it?
The Archregimancy
10-02-2009, 11:40
What do you mean, it isn't included? It seems to be there when I look at it. Is it too short, incomplete, or is something else wrong with it?

Sorry, there seems to be some confusion; re-reading my post, I wasn't entirely clear.

I was specifically addressing the second point - the politically controversial study of casualties in Iraq - rather than the MMR controversy, which was, as you imply, the result of Daily Mail sensationalism.

I've edited my previous post to make this clearer.
Damor
10-02-2009, 12:03
Ah, thank you.
Now, which of the Iraq studies in The Lancet are we talking about? The 2004 (http://web.archive.org/web/20061102194212/http://www.jhsph.edu/refugee/research/iraq/sdarticle.pdf) one, or the 2006 (http://brusselstribunal.org/pdf/lancet111006.pdf) one?
Nodinia
10-02-2009, 12:21
Neither is a shining example of how peer-reviewed science should be conducted - in the open, with real data that can be verified by a third party at any time.

It's pretty sad when a journal with the reputation of The Lancet decides that sensational results and politics are more important than science.

Here's a summary of the original criticism of the Iraq paper:

Cluster sampling has recently been used to estimate the mortality in various conflicts around the world. The Burnham et al. study on Iraq employs a new variant of this cluster sampling methodology. The stated methodology of Burnham et al. is to (1) select a random main street, (2) choose a random cross street to this main street, and (3) select a random household on the cross street to start the process. The authors show that this new variant of the cluster sampling methodology can introduce an unexpected, yet substantial, bias into the resulting estimates, as such streets are a natural habitat for patrols, convoys, police stations, road-blocks, cafes, and street-markets. This bias comes about because the residents of households on cross-streets to the main streets are more likely to be exposed to violence than those living further away. Here, the authors develop a mathematical model to gauge the size of the bias and use the existing evidence to propose values for the parameters that underlie the model. The research suggests that the Burnham et al. study of conflict mortality in Iraq may represent a substantial overestimate of mortality. Indeed, the recently published Iraq Family Health Survey covered virtually the same time period as the Burnham et al. study, used census-based sampling techniques, and produced a central estimate for violent deaths that was one fourth of the Burnham et al. estimate. The authors provide a sensitivity analysis to help readers to tune their own judgements on the extent of this bias by varying the parameter values. Future progress on this subject would benefit from the release of high-resolution data by the authors of the Burnham et al. study.
http://jpr.sagepub.com/cgi/content/abstract/45/5/653

At no stage do they suggest that the bias referred to above is on the part of the researchers; rather, it is a result of the methodology. Nowhere does there seem to be any claim that The Lancet or the authors were looking for "sensational results". In fact, the only fuck-up that's happened so far is the refusal to share the original data.
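
To make the claimed "main street bias" concrete, here's a minimal Python sketch of the mechanism (every number is invented for illustration; this is not the model from the paper): households nearer a main street are assumed to face more violence, and a survey that starts from cross-streets can only reach households within a few blocks of one, so it oversamples exactly those riskier households.

```python
import random

# Toy model of the "main street bias" described in the abstract above.
# Assumption (invented for illustration): the per-household death rate
# decays with distance from the nearest main street, and the survey can
# only reach households within `reach` blocks of a main street.

random.seed(0)

N_HOUSEHOLDS = 100_000
# Distance (in blocks) from the nearest main street, uniform over 0..9.
distances = [random.randint(0, 9) for _ in range(N_HOUSEHOLDS)]

def death_rate(distance):
    """Assumed per-household death rate, decaying with distance."""
    return 0.02 * (1.0 - 0.08 * distance)

# True population-wide rate: average over every household.
true_rate = sum(death_rate(d) for d in distances) / N_HOUSEHOLDS
print(f"true rate: {true_rate:.4f}")

# Sensitivity sweep, in the spirit of the abstract: vary how far from a
# main street the survey can reach and see how the overestimate changes.
for reach in (1, 2, 3, 5):
    surveyed = [d for d in distances if d <= reach]
    survey_rate = sum(death_rate(d) for d in surveyed) / len(surveyed)
    print(f"reach <= {reach} blocks: estimated rate {survey_rate:.4f} "
          f"(x{survey_rate / true_rate:.2f} overestimate)")
```

The wider the survey's reach, the closer the estimate gets to the true rate, which is essentially the sensitivity analysis the abstract offers: the size of the bias depends on how strongly violence concentrates near main streets and on how far the sampling can stray from them.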
greed and death
11-02-2009, 08:29
Have you read it?
If that were true, it certainly would have come up in the last ten years of discussion. Unlike the data, it isn't something you need to look for, since it's right there in the article. The thing I've mostly heard about the study is that it's OK in itself, but further research has shown its conclusions don't hold true. Which is hardly a rare thing in science.
There's no reason to vilify the author, other than the fact that he still persists in claiming there's a link between autism and MMR despite all the evidence to the contrary.

You can fabricate methodology and sources almost as easily as data.


http://briandeer.com/mmr/lancet-paper.htm
Methodology and sources (as far as privacy laws allow) seem to be there, imo.


I'm talking about the Iraq study, not the MMR study.
Autism is a slippery thing whose causes seem hard to pin down, so I am not surprised it came out one way in one study and a different way in another.