Are most published research findings false?
John P. A. Ioannidis argues here (summary here) that, from knowledge of statistics alone, most published research findings are false. Basically the idea is that, of the hypotheses tested, most will on average be false. Since we generally use p=.05 as a cutoff, if even 95% of tested hypotheses are in fact false, then false positives should roughly equal true positives. If more than 95% are in fact false, then false positives would outnumber true positives. Published negative findings would have an even worse error rate, because journals normally only publish negative results if they are very unexpected (and therefore more probably wrong).
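To make that arithmetic concrete, here is a quick back-of-the-envelope sketch. The 80% power figure is my own assumption for illustration, not a number from the paper; with power closer to 100%, true and false positives come out roughly equal, as described above.

    # Back-of-the-envelope tally of true vs. false positives.
    # Assumptions: alpha = 0.05 cutoff, 80% power, and only 5% of
    # tested hypotheses are actually true.
    alpha = 0.05          # significance cutoff
    power = 0.80          # assumed chance a study detects a true effect
    n_hypotheses = 1000
    frac_true = 0.05      # fraction of tested hypotheses that are actually true

    true_hyps = n_hypotheses * frac_true          # 50 true hypotheses
    false_hyps = n_hypotheses * (1 - frac_true)   # 950 false hypotheses

    true_positives = true_hyps * power            # 40 real findings
    false_positives = false_hyps * alpha          # 47.5 spurious findings

    print(true_positives, false_positives)        # 40.0 47.5 -> false positives win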
Another way to look at it is that most hypotheses have a low prior probability of being true, and a positive result only raises that probability a little. For example, if a hypothesis has only a 5% chance of being true at first, a study with p=.05 will only increase its chance of being true to about 10%. That's almost worthless.
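One rough way to reproduce that figure is to treat a just-significant p≈.05 result as a likelihood ratio of about 2.5 in favor of the hypothesis (that calibration is my assumption; the exact evidential weight depends on study power) and update the prior odds:

    # Rough Bayes update for a single p ~= .05 result.
    # Assumption: the result carries a likelihood ratio of about 2.5
    # in favor of the hypothesis.
    prior = 0.05                      # 5% chance the hypothesis is true up front
    likelihood_ratio = 2.5            # assumed evidential weight of a p ~= .05 result

    prior_odds = prior / (1 - prior)              # ~0.053
    posterior_odds = prior_odds * likelihood_ratio
    posterior = posterior_odds / (1 + posterior_odds)

    print(round(posterior, 2))        # ~0.12 -- in the same ballpark as the 10% above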
A month ago I read Jerry Avorn's Powerful Medicines, which gives all the other reasons why we can't trust drug data (bias, fraud, poor study design, selective release of information by pharmaceutical companies, etc.) and basically says that we will never satisfactorily answer most of the questions we have about drugs. Somewhat depressing. The lesson seems to be that we should raise our standards for studies a lot, since individual studies are not very reliable, and we should mainly look at studies that are very large, are meta-analyses, have dramatic results, or test hypotheses with high prior probabilities of being true. And we should get used to the fact that trials can only answer a small fraction of the questions we want them to answer. Anyone have comments?