
BTD Forums  /  Journal Club and Literature Review  /  On P values
Posted by: Lloyd, Sunday, February 16, 2014, 3:54pm
A reminder that the P value does not tell you how well a study is designed, and that a low P value does not prove truth.

http://www.nature.com/news/scientific-method-statistical-errors-1.14700

A few selected passages:

The irony is that when UK statistician Ronald Fisher introduced the P value in the 1920s, he did not mean it to be a definitive test. He intended it simply as an informal way to judge whether evidence was significant in the old-fashioned sense: worthy of a second look.

These are sticky concepts, but some statisticians have tried to provide general rule-of-thumb conversions (see 'Probable cause'). According to one widely used calculation [5], a P value of 0.01 corresponds to a false-alarm probability of at least 11%, depending on the underlying probability that there is a true effect; a P value of 0.05 raises that chance to at least 29%.
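
The article doesn't spell out which calculation that is, but the well-known lower bound on the Bayes factor, -e*p*ln(p), combined with 50:50 prior odds that a real effect exists, reproduces those figures. A rough Python sketch (mine, not from the article, and not necessarily the exact calculation their reference 5 uses):

import math

def min_false_alarm_probability(p, prior_null=0.5):
    """Lower bound on the false-alarm probability, i.e. the chance the null is
    actually true when a result reaches P value p, using the -e*p*ln(p) bound
    on the Bayes factor (valid for p < 1/e). prior_null is the assumed prior
    probability that there is no real effect."""
    if not 0 < p < 1 / math.e:
        raise ValueError("bound only applies for 0 < p < 1/e")
    bayes_factor = -math.e * p * math.log(p)      # best-case evidence left for the null
    prior_odds = prior_null / (1 - prior_null)    # odds of "no effect" before seeing data
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)  # convert odds back to a probability

for p in (0.05, 0.01):
    print(f"p = {p}: false-alarm probability >= {min_false_alarm_probability(p):.0%}")
# p = 0.05: false-alarm probability >= 29%
# p = 0.01: false-alarm probability >= 11%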

Critics also bemoan the way that P values can encourage muddled thinking. A prime example is their tendency to deflect attention from the actual size of an effect. Last year, for example, a study of more than 19,000 people showed [8] that those who meet their spouses online are less likely to divorce (p < 0.002) and more likely to have high marital satisfaction (p < 0.001) than those who meet offline (see Nature http://doi.org/rcg; 2013). That might have sounded impressive, but the effects were actually tiny: meeting online nudged the divorce rate from 7.67% down to 5.96%, and barely budged happiness from 5.48 to 5.64 on a 7-point scale. To pounce on tiny P values and ignore the larger question is to fall prey to the “seductive certainty of significance”, says Geoff Cumming, an emeritus psychologist at La Trobe University in Melbourne, Australia. But significance is no indicator of practical relevance, he says: “We should be asking, 'How much of an effect is there?', not 'Is there an effect?'”
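
To see how a difference of under two percentage points can still clear p < 0.002, here is a back-of-the-envelope two-proportion z-test. The group sizes below are invented purely for illustration; the article doesn't give the actual online/offline split of the ~19,000 people:

import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail of the standard normal
    return p1 - p2, p_value

# Hypothetical split: 6,500 met online (5.96% divorced), 12,500 met offline (7.67% divorced).
diff, p = two_proportion_z_test(round(0.0596 * 6500), 6500, round(0.0767 * 12500), 12500)
print(f"difference = {diff:+.2%}, p = {p:.1e}")
# roughly -1.7 percentage points, yet p comes out far below 0.002 at this sample size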

Statisticians have pointed to a number of measures that might help. To avoid the trap of thinking about results as significant or not significant, for example, Cumming thinks that researchers should always report effect sizes and confidence intervals. These convey what a P value does not: the magnitude and relative importance of an effect.
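
For what that looks like in practice, a minimal sketch of reporting an effect size with a 95% confidence interval instead of a bare P value. The satisfaction scores below are invented, loosely modelled on the 5.48 vs 5.64 figures above:

import math
import random
from statistics import mean, variance

def mean_diff_with_ci(a, b, z=1.96):
    """Difference in group means with an approximate 95% confidence interval
    (normal approximation, unpooled sample variances)."""
    diff = mean(b) - mean(a)
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return diff, (diff - z * se, diff + z * se)

# Invented scores on a 7-point satisfaction scale, only to show the reporting style.
random.seed(1)
offline = [random.gauss(5.48, 1.2) for _ in range(200)]
online = [random.gauss(5.64, 1.2) for _ in range(200)]

diff, (lo, hi) = mean_diff_with_ci(offline, online)
print(f"effect: {diff:+.2f} points on a 7-point scale, 95% CI [{lo:+.2f}, {hi:+.2f}]")
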
Posted by: Averno, Sunday, February 16, 2014, 5:08pm; Reply: 1
Quoted Text

P values have always had critics. In their almost nine decades of existence, they have been likened to mosquitoes (annoying and impossible to swat away), the emperor's new clothes (fraught with obvious problems that everyone ignores) and the tool of a “sterile intellectual rake” who ravishes science but leaves it with no progeny [3]. One researcher suggested rechristening the methodology “statistical hypothesis inference testing” [3], presumably for the acronym it would yield.


Kinda says it all...

(Oops... I meant the P value itself, not its critics.)

Posted by: Victoria, Sunday, February 16, 2014, 5:25pm; Reply: 2
Thank you, Lloyd!  :)
Posted by: Amazone I., Monday, February 17, 2014, 7:43am; Reply: 3
:o :D(smarty)(clap)(ok)thanx for the reminder Lloyd!!!