“Is Peer Review Biased Toward Statistical Significance?”
Coauthored with Scott Carrell, David Figlio and Lester Lusher
Abstract: Studies have illustrated how the distribution of test statistics from published manuscripts bunches at certain significance thresholds. Little is known about the underlying mechanisms driving these findings: authors may engage in certain behaviors to attain “desirable” p-values (p-hacking), and/or the peer review process may favor statistical significance (publication bias). This study is the first to use data from journal submissions to disentangle these mechanisms. We first find that initial submissions already display significant bunching, suggesting that prior findings cannot be attributed strictly to bias in the peer review process. Desk-rejected manuscripts display greater heaping than those sent for review, suggesting editors on average “sniff out” marginally significant results. Reviewer recommendations, on the other hand, are swayed significantly by statistical significance thresholds. Overall, the distribution of test statistics from published manuscripts displays slightly less heaping at the 10% significance threshold than that from rejected manuscripts.