Number of records: 1
Estimating the false discovery risk of (randomized) clinical trials in medical journals based on published p-values
SYSNO ASEP: 0576101
Document Type: J - Journal Article
R&D Document Type: Journal Article
Subsidiary J: Article in WOS
Title: Estimating the false discovery risk of (randomized) clinical trials in medical journals based on published p-values
Author(s): Schimmack, U. (CA); Bartoš, František (UIVT-O) SAI, ORCID, RID
Number of authors: 2
Article number: e0290084
Source Title: PLoS ONE. - : Public Library of Science - ISSN 1932-6203
Volume/Issue: Vol. 18, No. 8 (2023)
Number of pages: 12
Publication form: Online - E
Language: eng - English
Country: US - United States
Keywords: false discovery ; clinical trials ; replication studies
OECD category: Statistics and probability
Method of publishing: Open access
Institutional support: UIVT-O - RVO:67985807
UT WOS: 001201691300001
EID SCOPUS: 85169230576
DOI: https://doi.org/10.1371/journal.pone.0290084
Annotation: Many sciences are facing a crisis of confidence in published results [1]. Meta-scientific studies have revealed low replication rates, estimates of low statistical power, and even reports of scientific misconduct [2]. Based on assumptions about the percentage of true hypotheses and the statistical power to test them, Ioannidis [3] concluded that most published results are false. It has proven difficult to test this prediction. First, large-scale replication attempts [4–6] are inherently expensive and focus only on a limited set of pre-selected findings [7]. Second, studies of meta-analyses have revealed that power is low but have rarely led to the conclusion that the null hypothesis is true [8–16] (but see [17, 18]). So far, the most promising attempt to estimate the false discovery rate has been Jager and Leek’s [19] investigation of p-values in medical journals. They extracted 5,322 p-values from abstracts of medical journals and found that only 14% of the statistically significant results may be false positives. This is a sizeable percentage, but it is inconsistent with the claim that most published results are false. Although Jager and Leek’s article was based on actual data, it had a relatively minor impact on discussions about false-positive risks, possibly due to several limitations of their study [20–23]. One problem with their estimation method is the difficulty of distinguishing between true null hypotheses (i.e., the effect size is exactly zero) and studies with very low power, in which the effect size may be very small but not zero. To avoid this problem, we do not estimate the actual percentage of false positives, but rather the maximum percentage that is consistent with the data. We call this estimate the false discovery risk (FDR). To estimate the FDR, we take advantage of Sorić’s [24] insight that the false discovery risk is maximized when the power to detect true effects is 100%. In this scenario, the false discovery rate is a simple function of the discovery rate (i.e., the percentage of significant results). Thus, the main challenge for empirical studies of the FDR is to estimate the discovery rate when selection bias is present and inflates the observed discovery rate. To address the problem of selection bias, we developed a selection model that can provide an estimate of the discovery rate before selection for significance. The method section provides a detailed account of our method and compares it to Jager and Leek’s [19] approach.
Workplace: Institute of Computer Science
Contact: Tereza Šírová, sirova@cs.cas.cz, Tel.: 266 053 800
Year of Publishing: 2024
Electronic address: https://dx.doi.org/10.1371/journal.pone.0290084
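The Sorić bound mentioned in the annotation can be made explicit with a short calculation (a sketch of the standard argument in our own notation, not the record's: α is the significance level, π₀ the fraction of true null hypotheses, DR the discovery rate). If true effects are detected with 100% power, then

    \mathrm{DR} = \pi_0 \alpha + (1 - \pi_0) \quad\Longrightarrow\quad \pi_0 = \frac{1 - \mathrm{DR}}{1 - \alpha},

and the false discovery risk attains its maximum

    \mathrm{FDR}_{\max} = \frac{\pi_0\,\alpha}{\mathrm{DR}} = \Bigl(\frac{1}{\mathrm{DR}} - 1\Bigr)\,\frac{\alpha}{1 - \alpha},

so the FDR is indeed a simple function of the discovery rate, as the annotation states.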
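A minimal simulation sketch of this relationship (illustrative Python; PI0 is a hypothetical fraction of true nulls chosen for the example, and the code assumes 100% power; this is not the authors' selection model, which additionally corrects the observed discovery rate for selection bias):

    import numpy as np

    rng = np.random.default_rng(42)

    ALPHA = 0.05     # significance threshold
    PI0 = 0.40       # hypothetical fraction of true null hypotheses (assumption)
    N = 1_000_000    # number of simulated studies

    # Each simulated study tests either a true null (significant with
    # probability ALPHA) or a true effect; with 100% power, every true
    # effect yields a significant result.
    is_null = rng.random(N) < PI0
    significant = np.where(is_null, rng.random(N) < ALPHA, True)

    discovery_rate = significant.mean()
    empirical_fdr = (is_null & significant).sum() / significant.sum()

    # Soric's bound: FDR_max = (1/DR - 1) * alpha / (1 - alpha).
    # With 100% power the bound is attained, so the two numbers agree.
    soric_bound = (1.0 / discovery_rate - 1.0) * ALPHA / (1.0 - ALPHA)

    print(f"discovery rate: {discovery_rate:.4f}")   # ~0.62
    print(f"empirical FDR:  {empirical_fdr:.4f}")    # ~0.032
    print(f"Soric bound:    {soric_bound:.4f}")      # ~0.032

With lower power, the realized false discovery rate falls below the bound, which is why the bound can serve as a maximum risk estimate.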