Reporting divergence between frequentist and Bayesian result

pdeli
Posts: 22
Joined: Thu Mar 19, 2020 3:05 pm

Reporting divergence between frequentist and Bayesian result

Post by pdeli »


Hello,

In a study I am conducting, I wanted to use both frequentist and Bayesian statistics to confirm/reject hypotheses. Most of the time the results support each other. However, in some cases they diverge (e.g., p < .05 but BF01 < 3, or p not significant but BF01 > 3).
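
To illustrate the kind of divergence I mean, here is a minimal Python sketch (a toy coin-flip example of my own, not my actual data) of the classic Jeffreys-Lindley pattern: the same data yield p < .05 against H0: theta = 0.5, while the Bayes factor BF01 (point null versus a uniform prior on theta) favours the null.

[code]
from scipy.stats import binom, binomtest

n, k = 10000, 5100  # toy data: 5100 heads in 10000 flips

# Frequentist: exact two-sided binomial test of H0: theta = 0.5
p_value = binomtest(k, n, 0.5).pvalue

# Bayesian: BF01 = P(data | H0) / P(data | H1), where H1 puts a
# uniform Beta(1, 1) prior on theta; under that prior the marginal
# likelihood of k heads out of n flips is exactly 1 / (n + 1).
like_h0 = binom.pmf(k, n, 0.5)
like_h1 = 1.0 / (n + 1)
bf01 = like_h0 / like_h1

print(f"p = {p_value:.4f}")   # ~0.047: "significant" at the .05 level
print(f"BF01 = {bf01:.1f}")   # ~10.8: substantial evidence FOR the null
[/code]

With a large n, a just-significant p-value can coexist with a Bayes factor that actually supports the null, so neither summary makes the other redundant.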

My questions:
  • Apart from the obvious “more research is needed”, what can one do when the frequentist and Bayesian analyses do not both confirm a particular hypothesis?
  • I usually try to find explanations for why I did not find an expected result, but when the outcomes for the same hypothesis diverge between the two types of statistics, does the divergence itself have to be justified/explained?
References welcome.

Thanks in advance,
pdeli

MAgojam
Posts: 421
Joined: Thu Jun 08, 2017 2:33 pm
Location: Parma (Italy)

Re: Reporting divergence between frequentist and Bayesian results

Post by MAgojam »

Hi, pdeli.
I don't know if these references will help you find an answer, but I think they are two interesting reads.

Held, Leonhard (2018). On p-Values and Bayes Factors.
https://www.zora.uzh.ch/id/eprint/14860 ... vision.pdf

Pastore, Massimiliano (2013). Bayes Factor e p-value: così vicini, così lontani [Bayes factor and p-value: so close, so far apart]. (In Italian.)
https://www.researchgate.net/publicatio ... si_lontani

If you have never done so, it is also worth having a look here:
https://www.bayesianspectacles.org
See, for instance, the series of posts on "Redefine Statistical Significance" (you can search for it on the site).

Cheers,
Maurizio
sriparnabiswas
Posts: 1
Joined: Wed Oct 27, 2021 4:39 am

Re: Reporting divergence between frequentist and Bayesian results

Post by sriparnabiswas »

Hello everyone, I am a newcomer to this group. I also want to know more about reporting divergence between frequentist and Bayesian results, so I did some reading and found some basic material on reporting Bayesian analyses (it appears to be quoted from an article on Bayesian analysis reporting guidelines). I am copying it below:
Abstract

Previous surveys of the literature have shown that reports of statistical analyses often lack important information, causing a lack of transparency and failure of reproducibility. Editors and authors agree that guidelines for reporting should be encouraged. This Review presents a set of Bayesian analysis reporting guidelines (BARG). The BARG encompass the features of previous guidelines, while including many additional details for contemporary Bayesian analyses, with explanations. An extensive example of applying the BARG is presented. The BARG should be useful to researchers, authors, reviewers, editors, educators and students. Utilization, endorsement and promotion of the BARG may improve the quality, transparency and reproducibility of Bayesian analyses.
Main

Statistical analyses can be conceptually elaborate and procedurally complex, and therefore it is easy to skip steps in the execution of the analysis and to leave out important information in reporting the analysis. These problems can result in erroneous or incomplete analyses and in reports that are opaque and not reproducible. Bayesian analyses might be especially prone to these problems because of their relative novelty among applied researchers. The concern is pressing because Bayesian analyses are promoted as having important advantages over traditional frequentist approaches [1] and are being used in increasing numbers of publications in the behavioural sciences [2].

In a review [3] of the reporting of Bayesian analyses for medical devices, using the ROBUST (reporting of Bayes used in clinical studies) checklist [4] for scoring, only 24% of 17 articles fully reported the prior, only 18% reported a sensitivity analysis, only 35% explained the model, and only 59% reported credible intervals. In a review [5] of reporting of mixed-treatment comparisons analysed with Bayesian methods, only 52.9% of 34 articles reported the prior distribution, only 11.8% reported a sensitivity analysis, only 35.3% reported Markov chain Monte Carlo (MCMC) convergence measures, and only 20.6% made their computer code available. In a review [6] of Bayesian meta-analyses of N-of-1 studies, using the ROBUST checklist [4] for scoring, 5 out of 11 reviewed articles scored 7 out of 7 on the ROBUST list, and the remaining 6 articles scored 6 out of 7. In most cases, all that was missing (according to the ROBUST criteria) was a sensitivity analysis. However, only 3 of the 11 articles mentioned convergence diagnostics, no articles mentioned effective sample size (ESS), and only 2 articles made the computer code available. In an extensive review of applied Bayesian analyses [2], 55.6% out of 99 articles did not report the hyperparameters specified for the prior, 56.6% did not report checking for chain convergence, and 87.9% did not conduct a sensitivity analysis on the impact of priors [7]. A review [8] of 70 articles in epidemiologic research using Bayesian analysis found that 2 did not specify a model, 9 did not specify the computational method, 14 did not specify what software was used, 27 did not report credible intervals, 33 did not specify what prior was used, and 66 did not report a sensitivity analysis, leading the authors to conclude that “We think the use of checklists should be encouraged and may ultimately improve the reporting on Bayesian methods and the reproducibility of research results” [8].
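
As a concrete illustration of one item these reviews find missing, below is a minimal Python sketch of a prior sensitivity analysis (my own toy binomial example, not taken from any of the reviewed papers). It recomputes the Bayes factor BF01 for H0: theta = 0.5 under increasingly informative Beta(a, a) priors on theta; the Bayes factor shifts materially with the prior, which is exactly what a sensitivity analysis is meant to expose and report.

[code]
import numpy as np
from scipy.special import betaln, gammaln

def log_binom_coef(n, k):
    # log of the binomial coefficient C(n, k)
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def bf01(k, n, a, b):
    # BF01 for H0: theta = 0.5 versus H1: theta ~ Beta(a, b).
    # Marginal likelihood under H1: C(n, k) * B(k + a, n - k + b) / B(a, b)
    log_m0 = log_binom_coef(n, k) + n * np.log(0.5)
    log_m1 = log_binom_coef(n, k) + betaln(k + a, n - k + b) - betaln(a, b)
    return np.exp(log_m0 - log_m1)

n, k = 10000, 5100  # toy data: 5100 heads in 10000 flips
for a in (1, 2, 5, 50):  # priors concentrating ever more tightly around 0.5
    print(f"Beta({a},{a}) prior: BF01 = {bf01(k, n, a, a):.2f}")
# BF01 falls from roughly 11 towards 1 as the prior piles up on the
# null value, so the stated evidence depends heavily on the prior.
[/code]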

Journal editors and authors agree that reporting guidelines should be encouraged [9]. In a survey of editors and authors [10] regarding the use of the guidelines for transparent reporting of evaluations with nonrandomized designs (TREND) [11], most editors believed that all authors and reviewers should use reporting guidelines. Editors agreed that reporting guidelines need to be promoted by journals and by professional societies. Authors felt that they would be encouraged if their peers used the guidelines. In their findings, the authors recommended [10] future research to demonstrate the efficacy of guidelines, which would also encourage their adoption.