I’d like to suggest a possible improvement for jamovi and would be interested to hear what others think.
When performing paired or independent t-tests, Mann–Whitney U tests, Wilcoxon signed-rank tests, or Kruskal–Wallis tests (especially when running multiple comparisons), it would be helpful to have a built-in option to adjust p-values, ideally via a simple checkbox in the analysis settings for those tests. Widely used methods such as Bonferroni or Holm correction would be sufficient.
I believe this feature would make it easier to apply proper multiple testing correction directly within jamovi, without having to manually adjust results afterwards.
What do others think? Would this be useful in your workflows too?
Best regards,
vinschger
Adjusted p-values for multiple comparisons
Re: Adjusted p-values for multiple comparisons
We're reluctant to add more corrections to the t-test itself, simply because it's not an often-used feature there, and we try not to overwhelm the user with options.
Longer term, I could imagine us adding the ability to save p-values to a column in a data set, where a subsequent analysis could correct them.
But I don't have a good solution for you in the short-term I'm sorry.
Re: Adjusted p-values for multiple comparisons
Dear Jonathon,
Thank you very much for your reply; I really appreciate your taking the time to explain the considerations. I completely understand the desire to avoid overwhelming users with too many options, and I see how keeping the interface clean and accessible is an important goal for jamovi.

That said, as someone who is not a statistician, I rely very much on guidance from experienced people like yourself, so I hope you don't mind a follow-up question out of genuine curiosity: is it really the case that p-value adjustments are not commonly used with these kinds of tests? I was under the impression, perhaps incorrectly, that p-values should be corrected routinely when making multiple comparisons, to avoid inflated Type I error rates. I was always taught that some kind of correction (e.g., Bonferroni or Holm) is necessary whenever multiple hypotheses are tested. But maybe this is more nuanced than I thought?

Thanks again for your thoughtful answer and for all your work on jamovi!
Best regards,
vinschger