Request: Confidence intervals for all effects

Discuss the jamovi platform, possible improvements, etc.

by filination » Fri Nov 09, 2018 5:08 am

Dear JAMOVI,


Amazing job — we're now using JAMOVI in all our classes.

An important request: please add a quick toggle box to report confidence intervals for effect sizes in the tables/outputs, not just the mean-difference CIs. This is now a required statistic in most psychology journals, and students really struggle with R code and the available online tools, and with having to copy-paste from JAMOVI elsewhere.

We would also appreciate effect sizes and CIs for the measures required for power analyses in G*Power, like w for chi-square and f for ANOVA, etc.

[I noticed JASP does this for t-tests/ANOVAs, if you need an example of what that might look like, but we much prefer JAMOVI due to its modules and R syntax support.]
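To illustrate what such a toggle would compute, here is a minimal sketch of a CI for an independent-samples Cohen's d via the noncentral t distribution — the standard "pivot" approach. This is Python/SciPy rather than jamovi's own R internals, and the function name is purely illustrative:

```python
# Sketch: 95% CI for independent-samples Cohen's d via the noncentral t
# distribution (the pivot method). Illustrative only, not jamovi code.
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def cohens_d_ci(x, y, conf=0.95):
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    df = nx + ny - 2
    # pooled SD and the point estimate of d
    sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / df)
    d = (x.mean() - y.mean()) / sp
    scale = np.sqrt(1 / nx + 1 / ny)      # d = t * scale
    t_obs = d / scale
    alpha = 1 - conf

    def ncp_at(prob):
        # noncentrality delta with P(T_{df,delta} <= t_obs) == prob;
        # the bracket +/-50 is adequate for modest observed t values
        return brentq(lambda nc: stats.nct.cdf(t_obs, df, nc) - prob,
                      t_obs - 50, t_obs + 50)

    lo = ncp_at(1 - alpha / 2) * scale    # lower bound on d
    hi = ncp_at(alpha / 2) * scale        # upper bound on d
    return d, lo, hi

d, lo, hi = cohens_d_ci([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
```

The CI is asymmetric around d in general, which is exactly why a ready-made toggle beats hand calculation.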

Thank you,
Gilad
filination
 
Posts: 5
Joined: Fri May 12, 2017 5:18 am

by jonathon » Fri Nov 09, 2018 5:20 am

hi,

we haven't been planning on adding CIs for effect-sizes, as i thought that was "the new statistics" approach (and those guys are writing their own module for jamovi) - but i hadn't heard most psych journals require that. could you substantiate this a bit more?

if we're missing some key effect-sizes, that seems like an oversight. i'll see what ravi thinks.

with thanks

jonathon
jonathon
 
Posts: 1115
Joined: Fri Jan 27, 2017 10:04 am

by filination » Wed Dec 05, 2018 10:12 am

Thanks for the answer. Somehow I didn't get notified that it was posted.

Maybe I'm missing something, but effect size CIs seem far more important to me than raw CIs. CIs on raw means do not allow easy interpretation (or meta-analysis and comparison between articles using different measures/designs).
At the very least I would implement this for Cohen's d (paired and independent) and for correlations. Ideally, add the rest (everything G*Power uses to estimate required sample sizes).
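For the correlation case mentioned here, the classic recipe is the Fisher z transform. A short sketch under that assumption (Python/SciPy; the function name is illustrative, not anything jamovi ships):

```python
# Sketch: approximate CI for a Pearson correlation via the Fisher z
# transform (reasonable for n not too small). Illustrative code only.
import numpy as np
from scipy import stats

def pearson_r_ci(x, y, conf=0.95):
    r, _ = stats.pearsonr(x, y)
    n = len(x)
    z = np.arctanh(r)                      # Fisher z transform of r
    se = 1.0 / np.sqrt(n - 3)              # approximate SE of z
    zcrit = stats.norm.ppf(1 - (1 - conf) / 2)
    # transform the symmetric z interval back to the r scale
    lo, hi = np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)
    return r, lo, hi

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [2, 4, 6, 8, 10, 12, 14, 16, 18, 19]
r, lo, hi = pearson_r_ci(x, y)
```

Because the interval is built on the z scale and transformed back, it respects the [-1, 1] bounds and is asymmetric for strong correlations.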

This is definitely the "new stats" trend, such as in Psychological Science. https://www.apa.org/education/ce/confid ... ervals.pdf

We try and train for this here:
https://docs.google.com/document/d/11CA ... n4iptrlblx
and calculating effect size, which is still a mess:
https://docs.google.com/document/d/11CA ... 14z4p2a9cj

But JAMOVI can make all of that so much easier.

This is an email example I received from the PsycSci editor just after acceptance:
[...] please add the 95% CI around the meta-analytic estimate of d in the abstract, and also add 95% CIs for the individual estimates of Cohen's d throughout the manuscript. If any of those are repeated measures, you may wish to consult http://web.uvic.ca/~dslind/?q=node/197 . [...] we are encouraging authors to report an indicator of the degree of precision of their estimates of effect size.


Others:
https://www.elsevier.com/journals/journ ... or-authors
https://onlinelibrary.wiley.com/page/jo ... thors.html
https://www.springer.com/psychology?SGW ... -1390050-0
https://jamanetwork.com/journals/jamaot ... le/2653021

Many thanks,
Gilad
filination
 
Posts: 5
Joined: Fri May 12, 2017 5:18 am

by Vit » Thu Mar 26, 2020 9:45 am

I strongly support this request. CIs for effect sizes are a critical feature for interpreting and reporting results. I acknowledge there is now the UFS module, but it is not enough.

I don't want to jump between jamovi and JASP to get my test results reported fully :weary: ...
Vit
 
Posts: 11
Joined: Fri Apr 19, 2019 11:51 am

by jonathon » Thu Mar 26, 2020 9:56 am

yeah, this kills me too. we've just had our hands full with other stuff. if anyone out there has some R skills, this won't be that difficult to add.

jonathon
jonathon
 
Posts: 1115
Joined: Fri Jan 27, 2017 10:04 am

by Vit » Wed Apr 01, 2020 1:17 pm

Thinking about all this, I have another idea, and I am not sure where else to put it.

It would be nice if there were an option to run a power analysis on the test you just ran. I don't mean testing at the exact p value obtained, i.e. the nonsensical "post-hoc" power, but setting the general alpha levels you are interested in (0.1, 0.05, 0.01, etc.); since your analysis already knows the sample size and the observed effects, it could simply report the Beta levels (power) achieved. For more complex models, it could report power for the omnibus test and for specified main effects and interactions.

The current Power analysis module is a wonderful piece, especially for education. Still, one often uses tests other than the t-test family, and though this approach to power isn't ideal, it may be better than nothing, and once again it can raise awareness of what the results are telling us, with what precision and at what Type 2 error rates.

Otherwise one still needs to wrestle with various effect size converters and G*Power. With this "after-the-fact" power analysis, the data and analysis simulation capabilities of Jamovi can be put to even better use.
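For the two-sample t case, the idea described above reduces to plugging the observed effect size and the known group sizes into the noncentral t power formula at each chosen alpha. A sketch under those assumptions (Python/SciPy; function name illustrative, not an existing jamovi feature):

```python
# Sketch of the "power after the fact" idea: given the observed effect
# size d and the group sizes the analysis already knows, report achieved
# power at a grid of alpha levels for a two-sided independent t-test.
# This fixes alpha in advance; it is NOT p-value-based "post-hoc power".
import numpy as np
from scipy import stats

def achieved_power_ttest(d, n1, n2, alphas=(0.10, 0.05, 0.01)):
    df = n1 + n2 - 2
    ncp = d * np.sqrt(n1 * n2 / (n1 + n2))   # noncentrality for observed d
    out = {}
    for a in alphas:
        tcrit = stats.t.ppf(1 - a / 2, df)
        # P(reject) under the noncentral alternative, both tails
        out[a] = (1 - stats.nct.cdf(tcrit, df, ncp)
                  + stats.nct.cdf(-tcrit, df, ncp))
    return out

# classic benchmark: d = 0.5 with 64 per group gives roughly 80% power
# at alpha = .05 (Beta roughly .20)
powers = achieved_power_ttest(d=0.5, n1=64, n2=64)
```

Reporting Beta is just `1 - power` for each entry; extending the same pattern to F tests would use the noncentral F distribution instead.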

It is just an idea, maybe a bad one.
Vit
 
Posts: 11
Joined: Fri Apr 19, 2019 11:51 am

