Have a nice day.

Statistics: Posted by RHainez — Mon Nov 11, 2019 5:21 pm

Thanks for the help :)

Statistics: Posted by Gintare84 — Mon Nov 11, 2019 10:52 am

Because there are tests of multivariate normality available, I'd suggest implementing one of those (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3927875/). The multivariate tests can be applied to the residuals of the contrasts. Which contrast scheme one uses is immaterial, because if multivariate normality holds for one linear combination of the original variates, it holds for any linear combination of them.

Consider, however, that the contrasts approach may get a bit complex as the ANOVA design expands. A simpler solution would be to test the multivariate normality of the residuals, computed as the original dependent variates minus their means (which are the estimated marginal means of the model).
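
A minimal sketch of this idea in Python (the packages discussed in the thread are R, but the computation is the same): Mardia's multivariate skewness and kurtosis tests, from the review linked above, applied to an n × p matrix of residuals. The function name `mardia_test` is my own, not part of any package.

```python
import numpy as np
from scipy import stats

def mardia_test(X):
    """Mardia's multivariate skewness and kurtosis tests.

    X: (n, p) array of residuals (rows = cases, columns = variates).
    Returns (skew_stat, skew_p, kurt_z, kurt_p).
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    Xc = X - X.mean(axis=0)                # centre each variate
    S = Xc.T @ Xc / n                      # biased covariance (Mardia's convention)
    D = Xc @ np.linalg.solve(S, Xc.T)      # Mahalanobis cross-products
    b1 = (D ** 3).sum() / n**2             # multivariate skewness
    b2 = (np.diag(D) ** 2).sum() / n       # multivariate kurtosis
    skew_stat = n * b1 / 6                 # ~ chi-square under normality
    skew_df = p * (p + 1) * (p + 2) / 6
    skew_p = stats.chi2.sf(skew_stat, skew_df)
    kurt_z = (b2 - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / n)
    kurt_p = 2 * stats.norm.sf(abs(kurt_z))
    return skew_stat, skew_p, kurt_z, kurt_p
```

Here `X` would be the residual matrix (dependent variates minus their estimated marginal means, or contrast scores minus their fitted values).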

Statistics: Posted by mcfanda@gmail.com — Mon Nov 04, 2019 9:45 am

https://github.com/jamovi/jamovi/issues

jonathon

Statistics: Posted by jonathon — Sat Nov 02, 2019 5:06 am

Statistics: Posted by risakd — Sat Nov 02, 2019 5:05 am

In my investigations of statistical analysis for my books, it is amazing how many different terms are used for the same phenomena. Given the antipathy, or maybe just indifference, between mathematically focused statisticians and applied statistics users, I can't see this changing anytime soon.

Statistics: Posted by coledavis — Fri Nov 01, 2019 2:14 pm

I really feel like the stats community has done a really poor job of empowering consumers of statistics (exhibit A: jamovi is being written by psychologists!).

anyway, that's my rant.

jonathon

Statistics: Posted by jonathon — Thu Oct 31, 2019 11:51 pm

I think that the correct answer is to allow the user to ask for normality tests and Q-Q plots, which are then computed separately for each residual term.

For the between-subjects effects, the residuals are based on the independent ANOVA of the between-subjects IVs and covariates on the mean of the repeated columns.

For the within-subjects effects, I think the residuals should be based on the independent ANOVA of the between-subjects IVs and covariates on a set of orthogonal contrasts of the repeated columns (i.e. L1-L2 for two levels; L1-MEAN(L2,L3) and L2-L3 for three levels; L1-MEAN(L2,L3,L4), L2-MEAN(L3,L4) and L3-L4 for four levels, and so on, where L1 is the reference level). Ideally this would be a single test/Q-Q plot per residual term, so that the user isn't faced with multiple plots/tests for a single normality assumption.
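
The contrast scheme described here is Helmert-style (each level against the mean of the subsequent levels), and can be built mechanically for any number of levels. A hypothetical sketch in Python; the function name is mine:

```python
import numpy as np

def helmert_contrasts(k):
    """(k-1) x k contrast matrix: rows are L1-MEAN(L2..Lk),
    L2-MEAN(L3..Lk), ..., L(k-1)-Lk, with L1 as the reference level."""
    C = np.zeros((k - 1, k))
    for i in range(k - 1):
        C[i, i] = 1.0                      # the level being compared
        C[i, i + 1:] = -1.0 / (k - 1 - i)  # mean of the remaining levels
    return C

# Given an (n_subjects, k) matrix Y of repeated measures,
# scores = Y @ helmert_contrasts(k).T yields one column per contrast,
# ready for the residual analysis described above.
```

For k = 3 this reproduces exactly the L1-MEAN(L2,L3) and L2-L3 contrasts from the post, and the rows are mutually orthogonal.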

It may be that this procedure is flawed, for example if the selection of which contrasts get used is critical. However, I'm sure it is better than the SPSS residuals (which are demonstrably wrong, since they don't simplify to the paired t-test residuals for a single repeated-measures IV with two levels).

This is the closest I've found on the subject so far: https://psych.wisc.edu/Brauer/BrauerLab/wp-content/uploads/2014/04/Murrar-Brauer-2018-MM-ANOVA.pdf

Cheers,

Wake

Statistics: Posted by Wake — Thu Oct 31, 2019 11:33 pm

jonathon

Statistics: Posted by jonathon — Thu Oct 31, 2019 11:10 pm

My understanding is that the SPSS method of saving and testing a residual for each level of the repeated measures variable(s) is incorrect. However, at the moment jamovi doesn't offer an alternative.

I've just written slides recommending that our students compute an average and a difference of the repeated measures, i.e. MEAN(L1,L2,L3) and L1-MEAN(L2,L3), and then run one-way ANOVAs or t-tests on those scores in order to run normality tests. For the difference score I feel I should add a second orthogonal contrast (e.g. L2-L3), but equally, since there is only one within-subjects residual term, surely there is a single set of residuals? The degrees of freedom for the residuals suggest levels-1 residuals per participant, but does it matter which orthogonal contrasts are chosen? Of course, ideally I would like to test the normality of the two difference columns simultaneously without having to copy and paste them into extra rows -- which is beyond what I'm happy to ask my students to do.
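
For illustration, the average and the two orthogonal difference scores described above can be built and checked in a few lines. This is a sketch with made-up data (in practice students would use computed variables in jamovi); the univariate check here is a Shapiro-Wilk test per score:

```python
import numpy as np
from scipy import stats

# Hypothetical repeated-measures data: rows = participants, columns = L1, L2, L3.
rng = np.random.default_rng(42)
Y = rng.normal(loc=[10.0, 11.0, 12.0], scale=2.0, size=(40, 3))

avg   = Y.mean(axis=1)                   # MEAN(L1, L2, L3)
diff1 = Y[:, 0] - Y[:, 1:].mean(axis=1)  # L1 - MEAN(L2, L3)
diff2 = Y[:, 1] - Y[:, 2]                # L2 - L3 (second orthogonal contrast)

for name, scores in [("average", avg), ("L1-MEAN(L2,L3)", diff1), ("L2-L3", diff2)]:
    W, p = stats.shapiro(scores)
    print(f"{name}: W = {W:.3f}, p = {p:.3f}")
```

Testing the two difference columns jointly, rather than one at a time, would need a multivariate normality test as suggested earlier in the thread.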

Best wishes,

Wakefield

Statistics: Posted by Wake — Thu Oct 31, 2019 11:07 pm

Best wishes,

Wakefield

Statistics: Posted by Wake — Thu Oct 31, 2019 10:52 pm

The best option would be to use a non-parametric effect size like Vargha & Delaney's Â measure of dominance. It is implemented in the RProbSup R package, which will also give you the confidence interval around the effect size. I'd advise reading this article: https://www.researchgate.net/profile/Jo ... esigns.pdf

Peng, C.-Y. J., & Chen, L.-T. (2014). Beyond Cohen's d: Alternative Effect Size Measures for Between-Subject Designs. The Journal of Experimental Education, 82(1), 22-50. doi:10.1080/00220973.2012.745471

"[Â] estimate[s] the degree of one distribution dominating over the other distribution" (p.40)

"Of the nine estimators summarized in Table 2b, we recommend the four estimators of dominance in Category (B) to supplement Cohen’s d to conceptualize ES beyond mean differences. Of these four estimators, Vargha and Delaney’s Â stands out for its meaningful interpretability in terms of stochastic equality/superiority or stochastic homogeneity/heterogeneity in a variety of research contexts and for a variety of data types. Compared to Cohen’s d, Vargha and Delaney’s Â represents a radical reconceptualization of ES with sound statistical properties and well developed theoretical framework." (p.45)
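For reference, the point estimate of Â is simple to compute from first principles: it is the proportion of pairs in which the first group exceeds the second, counting ties as half. A sketch in Python (the function name is mine; RProbSup additionally provides the confidence intervals, which this does not):

```python
import numpy as np

def vd_a(x, y):
    """Vargha & Delaney's A-hat: P(X > Y) + 0.5 * P(X == Y).

    0.5 indicates stochastic equality; values toward 1 (or 0) indicate
    that x (or y) tends to dominate.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    greater = (x[:, None] > y[None, :]).sum()   # pairs where x wins
    ties = (x[:, None] == y[None, :]).sum()     # tied pairs count half
    return (greater + 0.5 * ties) / (len(x) * len(y))
```

For identical groups this returns exactly 0.5, and for completely separated groups 1.0 or 0.0.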

It would be great if jamovi could provide the Â effect size.

Statistics: Posted by Mik — Tue Oct 29, 2019 3:38 am
