
snfraser wrote:

Ravi,

You can use the userfriendlyscience package in R to calculate Likert Omega and Alpha reliability.

I have a description, links, and example here: http://shawnsstats.blogspot.ca/2016/11/ ... omega.html

-shawn


Thanks for the link!

Statistics: Posted by Raymond89 — Tue Nov 14, 2017 12:20 pm


And after reading the link you provided, it seems that one solution for avoiding post-hoc analysis after a chi-square test "is to replace chisquare testing with log-linear analysis" (p. 8).

So one lead for solving this problem might be to use the glm() function?

It's just a suggestion and I'll let the stats wizards debate it, but a solution or a module (anyone interested?) would be nice, so that mere mortals are not stuck with a significant difference (when there are more than two categorical variables) and no way to locate it in the data.
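For what it's worth, a minimal sketch (my own, not an official jamovi feature) of the log-linear route via glm(), reusing the counts from the chisq.test() example elsewhere in this thread; the likelihood-ratio test of the a:b interaction plays the role of the chi-square test of association:

```r
# Log-linear analysis of a two-way contingency table with glm()
counts <- c(212, 29, 11,  2,  3,
            318, 61,  6, 11, 13,
            160, 39,  9,  6, 12)
d <- data.frame(
  count = counts,
  a = factor(rep(c("a1", "a2", "a3"), each = 5)),
  b = factor(rep(c("b1", "b2", "b3", "b4", "b5"), times = 3))
)

# Independence model: no a:b interaction
fit0 <- glm(count ~ a + b, family = poisson, data = d)
# Saturated model: includes the interaction
fit1 <- glm(count ~ a * b, family = poisson, data = d)

# Likelihood-ratio test of the interaction
anova(fit0, fit1, test = "Chisq")
```

Individual interaction coefficients (summary(fit1)) can then point at which cells drive the association.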

And keep up the good work!

Statistics: Posted by RHainez — Fri Jun 16, 2017 5:20 pm


Statistics: Posted by RHainez — Fri May 26, 2017 3:20 pm


http://pareonline.net/getvn.asp?v=20&n=8

jonathon

Statistics: Posted by jonathon — Fri May 26, 2017 12:56 pm


After a little research, maybe a simpler way to implement a post-hoc test for a chi-square test would be to use the existing chisq.test() from the stats package, which gives the chi-square value, its significance level, the Pearson residuals (Xsq$residuals) and the standardized (Haberman) residuals (Xsq$stdres)?

Example:

> M <- as.table(rbind(c(212,29,11,2,3), c(318,61,6,11,13), c(160,39,9,6,12)))
> rownames(M) <- c("a1","a2","a3")
> colnames(M) <- c("b1","b2","b3","b4","b5")
> M
    b1  b2  b3  b4  b5
a1 212  29  11   2   3
a2 318  61   6  11  13
a3 160  39   9   6  12
> Xsq <- chisq.test(M)
Warning message:
In chisq.test(M) : Chi-squared approximation may be incorrect
> Xsq

        Pearson's Chi-squared test

data:  M
X-squared = 20.3583, df = 8, p-value = 0.009062

> Xsq$residuals
           b1          b2          b3          b4          b5
a1  0.93616090 -1.33963261  1.28206096 -1.48489514 -1.78406400
a2  0.09113804  0.24066234 -1.71501389  0.77521492  0.04505463
a3 -1.12090876  1.10480426  0.93998072  0.54059524  1.84188135
> Xsq$stdres
           b1          b2          b3          b4          b5
a1  2.33159333 -1.71672778  1.54215391 -1.77896194 -2.14848117
a2  0.26026470  0.35362027 -2.36537490  1.06489378  0.06221195
a3 -2.72597782  1.38245439  1.10404745  0.63240142  2.16587067

The chi-square test is significant at the 1% level, so every value of Xsq$stdres outside [-2.33; +2.33] flags a significant difference. So here, the significant differences are found for (a1, b1), (a2, b3) and (a3, b1).
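As a side note on where the ±2.33 bound comes from: it is the standard normal quantile for a one-tailed 1% level, which base R reproduces directly (a stricter two-sided criterion would instead use qnorm(1 - 0.01/2) ≈ 2.58):

```r
# 1% one-tailed cutoff for a standard normal deviate
round(qnorm(1 - 0.01), 2)  # 2.33
```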

The example comes from http://www.normalesup.org/~carpenti/Not ... esidus.pdf (written in French, sorry), and the 2.33 limit comes from http://www1.udel.edu/FREC/ilvento/FREC408/normhand

I'm not a chi-square specialist at all, so it's just a suggestion. If a stats wizard would be kind enough to step in, their advice would be much appreciated.

Have a nice day.

Statistics: Posted by RHainez — Fri May 26, 2017 12:33 pm


You can use the userfriendlyscience package in R to calculate Likert Omega and Alpha reliability.

I have a description, links, and example here: http://shawnsstats.blogspot.ca/2016/11/ ... omega.html

-shawn

Statistics: Posted by snfraser — Thu May 25, 2017 7:52 pm


Statistics: Posted by Ravi — Mon May 22, 2017 3:24 pm


this is a good idea. i will try and do this in the next fortnight. you've put in so many good feature requests, and i feel like we haven't got to any of them. anyway, i *will* do this in the next fortnight, so kick up a stink if i don't

with thanks

Statistics: Posted by jonathon — Mon May 22, 2017 9:50 am


T1 23 AB

T3 19 B

T4 5 C

The ranking is designated by letters; I refer to this as "lettering". Would it be possible to produce the results of the Tukey test in such a format?
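In the meantime, a minimal base-R sketch of the underlying Tukey test (the data and treatment names below are made up purely for illustration); the compact letter display itself comes from add-on packages such as agricolae (HSD.test with group = TRUE) or multcomp (cld), not from base R:

```r
# Illustrative data: three treatments with different means
set.seed(1)
d <- data.frame(
  y   = c(rnorm(10, 23), rnorm(10, 19), rnorm(10, 5)),
  trt = factor(rep(c("T1", "T3", "T4"), each = 10))
)

fit <- aov(y ~ trt, data = d)
TukeyHSD(fit)  # pairwise differences with adjusted p-values
```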

Statistics: Posted by Shahid — Sat May 20, 2017 6:41 am


A chi-square test of association can tell us whether there is a significant difference between groups.

Would it be possible to add a post-hoc test to identify which group differs from the others (when the table is larger than 2x2)? Maybe pairwise.prop.test() or something like that?
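As a sketch of what that could look like with base R's pairwise.prop.test() (the counts below are made up):

```r
# Hypothetical data: successes and trials for three groups
x <- c(120, 90, 60)    # successes per group
n <- c(200, 200, 200)  # trials per group

# Pairwise comparisons of proportions with Bonferroni-adjusted p-values
pairwise.prop.test(x, n, p.adjust.method = "bonferroni")
```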

Have a nice day.

Romaric

Statistics: Posted by RHainez — Thu May 18, 2017 7:09 am


As a general suggestion - it would be really helpful to record the underlying R syntax for each procedure somewhere in the documentation. It's great that novices can quickly and easily use jamovi without too much expertise, but it would also be very nice to let curious users understand what is happening behind the scenes.

Statistics: Posted by vincep — Wed May 17, 2017 5:58 am


More specific answers:

1. jamovi reports omegaT

2. It's not possible yet to calculate omega using polychoric correlations in jamovi, I'll look into this a bit more. Generally, you can assume that Likert scale data behave fine for these analyses when there are 5 or more categories (See https://pdfs.semanticscholar.org/3833/3 ... 0524cb.pdf for a simulation study).

3. You can see the specific code we use over here: https://github.com/raviselker/Rjamovi/b ... .R#L43-L48. Basically, we calculate the omegaT without including the specific item.

4. We assume 1 factor. See the paper I linked to at the start of this post for more information.

5. Thanks for reporting this. Could you also report this on github (https://github.com/jamovi/jamovi/issues)? This way you'll get an update once this is fixed + we can see clearly which bugs we still need to fix.
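To illustrate points 1 and 3 above: a rough base-R sketch of omega-total under a single-factor model, recomputed with each item dropped. This is my own illustration with simulated data, not the actual jamovi code (which is linked in point 3); the formula used here only applies under the one-factor assumption from point 4.

```r
# Simulate 5 items loading on one common factor
set.seed(42)
f <- rnorm(300)
items <- sapply(1:5, function(i) 0.7 * f + rnorm(300, sd = 0.7))
colnames(items) <- paste0("item", 1:5)

# Omega-total for a 1-factor model:
# (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)
omega_total <- function(x) {
  fa <- factanal(x, factors = 1)
  l  <- fa$loadings[, 1]
  sum(l)^2 / (sum(l)^2 + sum(fa$uniquenesses))
}

omega_total(items)                               # omega for the full scale
sapply(1:5, function(i) omega_total(items[, -i]))  # omega if item dropped
```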

Cheers

Statistics: Posted by Ravi — Mon May 15, 2017 3:49 pm


http://www.personality-project.org/r/book/Chapter7.pdf

http://personality-project.org/r/psych/HowTo/factor.pdf

Here's my questions:

1. It seems like there are two omega values - omegaH and omegaT. Which does jamovi report?

2. Is it possible to calculate omega reliabilities using polychoric correlations (i.e. the equivalent of setting poly = TRUE in the omega function in R)? Is this necessary for working out reliabilities on data from Likert scales?

3. What is the equivalent R procedure (without using jmv) to produce a list of 'if item dropped' omega values?

4. The Revelle references explain omega calculations as involving an EFA plus some other steps. It seems like the default option for the psych omega function is to calculate an EFA with 3 factors. Is that also what jamovi does? If so, is that going to be OK for all datasets? Are there any situations where we would need to specify a different number of factors?

5. A slight bug: From the Reliability Analysis screen, if I clear all variables out of the 'items' selection box, this does not automatically clear items from the 'Reverse Scaled Items' box and causes an error.

Thanks for any clarity on these points.

Statistics: Posted by vincep — Mon May 15, 2017 2:49 pm
