God bless always.

Statistics: Posted by enricocmendoza — Fri Nov 27, 2020 1:14 am


jonathon

Statistics: Posted by jonathon — Thu Nov 26, 2020 11:48 pm


I think I see what you are getting at, although I have two queries. First, the main problem I have is that a binomial logistic regression is a generalised linear model with a logit link function and binomial errors, so if I run a GLZM on 0,1 data surely it should give the same result as what is termed here a binomial logistic regression on the same 0,1 data? (For example, you can call effectively the same model in SPSS via two routes, either from the logistic regression menu or from the generalised linear models menu.)

Second, since the effects are being tested with (omnibus) LR tests, I am not sure how the explanation with respect to the reference category works. I can see that if we were looking at coefficients relative to a reference category, with or without an intercept fitted, then differences in how the reference category is set up (albeit somewhat hidden under the bonnet) could lead to different results, but I would not expect this for LR tests of effects.

I hope that is not too garbled.

Statistics: Posted by DavidMShuker — Wed Nov 25, 2020 3:06 pm


That applies only to main effects; interactions (the highest-order interactions in the models) are not affected by the factor coding.

Statistics: Posted by mcfanda@gmail.com — Wed Nov 25, 2020 2:04 pm


I'd add, however, that simply looking at the ICC of the null model is not necessarily a good strategy for disregarding a mixed model when the ICC is zero (or very small). You can, in fact, have models with large slope variances and no intercept variance. A better strategy is to estimate the model with all plausible random effects and check whether any of them has variance greater than zero.
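
A quick simulation illustrates the point. This is a hedged numpy sketch (the group sizes, variances, and the crude ICC computation are illustrative assumptions): the null-model ICC is near zero even though the random slopes have substantial variance.

```python
# Hedged sketch: data with essentially no random-intercept variance but
# large random-slope variance. The null-model ICC (between-group share
# of variance, ignoring x) is near zero, yet a random slope is clearly needed.
import numpy as np

rng = np.random.default_rng(42)
n_groups, n_per = 30, 50
slopes = rng.normal(0.0, 2.0, n_groups)                   # large slope variance, no intercept variance
x = rng.normal(size=(n_groups, n_per))                    # centred within-group predictor
y = slopes[:, None] * x + rng.normal(0.0, 1.0, (n_groups, n_per))

# crude null-model ICC: variance of group means over total variance
group_means = y.mean(axis=1)
icc = group_means.var() / y.var()

print(round(icc, 3))          # near zero
print(slopes.var() > 0.5)     # yet the slopes vary a lot
```

Judging the need for random effects from this ICC alone would wrongly suggest the grouping can be ignored.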

Statistics: Posted by mcfanda@gmail.com — Wed Nov 25, 2020 1:29 pm


we use the wilcox.test() function from R to calculate this value, but i'm not sure what it represents. here's the R documentation:

https://stat.ethz.ch/R-manual/R-devel/library/stats/html/wilcox.test.html

although it doesn't seem to shed much light either.
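
for what it's worth, the linked documentation does say that (when conf.int = TRUE) the two-sample "estimate" is the median of the differences between a sample from x and a sample from y, i.e. the Hodges-Lehmann estimator. whether that's the value shown as the mean difference here is an assumption, but it's easy to compute for comparison — a minimal python sketch:

```python
# Hedged sketch: the Hodges-Lehmann estimator, which wilcox.test()
# reports as its location estimate in the two-sample case. Whether this
# matches the value in question is an assumption to be checked.
import statistics

def hodges_lehmann(x, y):
    """Median of all pairwise differences x_i - y_j."""
    return statistics.median(xi - yj for xi in x for yj in y)

print(hodges_lehmann([1, 2, 3], [0]))      # → 2
print(hodges_lehmann([10, 20], [1, 2]))    # → 13.5
```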

jonathon

Statistics: Posted by jonathon — Wed Nov 18, 2020 8:38 am


Stay blessed.

Statistics: Posted by GAS — Wed Nov 18, 2020 8:34 am


you'll probably find you get better mileage asking questions of the form:

"why doesn't this work? (with an example)", rather than saying "the procedure doesn't work"

jonathon

Statistics: Posted by jonathon — Tue Nov 17, 2020 9:07 pm


i am wondering how the mean difference is calculated for the Mann-Whitney U test.

It is not the difference between means, which is okay, since the comparison is not based on means. It is also not the difference between medians (also okay since, strictly speaking, Mann-Whitney compares mean ranks, not medians). However, the number is not the difference between mean ranks either. I have no clue what I am seeing.

Can anyone help me with this? Am I missing something obvious?

Emese

Statistics: Posted by hallgatoemese — Tue Nov 17, 2020 7:42 pm


GAS wrote:

Hiya, do you have any other procedure for doing sum variables? eg WSS Sum(WSS1,WSS2,WSS3,WSS4).

Hi, @GAS.

If you need a new variable containing the sum of the case values (rows) for some variables of interest, go to Compute variable and use the SUM() function, as in the screenshot.

Screenshot_SUM.PNG
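
Outside jamovi, the same row-wise sum can be sketched with pandas (the column names WSS1..WSS4 are taken from the question; the values are made up):

```python
# Hedged sketch: row-wise sum of several columns, mirroring
# jamovi's SUM(WSS1, WSS2, WSS3, WSS4) computed variable.
import pandas as pd

df = pd.DataFrame({
    "WSS1": [1, 2], "WSS2": [3, 4], "WSS3": [5, 6], "WSS4": [7, 8],
})
df["WSS"] = df[["WSS1", "WSS2", "WSS3", "WSS4"]].sum(axis=1)

print(df["WSS"].tolist())  # → [16, 20]
```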

Cheers,

Maurizio

Statistics: Posted by MAgojam — Tue Nov 17, 2020 7:20 pm


Thanks.

Statistics: Posted by GAS — Tue Nov 17, 2020 11:24 am


GAS

Statistics: Posted by GAS — Mon Nov 16, 2020 1:50 pm


https://blog.jamovi.org/2017/11/28/jamovi-formulas.html

cheers

jonathon

Statistics: Posted by jonathon — Mon Nov 16, 2020 10:58 am
