Statistics: Posted by zafrin — Sun Aug 01, 2021 10:44 am


Did you make any progress on this feature? I have a student asking about it.

Best

Deborah.

Statistics: Posted by DeborahA — Fri Jul 09, 2021 12:57 am


I have the same request: how can I display the results of the post-hoc comparisons with the grouping letters added?

I work in the life sciences, and this would be very useful.

Statistics: Posted by maxbrambi — Wed Jul 07, 2021 1:14 pm


https://gamlj.github.io/

cheers

jonathon

Statistics: Posted by jonathon — Thu Jul 01, 2021 3:49 am


There's also another reason not to implement that. Polynomial contrasts are independent, so observing that some order (say linear or quadratic) is not significant tells you nothing about the significance of the next-order contrast. Thus, the only way to know whether a contrast is significant is to test it. Once tested, why not present it to the user?

As regards your example, with 4 time points you need 3 contrasts: linear, quadratic, and cubic. They exhaust the main effect of time. Higher-order contrasts would be redundant, but estimating fewer than 3 would bias the results. So, we estimate 3 contrasts.
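As a sketch (using the standard tabled weights, checked here in Python rather than jamovi), the three orthogonal polynomial contrasts for 4 equally spaced time points are mutually independent:

```python
import numpy as np

# Standard orthogonal polynomial contrast weights for 4 equally
# spaced levels of a factor (e.g. 4 time points).
linear    = np.array([-3, -1,  1,  3])
quadratic = np.array([ 1, -1, -1,  1])
cubic     = np.array([-1,  3, -3,  1])

# The contrasts are mutually orthogonal: every pair has a zero dot
# product, which is why a test of one order says nothing about the
# significance of another.
print(linear @ quadratic)  # 0
print(linear @ cubic)      # 0
print(quadratic @ cubic)   # 0
```

Together the three contrasts use up the 3 degrees of freedom of the time main effect, which is why exactly 3 are estimated.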

Please let me know if I got your question right.

cheers

mc

Statistics: Posted by mcfanda@gmail.com — Wed Jun 30, 2021 6:53 pm


Statistics: Posted by mcfanda@gmail.com — Wed Jun 30, 2021 6:41 pm


What is the best way to set up a mixed model with repeated measures?

I have:

One continuous dependent variable.

Two categorical between-subjects factors.

One categorical within-subjects factor.

Jamovi doesn't appear to have a repeated-measures option built into the Mixed procedures. I found a source online suggesting the use of the individual as a random factor (Intercept | ID), but I don't understand enough at this stage to know whether that would be appropriate.
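For reference, the per-participant random intercept that (Intercept | ID) denotes can be sketched, e.g., in Python with statsmodels; all variable names below are hypothetical, just to illustrate the structure:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up data matching the design: one continuous DV (y), two
# between-subjects factors (A, B), one within-subjects factor (time).
rng = np.random.default_rng(0)
n_subj, n_time = 40, 3
df = pd.DataFrame({
    "ID":   np.repeat(np.arange(n_subj), n_time),
    "A":    np.repeat(rng.choice(["a1", "a2"], n_subj), n_time),
    "B":    np.repeat(rng.choice(["b1", "b2"], n_subj), n_time),
    "time": np.tile([f"t{i}" for i in range(n_time)], n_subj),
})
subj_eff = np.repeat(rng.normal(0, 1, n_subj), n_time)  # per-subject shift
df["y"] = subj_eff + rng.normal(0, 1, len(df))

# groups=df["ID"] gives each participant their own intercept, which is
# what (Intercept | ID) denotes in lme4/GAMLj-style formula syntax.
model = smf.mixedlm("y ~ A * B * time", data=df, groups=df["ID"]).fit()
print(model.params)
```

The repeated measures are handled by the random intercept: observations from the same participant share that intercept, which accounts for their correlation.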

Any suggestions and guidance are welcome.

Statistics: Posted by bencturnbull — Thu Jun 24, 2021 4:46 pm


Hope this makes sense.

Annie

Statistics: Posted by Annie — Wed Jun 23, 2021 2:01 pm


I am trying to compute simple effects and plots for my multilevel logistic regression with moderation. The results always say that simple effects cannot be computed and that I need to refine the model and the covariates conditioning. I have already tried a variety of ways of adjusting my data, especially the covariates conditioning, but it still cannot compute them.

I tried a different data set to check whether the commands I am using are correct; with the other data sets, jamovi can compute the simple effects. I no longer know what is wrong with my model with moderation, or how to refine it to get the simple effects. The models, by the way, did converge: jamovi was able to compute the p values, the log-likelihood, and the AIC.

Please help. Multilevel logistic regression moderation results with odds ratios and coefficients are not enough to get a clear picture of the impact and direction of the moderation unless I have the simple effects.

enrico mendoza

Statistics: Posted by enricocmendoza — Wed Jun 23, 2021 3:06 am


Alternatively, how could I justify moving an item from factor 3 to factor 2, even though exploratory factor analysis recommends placing it in factor 3? The modification index for moving such an item from factor 2 to factor 3 in the confirmatory factor analysis is not the highest, but the third highest. If moved from factor 2, such an item would bring down Cronbach's alpha.

Thank you very much.

Statistics: Posted by a.torelli — Tue Jun 08, 2021 7:52 am


IQR = Q3 - Q1. The interquartile range shows how the data are spread about the median. It is less susceptible to outliers than the range, and can therefore be more helpful.

http://net-informations.com/ds/psa/iqr.htm
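For example, a quick numpy sketch of the formula on made-up data with one outlier:

```python
import numpy as np

# Made-up data: nine small values plus one outlier.
data = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 100])

q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1

print(iqr)                      # 4.5
print(data.max() - data.min())  # 99: the range is blown up by the outlier
```

The single outlier inflates the range to 99, while the IQR stays at 4.5, illustrating why the IQR is the more robust spread measure.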

Statistics: Posted by elonjigar — Mon Jun 07, 2021 6:43 am


you may find the following paper helpful:

http://sro.sussex.ac.uk/id/eprint/68206/3/Field%20%26%20Wilcox%20%282016%29%20robust%20estimation%202017.05.08%20%5Brevision%5D%20%281%29.pdf

cheers

jonathon

Statistics: Posted by jonathon — Fri Jun 04, 2021 6:30 am


If anyone can help explain this to me in simple language, it'll be much appreciated.

Thank you very much!

By the way, the following file is not comprehensible to me because I don't know R.

https://cran.r-project.org/web/packages/walrus/walrus.pdf

Statistics: Posted by xiaoli.yu — Fri Jun 04, 2021 6:21 am


No, you don't need to do a log transformation before a PCA.

There are several rationales for transforming variables. One is highly skewed data. The normal distribution is symmetric, and one good and under-used measure of symmetry is the skewness statistic; the reference value for the normal distribution is sk = 0. Take a look at the skewness coefficients (sk) of your variables. Monte Carlo studies suggest that variables where |sk| < 2 can be treated "as if" they were normal. Variables with positive skew, that is, where sk > 2, can be transformed with a log transformation. (It doesn't matter whether log base 10 or base e is used.) The log transformation will reduce the skewness, typically to below 2. There is nothing special about 2; it is a rule of thumb or guideline rather than a benchmark.
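A minimal sketch of that rule of thumb, on made-up positively skewed (log-normal) data, assuming Python with numpy and scipy:

```python
import numpy as np
from scipy.stats import skew

# Made-up, strongly right-skewed data.
rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.5, size=5000)

print(skew(x))          # well above 2: a log transform is indicated
print(skew(np.log(x)))  # close to 0: roughly symmetric after transforming
```

After the log transform the data are approximately normal, so |sk| falls comfortably inside the < 2 guideline.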

Statistics: Posted by DavoFromDapto — Wed Jun 02, 2021 5:04 am
