GAMLJ only reports z scores (no longer t scores)

Everything related to the development of modules in jamovi

by mihaidricu » Fri Mar 12, 2021 3:55 pm

Hello!

I am using GAMLJ and the latest jamovi version (1.6.16) for Windows to run linear mixed models.
I have noticed that the t scores have been replaced by z scores in both the Simple Effects and the Posthoc tests outputs.

Was there a reason for this? Strangely, the z scores only appear for three-way interactions, i.e. in the Simple Effects / Posthoc tests output for a three-way interaction. Requesting the same analyses for a one-way or two-way analysis returns the familiar t scores with degrees of freedom.

In psychology, t tests are more widely reported than z scores. With t scores, one can report df. However, jamovi only outputs z scores and p values, so it's unclear how we can report our findings.

Thank you!
mihaidricu
 
Posts: 17
Joined: Thu Feb 27, 2020 2:21 pm

by mcfanda@gmail.com » Fri Mar 12, 2021 6:00 pm

Hi, which analysis are you running?
User avatar
mcfanda@gmail.com
 
Posts: 251
Joined: Thu Mar 23, 2017 9:24 pm

by mcfanda@gmail.com » Fri Mar 12, 2021 6:05 pm

I've just checked with a 3-way interaction on GLM and Mixed, and no z-tests appear. Please send the .omv file so we can check.
thanks
mc
User avatar
mcfanda@gmail.com
 
Posts: 251
Joined: Thu Mar 23, 2017 9:24 pm

by mihaidricu » Mon Mar 15, 2021 1:59 pm

Thank you for looking into this!

I am attaching two datasets. Both have the same paradigm and both are analyzed with linear mixed models (GAMLj) but were collected at different times (and likely analyzed with different jamovi versions).

The old dataset has simple effects and posthoc tests for a two-way interaction. Both outputs show t scores and p values, and the simple-effects omnibus is based on F tests.
The new dataset has simple effects only - I get a "hitting a resource limit" error if I try to add posthoc tests in the same output. These simple effects show z scores and p values, and the simple-effects omnibus is based on chi-square tests.

In other words, the same paradigm yields F tests and t scores in one case and chi-square tests and z scores in the other. Is this due to a newer version of jamovi?

P.S. One can still generate posthoc tests in the new dataset, but only by giving up the simple effects; requesting both triggers the error message.
P.P.S. The posthoc tests output in the old dataset looks bizarre - it is missing the entire right half of the comparisons for the two-way interaction.
Attachments
Older dataset with only t scores.omv
It shows both simple effects and posthoc tests
(26.81 KiB) Downloaded 85 times
New dataset with only z scores.omv
It shows only simple effects.
(76.31 KiB) Downloaded 88 times
mihaidricu
 
Posts: 17
Joined: Thu Feb 27, 2020 2:21 pm

by mcfanda@gmail.com » Tue Mar 16, 2021 9:44 am

Hi, thanks for the reply!
1) The issue with the z-tests is not related to the new version. It's due to the data length. GAMLj uses R package `emmeans` to estimate both simple effects and posthoc tests. With the mixed model, we set the degrees of freedom options to `satterhwaite` to obtain the t-test. However, when the data are very long, emmeans switches to the 'asymptotic` method, so it yields a z-test. This is wise because with a large dataset the computation of the t-test may become very slow and resources consuming. If you had run it in R with `emmeans()`, you'd get:
Note: D.f. calculations have been disabled because the number of observations exceeds 25000.
To enable adjustments, add the argument 'lmerTest.limit = 37248' (or larger)
[or, globally, 'set emm_options(lmerTest.limit = 37248)' or larger];
but be warned that this may result in large computation time and memory use.
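
If one does want the Satterthwaite t-tests despite the data size, the note above already names the switch; a minimal R sketch (the model formula, data, and variable names are hypothetical, not taken from the attached files):

```r
library(lme4)      # mixed-model fitting
library(lmerTest)  # Satterthwaite degrees of freedom
library(emmeans)   # simple effects and post hoc contrasts

# Hypothetical model mirroring the design discussed in this thread.
fit <- lmer(rating ~ Valence * Warmth + (1 | subject), data = dat)

# Raise emmeans' observation limit so Satterthwaite df are still computed;
# as the note warns, this can be slow and memory-hungry on large data.
emm_options(lmerTest.limit = 400000)

emmeans(fit, pairwise ~ Warmth | Valence)  # now reports t with df, not z
```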

This is wise also because with so many df (in your case 370001) the z-test and the t-test are effectively identical, so there's no need to wait half an hour for a result that is known in advance. Indeed, I forced the calculation of the t-tests on your dataset and got t-tests exactly equal to the z-tests, with 370001 df. So, if you want to report the results as t-tests, just report the z-test values with 370001 df. To verify that this is correct, do the following:

Go to the `Factor coding` option panel and set Valence to `dummy`. In the `Fixed Effects Parameter Estimates` table, look at the row `Warmth1 warm-cold`. That is now the simple effect of Warmth computed at Valence = NEGATIVE, because with `dummy` coding Valence is scored 0 and 1, so the other effects are computed at Valence = 0.
You can see that the coefficient, the CI, and the rest are all identical to the `Simple effects` table, row Negative. The t-test in `Parameter Estimates` is -13.3 (rounded), matching the z-test in Simple Effects, and the df are 370001, as expected.
The same goes for the post-hoc tests.
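
To see how close the two tests are at this sample size, a quick base-R check (using the t value and df quoted in the paragraph above):

```r
# With ~370000 df, the t distribution is indistinguishable from the
# standard normal, so t- and z-based results coincide in practice.
qt(0.975, df = 370001)       # critical t, ~1.96
qnorm(0.975)                 # critical z, ~1.96

2 * pt(-13.3, df = 370001)   # t-test p-value
2 * pnorm(-13.3)             # z-test p-value: equal to any reportable precision
```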

2) As for the missing labels in the second column of the posthoc table, I cannot reproduce it, so it may have to do with the locale (different computer setups, fonts, and the like). I'll investigate and fix it - thanks for mentioning it.
It should look like this:
Post Hoc Comparisons - Valence ✻ Warmth
Comparison
Valence    Warmth        Valence    Warmth
Negative   Cold     -    Negative   Warm
Negative   Cold     -    Positive   Cold
Negative   Cold     -    Positive   Warm
Negative   Warm     -    Positive   Warm
Positive   Cold     -    Negative   Warm
Positive   Cold     -    Positive   Warm
User avatar
mcfanda@gmail.com
 
Posts: 251
Joined: Thu Mar 23, 2017 9:24 pm

by mihaidricu » Tue Mar 16, 2021 9:49 am

Thank you so much for the fast turnaround!

That makes sense, as the older dataset had 80-something participants and the new one has 200.

I will give your Factor coding suggestion a try!
mihaidricu
 
Posts: 17
Joined: Thu Feb 27, 2020 2:21 pm

by mihaidricu » Tue Mar 16, 2021 3:11 pm

mcfanda@gmail.com wrote:
This is wise also because with so many df (in your case 370001) the z-test and the t-test are effectively identical, so there's no need to wait half an hour for a result that is known in advance. Indeed, I forced the calculation of the t-tests on your dataset and got t-tests exactly equal to the z-tests, with 370001 df. So, if you want to report the results as t-tests, just report the z-test values with 370001 df. To verify that this is correct, do the following:

Go to the `Factor coding` option panel and set Valence to `dummy`. In the `Fixed Effects Parameter Estimates` table, look at the row `Warmth1 warm-cold`. That is now the simple effect of Warmth computed at Valence = NEGATIVE, because with `dummy` coding Valence is scored 0 and 1, so the other effects are computed at Valence = 0.
You can see that the coefficient, the CI, and the rest are all identical to the `Simple effects` table, row Negative. The t-test in `Parameter Estimates` is -13.3 (rounded), matching the z-test in Simple Effects, and the df are 370001, as expected.
The same goes for the post-hoc tests.


I have played around with the Factor coding option. The df, the CI, and the t values change substantially in the Fixed Parameters table (to the point where the t and z scores no longer correspond) depending on which other factors are coded as "dummy". For example, for the two-way interaction warmth * valence, it does not matter whether warmth is coded as "dummy" or "simple" as long as "competence" (the factor of no interest for this interaction) is left at "simple". Changing "competence" to "dummy" makes the t values for "Warmth1 warm-cold" no longer correspond to the z scores.

It gets trickier with a three-way interaction, e.g. warmth * competence * valence. To get the df and CI, does only the "valence" factor need to be dummy coded while the others are left at "simple"? Or do warmth and competence need to be changed to "dummy" coding as well? It makes sense to me to code all of them as "dummy", but in light of jamovi's behavior for the two-way interaction above, I am not sure.

I would generally be comfortable reporting the z scores from the Simple effects menu as t values in the manuscript - I just need to be certain of the df and CI. However, it gets tricky with three-way and four-way interactions.

Thank you!
mihaidricu
 
Posts: 17
Joined: Thu Feb 27, 2020 2:21 pm

by mihaidricu » Tue Mar 16, 2021 4:37 pm

Please disregard my previous post! One only needs to use "dummy" in Factor coding for the variables in the interaction of interest; all other factors should be left at "simple".

All is working well. Thank you for your suggestion!
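
The rule above can be sketched in plain R contrasts (variable, data, and model names are hypothetical; assuming GAMLj's `simple` coding corresponds to centred ±0.5 contrasts):

```r
library(lme4)

# Hypothetical sketch of the coding rule: dummy (treatment) contrasts on
# the factors in the interaction of interest, centred "simple" contrasts
# on everything else.
contrasts(dat$Valence)    <- contr.treatment(2)   # dummy: 0/1
contrasts(dat$Warmth)     <- contr.treatment(2)   # dummy: 0/1
contrasts(dat$Competence) <- contr.sum(2) / 2     # "simple": 0.5/-0.5

fit <- lmer(rating ~ Valence * Warmth * Competence + (1 | subject), data = dat)

# The Warmth coefficient now estimates the simple effect of Warmth at the
# reference level of Valence, averaged over (centred) Competence.
summary(fit)
```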



mihaidricu
 
Posts: 17
Joined: Thu Feb 27, 2020 2:21 pm


Return to Module development