error-term for pairwise comparisons

Discuss the jamovi platform, possible improvements, etc.

by J_Toby » Thu Jan 21, 2021 4:15 pm

Please forgive me if this has been discussed before; I did search and could not find it.

When conducting a set of pairwise comparisons (e.g., after finding a significant main effect), one has the choice of using an error-term that is based on all of the data (which will, therefore, be the same for all of the pairs) or one that is based on only the data that are being compared (which will [almost always] differ between pairs). When one uses a common error-term for all pairs, Q-based methods of correcting for multiple comparisons become available, such as Tukey's HSD. When one uses unique error-terms, one can only [easily] use p-value correction, such as Bonferroni.
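For readers following along, here is a minimal sketch of that distinction in Python, using made-up numbers: the pooled (common) error term comes from the full RM-ANOVA residual and is identical for every pair, whereas each pair's own difference scores give a unique, and generally different, standard error.

```python
import math
import statistics as st

# Made-up scores: 5 subjects x 3 within-subject conditions.
data = {
    "A": [10.0, 12.0, 11.0, 13.0, 9.0],
    "B": [12.0, 14.0, 12.5, 15.0, 11.0],
    "C": [15.0, 16.0, 17.0, 18.0, 14.0],
}
conds = list(data)
n, k = 5, len(conds)

def paired_se(x, y):
    # Unique error term: SE of the difference scores for this pair only.
    d = [a - b for a, b in zip(x, y)]
    return st.stdev(d) / math.sqrt(len(d))

# Pooled error term: one MS_error from the RM-ANOVA residual,
# hence the same SE for every pair.
grand = st.mean(v for vals in data.values() for v in vals)
subj_mean = [st.mean(data[c][i] for c in conds) for i in range(n)]
cond_mean = {c: st.mean(data[c]) for c in conds}
ss_resid = sum(
    (data[c][i] - subj_mean[i] - cond_mean[c] + grand) ** 2
    for c in conds for i in range(n)
)
pooled_se = math.sqrt(2 * (ss_resid / ((n - 1) * (k - 1))) / n)

for a, b in [("A", "B"), ("A", "C"), ("B", "C")]:
    print(a, b, "unique SE:", round(paired_se(data[a], data[b]), 3),
          "pooled SE:", round(pooled_se, 3))
```

With these particular numbers the three unique SEs all differ while the pooled SE is one constant, which is exactly why q-based corrections apply in the pooled case only.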

jamovi uses the common error-term approach for both between- and within-subject effects. SPSS, in contrast, uses a common error-term for between-subject factors and unique error-terms for within-subject factors.

Without getting into a debate on which approach is better for within-subject designs, is there any plan to add the option of using unique error-terms for pairwise comparisons involving a repeated measure? Relatedly, is there any plan to add Dunn-Sidak correction as an option?
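For reference, the Dunn-Šidák adjustment being asked about is just 1 − (1 − α)^(1/m), which is exact under independence and slightly less conservative than Bonferroni's α/m. A two-line illustration (the α and m values here are arbitrary):

```python
# Familywise alpha and number of comparisons are arbitrary examples.
alpha, m = 0.05, 3

bonferroni = alpha / m              # simple division
sidak = 1 - (1 - alpha) ** (1 / m)  # Dunn-Sidak: exact under independence

print(round(bonferroni, 6), round(sidak, 6))
```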

Thanks.
J_Toby
 
Posts: 5
Joined: Thu Jan 21, 2021 4:02 pm

by mcfanda@gmail.com » Fri Jan 22, 2021 9:11 am

You can run your repeated measures ANOVA as a mixed model in GAMLj, so you get pairwise comparisons with a common error-term.
mcfanda@gmail.com
 
Posts: 251
Joined: Thu Mar 23, 2017 9:24 pm

by J_Toby » Fri Jan 22, 2021 5:16 pm

Thanks, but I'm asking for pairwise comparisons that use unique error-terms.
J_Toby
 

by jonathon » Sat Jan 23, 2021 9:10 am

hi,

we do get the odd request for post-hocs for RM ANOVA that basically just treat each of the pairs as a paired t-test. but i gather you still want the BS factors involved?

cheers

jonathon
 
Posts: 1719
Joined: Fri Jan 27, 2017 10:04 am

by J_Toby » Sat Jan 23, 2021 5:41 pm

Hi. Yes to the idea that they are all paired t-tests, preferably with Dunn-Sidak correction at least as an option, but No to the idea that any between-subject factors would be involved. Between-subject factors are ignored in this situation and the data are collapsed across (i.e., averaged across) any other within-subject factor(s). That's been standard in many areas of psychology for quite some time and it would be nice if it were an option. Whether all this was a conscious decision or simply due to this being what SPSS does by default is an interesting discussion, but maybe saved for another day.
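A sketch of that procedure in Python, with made-up data and hypothetical names: average over the other within-subjects factor, ignore any between-subjects grouping, then run one paired t per pair of levels, each with its own error term, and apply a Dunn-Šidák per-test alpha.

```python
import math
import statistics as st
from itertools import combinations

# Made-up design: each subject has scores for WS factor 'cond' (a/b/c)
# at 2 levels of another WS factor; any BS grouping is simply ignored.
raw = {
    1: {"a": [10, 11], "b": [13, 14], "c": [12, 12]},
    2: {"a": [12, 13], "b": [14, 16], "c": [13, 15]},
    3: {"a": [9, 10],  "b": [12, 12], "c": [11, 13]},
    4: {"a": [11, 12], "b": [15, 15], "c": [12, 14]},
}

# Step 1: collapse (average) across the other within-subjects factor.
cell = {s: {c: st.mean(v) for c, v in conds.items()}
        for s, conds in raw.items()}

def paired_t(c1, c2):
    # Step 2: each pair uses only its own difference scores
    # (a unique error term).
    d = [cell[s][c1] - cell[s][c2] for s in cell]
    return st.mean(d) / (st.stdev(d) / math.sqrt(len(d)))

# Step 3: Dunn-Sidak per-test alpha for the family of pairs.
pairs = list(combinations("abc", 2))
alpha_per = 1 - (1 - 0.05) ** (1 / len(pairs))

for c1, c2 in pairs:
    print(c1, c2, round(paired_t(c1, c2), 3))
```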

Note that when one uses unique error-terms for the pairwise comparisons, one should probably use Cousineau-Morey error-bar lengths instead of Loftus-Masson, but now I am probably asking too much. No stats package (that I've seen) offers the former as an option.
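For what it's worth, the Cousineau-Morey computation itself is short. A sketch with made-up scores (it follows the usual two steps, Cousineau's subject-centering normalization followed by Morey's bias correction, but treat the details as my assumption, not any package's implementation):

```python
import math
import statistics as st

# Made-up scores: 4 subjects x 3 within-subject conditions.
scores = {
    "c1": [10.5, 12.5, 9.5, 11.5],
    "c2": [13.5, 15.0, 12.0, 15.0],
    "c3": [12.0, 14.0, 12.0, 13.0],
}
k = len(scores)            # number of conditions
n = len(scores["c1"])      # number of subjects

subj_mean = [st.mean(scores[c][i] for c in scores) for i in range(n)]
grand = st.mean(subj_mean)

# Cousineau normalization: remove between-subject variation but keep
# each condition mean unchanged.
norm = {c: [scores[c][i] - subj_mean[i] + grand for i in range(n)]
        for c in scores}

# Morey bias correction for the normalized variances.
morey = math.sqrt(k / (k - 1))
se = {c: morey * st.stdev(v) / math.sqrt(n) for c, v in norm.items()}

for c in scores:
    print(c, "mean:", round(st.mean(scores[c]), 3), "CM SE:", round(se[c], 3))
```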

The real fun is when one needs to follow up on a mixed-factor interaction and chooses to examine the simple main effects of the within-subjects factor at each level of the between-subjects factor. This isn't easy to do correctly using any point-and-click stats package (that I know of), but that, too, can wait for another day.
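A rough sketch of that follow-up, again with made-up data and hypothetical names: split by the between-subjects group and test the within-subjects factor separately within each group, so each simple main effect gets its own group-specific error term.

```python
import math
import statistics as st

# Made-up mixed design: BS factor "group" (g1/g2), WS factor with two
# levels (pre/post), four subjects per group.
data = {
    "g1": {"pre": [10, 12, 11, 13], "post": [14, 15, 15, 16]},
    "g2": {"pre": [11, 10, 12, 11], "post": [11, 11, 13, 10]},
}

def simple_effect_t(group):
    # Paired t for the WS factor within one BS group only, i.e. a
    # simple main effect with a group-specific error term.
    pre, post = data[group]["pre"], data[group]["post"]
    d = [b - a for a, b in zip(pre, post)]
    return st.mean(d) / (st.stdev(d) / math.sqrt(len(d)))

for g in data:
    print(g, round(simple_effect_t(g), 3))
```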
J_Toby
 

by J_Toby » Sat Jan 23, 2021 6:29 pm

Just to be clear about (the justification for) ignoring any between-subject factors when doing pairwise comparisons for a within-subjects main effect.... One would only conduct these pairwise tests when there are no interactions involving the within-subjects factor being examined, including no interaction with any between-subjects factor.
J_Toby
 

by jonathon » Sat Jan 23, 2021 10:14 pm

oh yup.

you've (wisely :P) avoided any discussion of whether this is a good approach or not. our intention is to provide this [insert pejorative here] style of testing as a separate module. the independent t-tests part is written (for a BS ANOVA) but not the paired t-tests part (for a RM ANOVA).

https://github.com/raviselker/manytee

if you've got some r skills, it should be a fairly easy project to contribute to. otherwise you're sort of at the mercy of ravi's whims. i've been encouraging this module's development. it's easy to make a case for it, but it's hard to make a *strong* case for it -- and where time is short, things without strong cases often fall by the wayside. but i remain an advocate for this approach.

cheers

jonathon

by J_Toby » Sun Jan 24, 2021 1:52 am

I would love to hear a justification for using unique error-terms for a BS factor while still using pooled ones for a WS factor, but even appearing to complain about free software is beneath (even) me. I might start to work on the R code. We'll see.

In any event, thanks for the replies.

ps. the best (IMO) short justification for using pooled error-terms for BS factors and unique error-terms for WS factors is this: we want the most specific error-term that includes all of the subjects.
J_Toby
 

by jonathon » Sun Jan 24, 2021 2:04 am

J_Toby wrote: "I would love to hear a justification for using unique error-terms for a BS factor while still using pooled for a WS factor"


oh sorry, i think we've misunderstood one another.

but let me know if you need any assistance hacking on manytee. there's the resources at https://dev.jamovi.org for jamovi modules, and drop us a line if you'd like to join our slack group -- that's often a better medium for asking development questions.

cheers

jonathon

