question about tukey comparisons

Post by Steve »

Hi guys

I am new to jamovi and R. I am test-driving jamovi and have a basic question about the dfs in the Tukey post-hoc tests. In my other stats program, the Tukey HSD results report a single df. In jamovi, the Tukey results table has a different df for every comparison. Are the dfs being adjusted for each comparison, similar to the Games-Howell df adjustment? I'm hoping to figure this out because I notice quite a difference in the p values (some comparisons that are significant in my other stats program come out non-significant in jamovi with these different dfs). It would be great to know how the Tukey results are computed here so I can feel confident in them. Thanks!
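
For reference, from the little R I've tried, I believe this is the kind of single-df Tukey table my other program gives me. Just a minimal sketch with made-up data and placeholder names (group, score):

# Made-up example: one factor with three levels, 20 observations each
dat <- data.frame(group = gl(3, 20, labels = c("fall", "none", "rise")),
                  score = rnorm(60))
fit <- aov(score ~ group, data = dat)
TukeyHSD(fit)  # every pairwise comparison is tested against the single
               # residual df of the ANOVA, so only one df is reported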

Also, a suggestion: it would be great to have the ability to customize graphs. Thanks!

Re: question about tukey comparisons

Post by Ravi »

Could you attach an example where this happens so I can investigate a bit further? You can save your data plus results as a jamovi file (with the .omv extension) and then attach the file to a reply in this thread, or send it directly to me through a PM.

Customization of graphs is definitely something we want to add in the (near) future.

Re: question about tukey comparisons

Post by Steve »

Thanks, Ravi, for the quick reply. I have attached the sample data with the results. As you can see, the degrees of freedom are not the same for each Tukey comparison. I haven't seen that before. I wondered whether some adjustment to the dfs is being made, or whether there could be an error in how the Tukey results are computed. When I run the same analysis in Statistica, I get more significant Tukey comparisons than here, and they all have the same df. Hope you can help!

Thanks
Steve
Attachments
Untitled.omv
sample data
(18.87 KiB) Downloaded 571 times

Re: question about tukey comparisons

Post by Ravi »

Hi Steve,
I indeed see that the df values are a bit weird. I will ask the author of the R package what is going on here (the jamovi post-hoc tests are based on an R package that a lot of people use for doing post-hoc tests in R). I'll get back to you when he answers.

Re: question about tukey comparisons

Post by Ravi »

Just a quick update: I've emailed the package maintainer, so I hope he will get back to me soon. I also looked into the package we use a bit more, and it seems that it uses the pooled standard error and df (with the df calculated using the Welch–Satterthwaite equation: https://en.wikipedia.org/wiki/Welch%E2%80%93Satterthwaite_equation) to calculate the t and p values. The package we use for post-hoc tests is the standard way of doing post-hoc tests in R (and is recommended by a lot of people), so I'm pretty sure it's not an error, only a different way of computing the results.
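
To make that concrete, here is a small R sketch of the Welch–Satterthwaite df for a single pairwise comparison. This is just the general formula from the Wikipedia page, with two made-up groups x and y, not necessarily the exact code path the package uses:

# Welch–Satterthwaite df for two groups (illustration only)
ws_df <- function(x, y) {
  vx <- var(x) / length(x)
  vy <- var(y) / length(y)
  (vx + vy)^2 / (vx^2 / (length(x) - 1) + vy^2 / (length(y) - 1))
}
x <- rnorm(20); y <- rnorm(25, sd = 2)  # unequal variances on purpose
ws_df(x, y)
t.test(x, y)$parameter  # R's Welch t-test reports this same adjusted df

Note that this df is generally not a whole number, which may be part of why the per-comparison dfs in the jamovi table look odd.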

Could you attach a screenshot of your result in Statistica just so that I have something to compare it with?

Re: question about tukey comparisons

Post by Steve »

Thank you for the update, Ravi. I really appreciate it. I am by no means a stats expert, but if I understand correctly, using the pooled SE increases the power of the test, correct? Below I have left a Dropbox link to a PowerPoint file with all the screenshots of the different post-hoc tests from both Statistica and jamovi. It requires a little explanation, so apologies for the long comment. I really love what you guys are doing and really hope this takes off.

The different levels in Statistica are labelled by number: 1 is fall, 2 is none, 3 is rise. Reading the tables in Statistica is like reading a correlation matrix, lining up rows with columns.

Looking at the comparisons for sound: Statistica's results with both Tukey's and Bonferroni have 1 vs 2 (fall vs none) as non-significant, whereas in jamovi all comparisons are significant (in this example the degrees of freedom are the same between the two programs, at 62). Still, there is quite a large difference in that one comparison between the two.

The ANOVA revealed a significant main effect of distance, so I show that as well. The images of the Bonferroni and Tukey tests in Statistica show no significant effects among the different levels of distance, whereas the jamovi results show one significant result, between 5 vs 10 cm (which would be cells 1 vs 3 in the Statistica table). Here, too, the degrees of freedom are the same at 62. But there is something else strange (or just a coincidence) here: the Bonferroni table from Statistica has a p value of .3678 for the 1 vs 3 comparison (5 vs 10 cm), which is pretty much the exact p value of the Bonferroni result in jamovi, but for 5 vs 7 cm. Similarly, with the Tukey test in Statistica, the p value for 1 vs 3 (5 vs 10 cm) is .268, which is pretty much the same as the jamovi Tukey p value for the 5 vs 7 cm comparison (I hope this is not getting too confusing). I have seen other similar p values, but for different comparisons, between the two stats programs, and wondered if there might be an error in how the results table is populated.

For the Direction * Sound comparisons, I only show the Tukey results from Statistica. Direction 1 is down, 2 is up; sound 1 is fall, sound 2 is none, and sound 3 is rise. You can see that Statistica reports only one df, and it is 62. The jamovi table shows different degrees of freedom for the different comparisons. Again, I am no stats expert, but it looks weird that they are not all unique if each comparison gets its own adjusted df; I can't work out a discernible pattern. Despite a lot of agreement, with lots of significant comparisons, there are still some big discrepancies. For instance, comparing up-fall with up-none: in Statistica the p value approaches significance at 0.078 (in the table, that is row 4 vs column 5). Depending on one's philosophy of statistical reporting, one could call that a marginal effect and might even tell students to keep collecting a few more subjects. The same comparison in jamovi has a p value of 0.2: not marginal at all, and quite different.

Thanks for all your help and work on this!

https://www.dropbox.com/s/8azl5q3yalvbm ... .pptx?dl=0

Re: question about tukey comparisons

Post by Ravi »

Hi Steve,

Sorry for not getting back to you sooner. I wanted to wait until the package maintainer answered my email so I could address your whole question. As that hasn't happened yet, I'll answer your first question: yes, using a pooled variance usually increases power (compared to doing a set of individual t-tests). I think the R package uses a technique similar to Fisher's LSD to calculate the pooled variances and degrees of freedom.
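
As a rough illustration (made-up data again): an LSD-style comparison uses the error variance pooled over all groups in the model, so each pairwise t statistic is tested against the full residual df rather than just the df of the two groups being compared:

set.seed(1)
dat <- data.frame(g = gl(3, 10), y = rnorm(30))  # 3 groups of 10
fit <- aov(y ~ g, data = dat)
mse <- deviance(fit) / df.residual(fit)  # pooled error variance (MSE)
m <- tapply(dat$y, dat$g, mean)
t12 <- (m[1] - m[2]) / sqrt(mse * (1/10 + 1/10))  # groups 1 vs 2
2 * pt(-abs(t12), df = df.residual(fit))  # p value uses df = 27, not 18

The larger df and the more stable variance estimate are where the extra power comes from.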

I'll give a more precise answer (hopefully) once I get the reply.

Cheers,
Ravi

Re: question about tukey comparisons

Post by Steve »

Thank you, Ravi. I look forward to your update when you hear back from the maintainer. Very much appreciated!

Best
Steve