5 columns - are they different?


by stats2019 » Wed May 27, 2020 4:02 pm

Hi,

please do not laugh :') . I feel like I have a board in front of my head, as we say in German (meaning a mental block) :thinking: .

I have a scale that is used 5 times per participant to evaluate something. I am not sure which is the correct test to see whether the participants' evaluations of the five topics differ.
Which would you suggest?

I measured the scales at an ordinal level and computed the mean of each of the five scales. A factor analysis showed that the items represent a single factor. The mean column is at a "continuous", "decimal" measurement level.

Thank you in advance.
Olaf
stats2019
 
Posts: 55
Joined: Wed Jan 23, 2019 8:02 am

by jonathon » Thu May 28, 2020 1:09 am

what about a friedman test (non-parametric RM ANOVA)? you can drop the five items into the one analysis.
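
just to illustrate what that looks like (made-up ratings here, and scipy's friedmanchisquare used as one possible implementation -- jamovi will give you the same kind of output):

```python
from scipy.stats import friedmanchisquare

# made-up ratings: 6 participants each rate 5 topics on a 1-7 scale
topic_a = [2, 3, 1, 4, 2, 3]
topic_b = [4, 5, 3, 5, 4, 4]
topic_c = [3, 3, 2, 4, 3, 3]
topic_d = [5, 6, 4, 6, 5, 5]
topic_e = [1, 2, 1, 3, 2, 2]

# one call, five columns -- no aggregation needed beforehand
stat, p = friedmanchisquare(topic_a, topic_b, topic_c, topic_d, topic_e)
print(f"chi-squared = {stat:.2f}, p = {p:.4f}")
```

a small p here would mean the topics are not all rated the same.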

jonathon
jonathon
 
Posts: 1430
Joined: Fri Jan 27, 2017 10:04 am

by stats2019 » Thu May 28, 2020 5:51 am

Hi Jonathon,

thank you for your input. Friedman because my aggregated values (means of items) would still only be ordinal? Couldn't one assume that, on a 7-point Likert scale, items might be treated as metric/scale, so that an RM ANOVA would indeed be possible?
Or are you taking (and recommending) the safe path and leaving Likert data as ordinal in such cases?

Thank you for your time
Olaf

by jonathon » Thu May 28, 2020 6:07 am

with the friedman, you don't need to aggregate the items, just provide it with the 5 columns.

the problem with a parametric approach (the 'normal' RM ANOVA) is that ... let's say you have:

1 = strongly agree
2 = agree
3 = somewhat agree
4 = neither agree nor disagree
5 = somewhat disagree
6 = disagree
7 = strongly disagree

you're making strong and very specific assumptions about the magnitudes of people's responses. i.e. you're saying that the difference between 'strongly agree' and 'agree' is the same as the difference between 'agree' and 'somewhat agree', which is in turn the same as the difference between 'somewhat agree' and 'neither agree nor disagree'.

i would suggest that the 'difference' between these values will vary from question to question, and that this is generally problematic with likert items.

non-parametrics deal with all of this, because they only assume the order. 'strongly agree' might be only .0001 more than 'agree', and 'agree' might be a hundred times more than 'somewhat agree' ... non-parametrics don't care, they only need the values of the levels to be monotonically increasing/decreasing.
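
here's a little demonstration of that (made-up numbers): recode the levels with any strictly increasing function, and the friedman statistic doesn't budge, because the within-person ranks don't change:

```python
from scipy.stats import friedmanchisquare

# made-up ratings: 4 participants x 3 conditions
a = [1, 2, 1, 3]
b = [4, 5, 3, 6]
c = [2, 3, 2, 4]

stat_raw, _ = friedmanchisquare(a, b, c)

# stretch the scale non-linearly: level 3 is now 1000x level 1,
# but every within-person ordering is preserved
stretch = lambda x: 10 ** x
stat_stretched, _ = friedmanchisquare(
    [stretch(v) for v in a],
    [stretch(v) for v in b],
    [stretch(v) for v in c],
)

print(stat_raw == stat_stretched)  # identical statistic either way
```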

makes sense?

jonathon

by stats2019 » Thu May 28, 2020 6:27 am

Hi Jonathon,

yes, makes sense.
The five "treatments" I am comparing are actually means of the scale. Scale XY has 8 items (Likert, 7-point) and is used to evaluate settings A, B, C, D and E. I computed MEAN(item_1_a,item_2_a,item_3_a,... item_8_a), and did the same for B ... E. So I get five scores, one for each setting, and I am using the test to see whether the five are different.
It seems Friedman is indeed working great.
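
In code form, the aggregation step might look like this (hypothetical numbers, numpy used just for illustration):

```python
import numpy as np

# hypothetical: 3 participants x 8 items for setting A (7-point Likert)
items_a = np.array([
    [4, 5, 4, 3, 5, 4, 4, 5],
    [2, 3, 2, 2, 3, 3, 2, 2],
    [6, 6, 5, 7, 6, 6, 5, 6],
])

# MEAN over the 8 items: one score per participant for setting A
score_a = items_a.mean(axis=1)
print(score_a)
```

Repeating this for B ... E gives the five score columns that go into the test.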

Olaf

by jonathon » Thu May 28, 2020 6:29 am

oic, so these are 5 means of 8 variables each ... yup, then an RM ANOVA might be appropriate ...

jonathon

by stats2019 » Thu May 28, 2020 6:53 am

Hi Jonathon,

yep, looks good now. I was confused because I did not find how to see the differences between a-b, a-c ... Now I've got that.
Olaf

