Hi, @tutkuoztel.

Your output probably corresponds to an SPSS syntax like this:

```
CROSSTABS
  /TABLES = Binarized_Accuracy BY Experimental_Condition
  /FORMAT = AVALUE TABLES
  /STATISTICS = CHISQ
  /CELLS = COUNT EXPECTED ROW COLUMN ASRESID BPROP
  /COUNT ROUND CELL.
```

With the PROP option, pairwise comparisons of column proportions are calculated, indicating which pairs of columns (for a given row) differ significantly.

Significant differences are marked in the contingency table with APA-style subscript letters and are tested at significance level .05.

With BPROP there is also the possibility to adjust the p-values (Bonferroni method): the pairwise comparisons of column proportions use the Bonferroni correction, which adjusts the observed significance level for the fact that multiple comparisons are made.
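Under the hood these column comparisons are pairwise two-proportion z-tests with a Bonferroni-corrected alpha. As a rough sketch in Python (the counts below are invented for illustration only, and SPSS's exact pooling details may differ slightly):

```python
from itertools import combinations
from math import sqrt
from statistics import NormalDist

# hypothetical counts of Binarized_Accuracy = 1 per Experimental_Condition column
successes = {0: 10, 1: 25, 2: 17}
totals    = {0: 30, 1: 35, 2: 35}

pairs = list(combinations(successes, 2))
alpha_adj = 0.05 / len(pairs)        # Bonferroni over the 3 pairwise tests

results = {}
for a, b in pairs:
    p1, p2 = successes[a] / totals[a], successes[b] / totals[b]
    pooled = (successes[a] + successes[b]) / (totals[a] + totals[b])
    se = sqrt(pooled * (1 - pooled) * (1 / totals[a] + 1 / totals[b]))
    z = (p1 - p2) / se
    p_val = 2 * (1 - NormalDist().cdf(abs(z)))   # two-tailed p-value
    results[(a, b)] = (z, p_val)
    verdict = "different" if p_val < alpha_adj else "same subset"
    print(f"EC {a} vs EC {b}: z = {z:+.2f}, p = {p_val:.4f} -> {verdict}")
```

Columns that end up in the "same subset" share a subscript letter in the SPSS table; a column that differs from neither neighbour (like your "a,b" case) shares both letters.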

As a footnote to the contingency table, this should be reported:

"Each subscript letter denotes a subset of Experimental Condition categories whose column proportions do not differ significantly from each other at level .05".

tutkuoztel wrote: what does this "a,b" tell me? can it be that that one column is not statistically significant from either of the other two columns? Can it be even possible since the other two columns are statistically different from one another (as depicted "a" and "b")?

Your interpretation is correct: it could be that that column (EC = 0) does not differ significantly from either of the other two columns. And yes, this is possible even though the other two columns differ significantly from each other (as the subscripts "a" and "b" indicate).

So the chi-square test tells us that there is an association between the two nominal variables (or that one can influence the other), but not what this association actually is.

There are a few different ways to perform a so-called post-hoc analysis. A nice overview of some of them can be found in an article by Sharpe (2015):

https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1269&context=pare

Many authors suggest examining the so-called adjusted residuals (also called adjusted standardized residuals, related to the Pearson residuals). These standardized residuals can then be used to check whether the observed and expected counts in a cell may actually differ in the population.
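To make the formula concrete: the adjusted residual of a cell is (observed - expected) / sqrt(expected * (1 - row total / n) * (1 - column total / n)). A small self-contained sketch, using an invented 2 x 3 table rather than your real counts:

```python
from math import sqrt

def adjusted_residuals(observed):
    """Adjusted standardized residuals for an R x C contingency table."""
    row_tot = [sum(r) for r in observed]
    col_tot = [sum(c) for c in zip(*observed)]
    n = sum(row_tot)
    res = []
    for i, row in enumerate(observed):
        out = []
        for j, o in enumerate(row):
            e = row_tot[i] * col_tot[j] / n              # expected count
            # (O - E) scaled by its estimated standard error
            out.append((o - e) / sqrt(e * (1 - row_tot[i] / n)
                                        * (1 - col_tot[j] / n)))
        res.append(out)
    return res

# invented Binarized_Accuracy (rows) by Experimental_Condition (columns) table
table = [[20, 10, 18],
         [10, 25, 17]]
for row in adjusted_residuals(table):
    print([round(r, 2) for r in row])
```

With only two rows, each column's two residuals are mirror images of each other; any cell whose absolute value exceeds the (corrected) critical z-value is a candidate driver of the association.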

In your contingency table, the adjusted residuals appear to be higher for level (1) of Experimental Condition. At a 95% confidence level, a value greater than 1.96 or less than -1.96 could be considered significantly different. But since we are doing this for each cell, we run a high risk of making a wrong decision (an inflated Type I error). To adjust for this multiple testing, we divide the original significance level of 0.05 by the number of tests we perform (in this case the number of cells, R x C). We should therefore use a significance level of 0.05 / 6 = 0.00833, which corresponds to a critical value greater than 2.63826 (or less than -2.63826).
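That arithmetic is easy to double-check, e.g. in a few lines of Python (pure standard library, just redoing the numbers from the paragraph above):

```python
from statistics import NormalDist

alpha = 0.05
cells = 2 * 3                        # R x C cells being tested
alpha_adj = alpha / cells            # Bonferroni-corrected level, ~0.00833
# two-tailed critical z-value at the corrected level
z_crit = NormalDist().inv_cdf(1 - alpha_adj / 2)
print(round(alpha_adj, 5), round(z_crit, 4))   # ~0.00833  ~2.638
```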

This is known as the Bonferroni correction. To determine this critical value and the adjusted p-value for the z-value = 2.52435 for Experimental Condition level (1), take a look at this screenshot of some formulas in Excel.
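The same two steps the Excel screenshot performs can also be done in Python: convert the z-value to a two-tailed p-value, then multiply by the number of tests (capping at 1):

```python
from statistics import NormalDist

z = 2.52435                          # adjusted residual from the output
tests = 6                            # number of cells (2 rows x 3 columns)

p = 2 * (1 - NormalDist().cdf(z))    # two-tailed p-value, ~0.0116
p_adj = min(1.0, p * tests)          # Bonferroni-adjusted p-value, ~0.0696
print(round(p, 4), round(p_adj, 4))
```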

- p-value adjusted.PNG

Cheers,

Maurizio