While teaching and writing exercises with the exploratory factor analysis and PCA functions in base jamovi, I have noticed that the parallel analysis results can vary from run to run when one of the eigenvalues sits right on the line - that is, reloading the exact same analysis sometimes produces a different simulation threshold, and with it a different number of factors.
To make the results more stable, then, could we have one or both of these features?
* Displaying and entering a seed for the simulation
* Option to increase the number of iterations in the simulation
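For what it's worth, the instability is easy to reproduce outside jamovi. Below is a minimal Python sketch of Horn-style parallel analysis (my own illustration, not jamovi's actual code): observed eigenvalues are compared against a quantile of eigenvalues from random data of the same shape, so a borderline eigenvalue can cross the threshold on one run and not the next. A seed argument and an adjustable iteration count, as requested above, make the decision reproducible.

```python
import numpy as np

def parallel_analysis(data, n_sims=100, quantile=0.95, seed=None):
    """Horn-style parallel analysis sketch: retain components whose
    observed eigenvalues exceed the chosen quantile of eigenvalues
    from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # observed eigenvalues of the correlation matrix, largest first
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        rand = rng.standard_normal((n, p))
        sims[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    threshold = np.quantile(sims, quantile, axis=0)
    return int(np.sum(obs > threshold))

# with a fixed seed the retained count is identical across reruns
data = np.random.default_rng(1).standard_normal((200, 6))
assert parallel_analysis(data, seed=42) == parallel_analysis(data, seed=42)
```

Without the `seed` argument, repeated calls on borderline data can flip between, say, 2 and 3 retained factors - exactly the behaviour described above.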
Many thanks,
Prof Roger Giner-Sorolla
University of Kent
rsg@kent.ac.uk
Factor analysis core functions: more options for stochastic parallel analysis?
Re: Factor analysis core functions: more options for stochastic parallel analysis?
we're planning on adding a "determinism mode" to jamovi for teaching purposes.
but yeah, in general statistical software has handled it pretty poorly. it's a positive that you've been able to discover that the results aren't particularly stable, whereas adding a random seed would likely conceal that.
in an ideal world, any procedure based on sampling or simulation would have a means to assess just how stable its results are, and either surface that information to the user, or automatically continue sampling until a certain criterion of stability is reached.
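to make that concrete, here's a rough sketch (mine, not anything jamovi implements) of the "keep sampling until stable" idea: draw simulated values in batches and stop once the standard error of the running estimate drops below a tolerance, instead of fixing the iteration count up front.

```python
import numpy as np

def sample_until_stable(draw, tol=0.01, batch=200, max_draws=10_000, seed=0):
    """keep drawing simulated values until the standard error of the
    running mean falls below `tol`, instead of fixing the iteration
    count up front. returns (mean, standard error, draws used)."""
    rng = np.random.default_rng(seed)
    values = []
    while len(values) < max_draws:
        values.extend(draw(rng) for _ in range(batch))
        se = np.std(values, ddof=1) / np.sqrt(len(values))
        if se < tol:
            break
    return float(np.mean(values)), float(se), len(values)

# example: the largest eigenvalue of a correlation matrix of pure
# noise (100 cases x 5 variables) - the kind of quantity a
# parallel-analysis threshold is built from
def largest_eig(rng):
    rand = rng.standard_normal((100, 5))
    return np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[-1]

mean, se, n = sample_until_stable(largest_eig)
```

a real version would track the retained-factor decision itself rather than a mean, which is where it stops being trivial, but the stopping logic has this general shape.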
this wouldn't always be a trivial thing to achieve, so you can understand why authors of R packages, etc. have just punted the problem to the user and said "if your results aren't stable, increase the number of iterations".
all this to say it's a bit of a crappy situation.
jonathon