Debate: What Are Acceptable Model Fit Standards?

by HUF04 Nguyễn Huỳnh Phương Thảo -

In evaluating my Confirmatory Factor Analysis (CFA) model, I focused on CFI (Comparative Fit Index) and RMSEA (Root Mean Square Error of Approximation) as the two main fit indices.

For the CFI, I used a threshold of ≥ .95 to indicate good model fit. This cutoff is widely recommended in the literature, especially by Hu and Bentler (1999), who suggested that values above .95 represent a well-fitting model.

For the RMSEA, I followed the commonly accepted standards:

  • < .05 = close fit
  • < .06 = good fit (stricter criterion)
  • < .08 = acceptable fit

I personally used RMSEA < .08 as acceptable and < .06 as ideal, because many scholars acknowledge that strict thresholds may vary depending on sample size and model complexity.
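For anyone who wants to check these indices by hand, both can be computed from the model and baseline chi-square statistics using their standard formulas. This is just an illustrative sketch; the numbers passed in at the bottom are made up, not from my actual model.

```python
from math import sqrt

def rmsea(chisq, df, n):
    """RMSEA = sqrt(max(chisq - df, 0) / (df * (n - 1))) for sample size n."""
    return sqrt(max(chisq - df, 0.0) / (df * (n - 1)))

def cfi(chisq_m, df_m, chisq_b, df_b):
    """CFI = 1 - max(chisq_m - df_m, 0) / max(chisq_b - df_b, chisq_m - df_m, 0),
    where _m is the fitted model and _b is the baseline (independence) model."""
    num = max(chisq_m - df_m, 0.0)
    den = max(chisq_b - df_b, num)
    return 1.0 - num / den

# Illustrative inputs only:
print(round(rmsea(chisq=120.0, df=60, n=300), 3))
print(round(cfi(chisq_m=120.0, df_m=60, chisq_b=900.0, df_b=78), 3))
```

In practice SEM software (e.g. lavaan or AMOS) reports these directly, but seeing the formulas makes it clear why RMSEA rewards parsimony (df appears in the denominator) and why CFI depends on how badly the baseline model fits.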

In my model, the results were:

  • CFI = .94
  • RMSEA = .067

Based on these values, the model almost met the ideal standard. The CFI is slightly below .95, but still very close, while the RMSEA falls within the acceptable range below .08.

Therefore, I would consider this model acceptable rather than perfect.

I believe a model that “almost” fits can still be accepted, especially when the theoretical framework is strong and the factor loadings are meaningful. However, I would also review modification indices and cross-loadings before making final conclusions.

I respectfully think that relying only on rigid cutoffs may be problematic, because recent discussions suggest that fit indices should be interpreted in context rather than treated as absolute rules.

Debate: What Are Acceptable Model Fit Standards?

by HUF04 Nguyễn Đăng Hải -

I think your reasoning is very balanced, especially how you combine statistical thresholds with theoretical considerations. I agree with you that relying too strictly on cutoffs like CFI ≥ .95 can sometimes be unrealistic, particularly with more complex models or smaller samples.

That said, I’m a bit curious about how far “acceptable” stretches for you in practice. For example, your CFI = .94 is very close to .95, so it feels reasonable to accept—but would you still accept the model if it dropped to, say, .92 or .90? Where would you personally draw the line?

Also, I like that you mentioned checking modification indices, but do you think there’s a risk of overfitting the model if we rely too much on them? Maybe it would be helpful to also report other indices like SRMR or χ²/df to give a more complete picture of model fit.
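One way to keep that "complete picture" transparent is to evaluate all the indices against their stated cutoffs at once, so no single value drives the decision. Below is a hypothetical helper (not from any SEM package); the thresholds encode the conventional cutoffs discussed above (CFI ≥ .95, RMSEA < .08, SRMR < .08, χ²/df < 3), and the example values mirror the ones in this thread:

```python
# Conventional cutoffs; each maps an index name to a pass/fail rule.
THRESHOLDS = {
    "cfi":      lambda v: v >= 0.95,
    "rmsea":    lambda v: v < 0.08,
    "srmr":     lambda v: v < 0.08,
    "chisq_df": lambda v: v < 3.0,
}

def fit_report(indices):
    """Return {index: (value, passes_cutoff)} for each recognized index."""
    return {k: (v, THRESHOLDS[k](v)) for k, v in indices.items() if k in THRESHOLDS}

report = fit_report({"cfi": 0.94, "rmsea": 0.067, "srmr": 0.05, "chisq_df": 2.0})
for name, (value, ok) in report.items():
    print(f"{name}: {value} -> {'pass' if ok else 'below cutoff'}")
```

Reporting the full table like this makes the "how much deviation, and why" question explicit: a reader can see that CFI narrowly misses its cutoff while the other indices pass.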

Overall, I agree with your point that model evaluation should be flexible, but I think it’s also important to be transparent about how much deviation from the “ideal” we are willing to accept and why.