If I had to prioritize one, I would argue that discriminant validity is more critical than convergent validity, especially in the later stages of model validation.
Convergent validity ensures that items intended to measure the same construct are indeed correlated (e.g., high factor loadings, AVE ≥ .50). For example, in my study on students’ perceptions of AI in EFL writing, items measuring Perceived Usefulness (e.g., improving writing quality, saving time) showed strong loadings on the same factor, indicating good convergent validity. However, achieving convergent validity is often more straightforward, as items are typically designed to be similar in wording and meaning.
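The AVE ≥ .50 rule of thumb mentioned above is easy to verify by hand: AVE is simply the mean of the squared standardized loadings. A minimal sketch (the loadings below are hypothetical placeholders, not values from the study):

```python
def ave(loadings):
    """Average Variance Extracted: mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for four Perceived Usefulness items
pu_loadings = [0.78, 0.82, 0.74, 0.80]
print(round(ave(pu_loadings), 3))  # 0.617 — above the .50 threshold
```

Because each loading exceeds roughly .71 (i.e., .71² ≈ .50), the construct extracts more variance from its items than is lost to error, which is exactly what convergent validity requires.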
In contrast, discriminant validity tests whether constructs are truly distinct from one another, which is often more challenging and theoretically important. For instance, in my model, Perceived Usefulness and Writing Confidence showed relatively high correlations. Without sufficient discriminant validity (e.g., failing the Fornell–Larcker criterion or HTMT threshold), it becomes unclear whether these are genuinely separate constructs or simply different expressions of the same underlying concept.
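The Fornell–Larcker criterion can be stated compactly: the square root of each construct's AVE must exceed that construct's correlation with any other construct. A sketch with hypothetical AVEs and a latent correlation chosen to illustrate a failing case:

```python
import math

def fornell_larcker_ok(ave_a, ave_b, corr_ab):
    """Fornell–Larcker criterion for a pair of constructs:
    sqrt(AVE) of each construct must exceed their inter-construct correlation."""
    return math.sqrt(ave_a) > abs(corr_ab) and math.sqrt(ave_b) > abs(corr_ab)

# Hypothetical: AVEs of .62 and .58, latent correlation of .85
print(fornell_larcker_ok(0.62, 0.58, 0.85))  # False — discriminant validity in doubt
```

Here sqrt(.58) ≈ .76 is below the .85 correlation, so the criterion fails: the constructs share more variance with each other than with their own indicators. (HTMT works from the item correlation matrix instead, flagging ratios above roughly .85–.90.)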
Theoretically, if discriminant validity is weak, the entire structure of the model is compromised. Constructs may overlap conceptually, leading to multicollinearity issues, unstable parameter estimates, and ambiguous interpretations. Practically, this means that any conclusions drawn (e.g., “AI usefulness predicts writing confidence”) may be misleading, because the two constructs are not sufficiently distinct.
A concrete example from the literature arises when attitude and satisfaction are modeled as separate constructs yet correlate extremely highly (e.g., r > .85). In such cases, researchers often need to merge the constructs or redefine them, because the lack of discriminant validity undermines the model's theoretical clarity.
That said, convergent validity cannot be ignored. If items do not converge, the construct itself is poorly measured. However, a model with acceptable convergent validity but poor discriminant validity is more problematic, because it risks misrepresenting the relationships between constructs.
In conclusion, while both are essential, discriminant validity plays a more decisive role in ensuring the integrity and interpretability of SEM models, particularly when testing complex theoretical frameworks.
