How do you know if a test actually measures what it claims to? This is where discriminant validity comes into the picture. It checks whether a test truly measures a distinct concept rather than correlating too strongly with something it's not supposed to measure.
Discriminant validity is a subtype of construct validity that ensures two conceptually different constructs are not too closely related in measurement. If two distinct tests or variables show high correlations, it indicates a validity issue.
If a test lacks discriminant validity, it might mistakenly evaluate skills that have nothing to do with the role. Keep reading to understand how discriminant validity works, why it matters in hiring, and how to measure it correctly.
What is discriminant validity?
Discriminant validity confirms that any test or assessment is measuring exactly what it’s supposed to, without overlapping with unrelated traits. Simply put, if two different concepts are tested, their results shouldn’t be too similar.
Let’s say a company creates a test to evaluate leadership skills. If this test correlates strongly with a separate test designed for communication skills, that’s a red flag.
While both traits are important, they are not the same, and their measurements shouldn’t overlap significantly. If they do, the leadership test might also unintentionally assess communication.
The term discriminant validity was first introduced by Campbell and Fiske (1959) in their Multitrait-Multimethod (MTMM) Matrix. Their research highlighted that a test must correlate well with similar traits (convergent validity) and remain distinct from unrelated ones (discriminant validity).
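The MTMM logic can be sketched in code. Below is a minimal illustration, assuming two traits (leadership, communication) each measured by two methods (self-report, peer rating); the correlation matrix and all values are invented for demonstration. Campbell and Fiske's core check is that same-trait, different-method correlations (convergent) should exceed different-trait correlations (discriminant).

```python
# Hypothetical MTMM illustration: two traits x two methods.
# All numbers are invented for demonstration purposes.
import numpy as np

units = ["lead_self", "lead_peer", "comm_self", "comm_peer"]

# Invented correlation matrix between the four trait-method units.
R = np.array([
    [1.00, 0.72, 0.30, 0.25],  # lead_self
    [0.72, 1.00, 0.28, 0.33],  # lead_peer
    [0.30, 0.28, 1.00, 0.68],  # comm_self
    [0.25, 0.33, 0.68, 1.00],  # comm_peer
])

convergent = [R[0, 1], R[2, 3]]        # same trait, different method
discriminant = [R[0, 2], R[0, 3],      # different traits
                R[1, 2], R[1, 3]]

# Campbell & Fiske: convergent correlations should exceed discriminant ones.
ok = min(convergent) > max(discriminant)
print("convergent:", convergent)
print("discriminant:", discriminant)
print("passes MTMM check:", ok)
```

In this invented matrix the check passes: the weakest convergent correlation (0.68) still exceeds the strongest cross-trait correlation (0.33), so the two traits behave as distinct constructs.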
In hiring, assessments are created to measure specific competencies such as technical skills, leadership, cognitive ability, etc. If discriminant validity is missing, test results can be misleading.

For companies that use pre-employment tests, discriminant validity means each test is scientifically sound and accurately evaluates candidates.
This is where platforms like Testlify become helpful. It offers scientifically validated assessments that help recruiters make data-driven hiring decisions.
Discriminant validity vs. convergent validity
Two essential concepts come into play when evaluating a test’s accuracy: discriminant validity and convergent validity. While both fall under construct validity, their focus is on different aspects.
- Convergent validity checks whether a test correlates well with similar constructs (i.e., measuring what it should).
- Discriminant validity ensures that a test remains distinct from unrelated constructs (i.e., not measuring what it shouldn’t).
Simply put, a good test should align with related traits (high convergent validity) while staying separate from unrelated traits (high discriminant validity).
| | Convergent validity | Discriminant validity |
| --- | --- | --- |
| Focus | Measures how well related constructs correlate | Ensures unrelated constructs do not correlate |
| Goal | Tests should align with similar traits | Tests should remain distinct from unrelated traits |
| Example | A verbal reasoning test should correlate with a reading comprehension test. | A leadership test should not strongly correlate with a creativity test. |
| Ideal outcome | High correlation with similar constructs | Low correlation with unrelated constructs |
| Problem if missing | The test may not be measuring its intended trait correctly. | The test may be unintentionally measuring something else. |
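Both checks come down to computing correlations between test scores. Here is a minimal sketch using NumPy's `corrcoef`, with invented candidate scores for the three tests named in the table (verbal reasoning, reading comprehension, creativity); real analyses would use actual candidate data.

```python
# Convergent vs. discriminant check on invented candidate scores.
import numpy as np

verbal   = [72, 85, 60, 90, 68, 78, 83, 59]  # verbal reasoning
reading  = [70, 88, 62, 93, 65, 80, 85, 61]  # reading comprehension
creative = [61, 83, 55, 47, 90, 52, 78, 66]  # creativity

# Convergent: related constructs should correlate highly.
r_convergent = np.corrcoef(verbal, reading)[0, 1]

# Discriminant: unrelated constructs should correlate weakly.
r_discriminant = np.corrcoef(verbal, creative)[0, 1]

print(f"convergent r   = {r_convergent:.2f}")   # high, near 1
print(f"discriminant r = {r_discriminant:.2f}")  # near zero
```

With these invented scores the verbal reasoning test tracks reading comprehension closely (high convergent correlation) while showing almost no relationship with creativity (low discriminant correlation), which is the ideal outcome from the table above.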
Discriminant validity examples
Understanding discriminant validity becomes easier when we look at real-world examples. Let’s explore how it applies in different scenarios, especially in hiring.
Pre-employment assessments
Suppose you create a test to measure technical problem-solving skills in software engineers. While analyzing the results, you notice that candidates who score high in technical problem-solving also score high on a logical reasoning test.
What’s the issue? The high correlation suggests that the test isn’t measuring technical skills alone. This means the test lacks discriminant validity, as it overlaps with another cognitive ability.
Hiring decisions could become flawed if the test doesn’t clearly distinguish between these skills. It might lead to selecting candidates who may be good at logic but not necessarily great at coding.
Leadership vs. communication skills in employee evaluations
In this example, suppose a company wants to assess the leadership potential of its employees. However, when reviewing the test results, they find that employees who scored high on leadership also scored high on communication skills.
While leadership and communication skills are related, they are not the same. The leadership test might also unintentionally measure communication skills if the scores correlate too closely.
If HR teams use this test to promote employees, they might promote excellent communicators instead of strong leaders.
How to measure discriminant validity?
To check whether a test has strong discriminant validity, we must demonstrate that it does not correlate too highly with unrelated constructs.
If a test measuring one skill or trait overlaps too much with an unrelated skill, its discriminant validity is weak. The most common way to assess it is the Pearson's correlation coefficient (r) method.
This is the most straightforward and widely used approach to check whether two tests measuring different constructs have low correlations.
Let’s understand how it works. Pearson’s correlation coefficient (r) measures the relationship between two variables.
The correlation coefficient (r) ranges from -1 to +1: r = +1 indicates a perfect positive correlation (poor for discriminant validity), r = 0 indicates no correlation (ideal), and r = -1 indicates a perfect negative correlation (which also signals overlap rather than distinctness).
A high correlation (r ≥ 0.85) suggests poor discriminant validity, meaning the test might not measure a truly separate construct.
If a test is designed to measure leadership skills, its results shouldn’t be too similar to a teamwork test because while both are related, they are not the same.
If the correlation between the two tests is r = 0.89, it means the leadership test might also be picking up teamwork skills, making it less accurate in measuring leadership alone.
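The r ≥ 0.85 rule of thumb from above can be captured in a small helper. This is a hypothetical sketch: the function name and threshold default are illustrative, not a universal standard.

```python
# Hypothetical helper applying the r >= 0.85 rule of thumb from the text.
def discriminant_validity_ok(r: float, threshold: float = 0.85) -> bool:
    """Return True when two tests appear distinct enough (|r| below threshold).

    The absolute value is used because a strong negative correlation
    also signals that the constructs overlap rather than stay distinct.
    """
    return abs(r) < threshold

# The leadership vs. teamwork example from the text: r = 0.89.
print(discriminant_validity_ok(0.89))  # flags the overlap as a problem
```

Here the leadership/teamwork correlation of 0.89 exceeds the 0.85 threshold, so the check flags the pair: the leadership test is likely picking up teamwork as well.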
Final thoughts
Discriminant validity helps confirm that different concepts remain distinct. For HR professionals and recruiters, using assessments with strong discriminant validity means higher-quality talent selection.
Whether it’s a leadership test, problem-solving assessment, or cognitive evaluation, tests must be scientifically validated to truly reflect a candidate’s abilities.
At Testlify, we understand the importance of test validity in recruitment. Our pre-employment assessments are designed with scientifically backed methodologies.
Our tests measure the right skills without overlap. Want to ensure your hiring assessments are truly reliable? Try Testlify today!
