The Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014) include the following statements about validity:

Validity refers to the degree to which evidence and theory support the interpretation of test scores entailed by proposed uses of tests. Validity, therefore, is the most fundamental consideration in developing and evaluating tests.

In 1990, Welsh, Kucinkas, and Curran compiled an exhaustive review of research aimed at examining the validity of the ASVAB. Their discussion segmented the literature into studies that examined three types of validity: content validity, construct validity, and criterion-related validity.

  • Content validity is defined as assurance that the content of the test itself measures the domain of interest and does not tap into unrelated domains. Evidence cited for the content validity of the ASVAB included (a) documentation reviews that substantiated the test development process and (b) item-level factor analyses of data from a nationally representative sample of American youth, which found that each of the ASVAB subtests was relatively free of contamination from unrelated content.
  • Construct validity is concerned with whether the test measures the psychological characteristic it is intended to measure. The authors highlighted a number of studies in which the ASVAB was given in conjunction with other tests that purported to tap the same or overlapping domains. In each case, the correlations between scores on a given ASVAB subtest and scores on similar tests were high.
  • Criterion-related validity refers to whether a test predicts the outcomes of interest. In regard to the ASVAB, the authors discussed research examining the relationship between ASVAB scores and training performance, attrition, and on-the-job performance. In each case, significant relationships were found, demonstrating that ASVAB scores do predict each of these outcomes (a simple illustration of how such a validity coefficient is computed follows this list).
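
To make the criterion-related idea concrete, the minimal sketch below computes a validity coefficient as the Pearson correlation between a predictor (a test score) and a later performance criterion. The scores are invented for illustration and are not drawn from any ASVAB or Project A dataset.

```python
# Illustrative sketch only: the values below are made up, not real ASVAB data.
import numpy as np

# Hypothetical ASVAB composite scores for ten recruits
asvab_composite = np.array([42, 55, 61, 48, 70, 65, 53, 59, 74, 50], dtype=float)

# Hypothetical end-of-training performance scores for the same recruits
training_score = np.array([61, 72, 78, 66, 88, 80, 70, 75, 90, 68], dtype=float)

# Criterion-related validity is typically summarized as the Pearson correlation
# between the predictor (test score) and the criterion (later performance).
validity_coefficient = np.corrcoef(asvab_composite, training_score)[0, 1]
print(f"Criterion-related validity coefficient: r = {validity_coefficient:.2f}")
```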


Joint-Service Job Performance Measurement Project

The largest effort to validate the ASVAB came in the form of the Joint-Service Job Performance Measurement Project (JPM). In the late 1970s, Congress became concerned that a good deal of time and money was being spent on trying to enlist high-quality recruits (determined in part by ASVAB scores) without substantial evidence of a strong relationship between performance on the ASVAB and subsequent performance in military service. In response to this mandate to investigate the issue, the Army took the major role by initiating what became known as Project A. This massive undertaking involved developing hands-on and other performance tests and administering them to Soldiers at various times in their careers. Administrative data, such as performance ratings and awards and citations, were also collected and amassed into the largest database of its kind ever assembled. Much of this work is summarized in the volume Exploring the Limits of Personnel Selection and Classification (Campbell & Knapp, 2001).

In short, the research demonstrated that performance on the ASVAB did predict subsequent military performance, not just in training but throughout a Soldier’s career. For instance, researchers found that the correlation between ASVAB composites and Core Technical Proficiency (as measured by hands-on and job-knowledge tests) was .69 for Soldiers in their second tour of duty.
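
As a rough way to read a coefficient of that size, squaring the correlation gives the proportion of criterion variance shared with the predictor. The snippet below is only a back-of-the-envelope interpretation of the figure reported above, not a reanalysis of the Project A data.

```python
# Back-of-the-envelope interpretation of the reported validity coefficient.
# The value 0.69 is the correlation cited above; nothing here re-derives it.
r = 0.69
shared_variance = r ** 2  # coefficient of determination
print(f"r = {r:.2f} implies about {shared_variance:.0%} shared variance")  # ~48%
```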