
The National Violence Against Women Survey (NVAWS) had a relatively weak sampling procedure. NVAWS contacted persons through random-digit dialing of telephone numbers to reach residential households. NVAWS thus excluded persons living on the street and in households without telephones. In addition, persons living in “institutions, churches, half-way houses, and dormitories” were screened out.^ Only one randomly chosen eligible person was interviewed per successful household-telephone contact. Since household size and the number of telephone numbers per household vary, the sample structure of the survey is quite complicated. NVAWS didn’t include information to account statistically for this complicated survey design.
The NVAWS response rate was low. In the course of generating 16,000 completed surveys, 13,657 households were screened out, 12,160 persons refused to participate, and 423 persons terminated the interview partway through.^ In addition, five male respondents who completed the survey were eliminated from the sample because they had “an excessive amount of incongruous data.”^ Why a large share of households were screened out isn’t clear. The number of respondents who refused to participate or terminated the interview amounted to 79% of completed interviews. That extent of attrition leaves considerable room for sample selection bias.
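The 79% figure can be checked directly from the counts above (a quick sketch; interview terminations are counted together with refusals):

```python
completed = 16_000   # completed surveys
refused = 12_160     # persons who refused to participate
terminated = 423     # persons who terminated the interview partway through

share = (refused + terminated) / completed
print(f"{share:.0%}")  # → 79%
```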
The NVAWS sample was not nationally representative in important respects. NVAWS documentation observed:
differences between point estimates from the NVAW Survey and those from the CPS {Current Population Survey} are outside the expected margin of error (i.e. are not included in the 95-percent confidence interval computed from NVAW Survey estimates) for some demographic characteristics. Specifically, the NVAW Survey sample underrepresents older people, African-Americans, Hispanic men, and those with less than a high school education. To a lesser degree, those less than 30 are also underrepresented. Complementary groups (e.g. the middle-aged, whites, and the college educated) are overrepresented.^
These demographic biases are easy to recognize. Evaluating sample-selection bias in reporting violence, by contrast, requires an alternative measure of violence. NVAWS did not evaluate its sample-selection bias with respect to reporting violence.
Compared to NVAWS, the National Crime Victimization Survey (NCVS) has a more comprehensive sample and a higher response rate. NCVS uses a geographic multi-stage cluster design based on Census and building-permit data. NCVS includes persons living in group quarters such as “dormitories, rooming houses, and religious group dwellings.” From 1996 to 2003, about 93% of households that NCVS sought to include in its sample participated in the NCVS. Among eligible persons (persons ages 12 and older) within households that NCVS sought for its sample, about 89% participated.^
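If the household-level and person-level rates compound (an assumption for illustration; the source reports the two rates separately), the overall person-level response rate works out to roughly:

```python
household_rate = 0.93  # households that participated, 1996–2003
person_rate = 0.89     # eligible persons who participated within sampled households

overall = household_rate * person_rate
print(f"{overall:.0%}")  # → 83%
```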
NCVS also has a statistically better approach to producing national estimates than does NVAWS. NCVS provides both record weights and sample-design information for producing nationally representative estimates. Publicly available NVAWS data do not include weights. NVAWS documentation explains that except for “a few small but significant differences…for some outcome measures,” “the differences between weighted and unweighted samples and outcomes were not large enough to make weighting mandatory.”^ That’s an odd explanation. To the extent that good weights were actually constructed, including them in the public dataset would not have made their use mandatory. Weights help to focus attention on sample design, reduce sample bias, and adjust for aggregate patterns of non-response. The lack of weights undermines the credibility of NVAWS statistics.
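A minimal post-stratification sketch, using hypothetical numbers rather than NVAWS data, illustrates how weights adjust estimates for groups that a sample underrepresents:

```python
# Hypothetical two-group example (not NVAWS data). The sample
# underrepresents group B relative to its known population share.
pop_share = {"A": 0.5, "B": 0.5}     # population shares (e.g., from CPS)
sample_share = {"A": 0.7, "B": 0.3}  # shares observed in the sample

# Post-stratification weight: population share / sample share.
weights = {g: pop_share[g] / sample_share[g] for g in pop_share}

# Hypothetical group-level victimization rates measured in the sample.
rate = {"A": 0.10, "B": 0.20}

unweighted = sum(sample_share[g] * rate[g] for g in rate)
weighted = sum(sample_share[g] * weights[g] * rate[g] for g in rate)
print(round(unweighted, 2), round(weighted, 2))  # → 0.13 0.15
```

The unweighted estimate (0.13) tilts toward the overrepresented group; the weighted estimate (0.15) restores each group to its population share. Without weights, NVAWS users cannot make this correction.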
In addition to weaknesses in sample selection and sample weighting, NVAWS has other serious weaknesses. The U.S. National Electronic Injury Surveillance System All-Injury Program (NEISS AIP) provides much better quality national estimates of serious incidents of domestic violence than does NVAWS.