Appendix I - Non-Response Analysis from NSCAW I

National Survey of Child and Adolescent Well-Being Second Cohort (NSCAW II)

OMB: 0970-0202


Non-Response Analysis for NSCAW I

Analysis of Nonresponse Bias in NSCAW I


In addition to taking steps to maximize response from each type of respondent, the study team assessed the potential for nonresponse bias. The following section describes the analysis for Wave 1; analyses were also conducted for Waves 2 through 4, and those results are presented in Sections 7.9 through 7.11 of the NSCAW Data File User’s Manual. The NSCAW Wave 1 CPS data were analyzed to address the following questions:

  • Is the language in the consent forms discouraging respondents from giving complete and accurate information?

  • Do item missing rates indicate that current caregivers are concerned about the repercussions of honest and complete answers?

The results of that investigation concluded the following:

  • Although respondents may have been concerned about the privacy of their answers, there is no evidence to suggest a tendency for respondents to either falsify or withhold information, either as a result of the consent form or information from the interviewer. In addition, interviewers appear to be neutral collaborators in the interview, whose presence does not seem to have had a detrimental effect on honest reporting.

  • Sensitive items are subject to significantly greater item nonresponse than non-sensitive items (item response rates of 98.2 percent vs. 99.8 percent). However, for sensitive items the item nonresponse rate is still less than 2 percent, which is negligible for most analyses. Therefore, the tendency for respondents to either actively or passively refuse to answer sensitive questions is quite small in the study.

Another investigation was conducted to provide additional information on the extent of the bias arising from unit nonresponse, that is, the failure to obtain an interview from an NSCAW sample member. An estimate of the nonresponse bias is the difference between the sample estimate based only on respondents and a version of the sample estimate based upon both respondents and nonrespondents. In the NSCAW, a number of distinct data sources are used to obtain information on the sample child. When the sample child or caregiver did not respond to the survey, other data sources (such as the frame and caseworker data) can be used to provide information about them. Thus, it is possible to compare nonrespondents and respondents on some characteristics in order to investigate the potential nonresponse bias in the NSCAW results. In the remainder of this section, we briefly summarize the results of an investigation of nonresponse bias in the NSCAW results using the data on nonrespondents available from these other sources.

An overall indicator of the severity of nonresponse bias in the NSCAW is simply a count of the data items in our analysis for which respondents and nonrespondents differ significantly. Although this measure does not take into account either the type of comparisons that are significant or their importance for future analysis, it can be used as an indicator of the extent of the bias for general analysis objectives.

Variables used in this analysis were those that were also collected in the Wave 1 caseworker interview for the nonrespondents. However, only about 60 percent of the nonrespondents had a caseworker interview available. Consequently, the estimates of nonresponse bias are themselves subject to bias due to incomplete information from caseworkers. We did not attempt to account for this potential bias in the analysis; the results assume that nonrespondents for whom caseworker information is unavailable are similar to nonrespondents for whom caseworker data are available.

Using the data collected for CPS and LTFC sample members from caseworkers at Wave 1, we estimated the bias due to using only the data for those with a key respondent interview. Let $\bar{C}$ denote the true average of the characteristic $C$ based upon the entire target population; i.e., $\bar{C}$ is the average value of $C$ that we would estimate if we conducted a complete census of the target population. Thus, $\bar{C}$ is the target parameter that we intend to estimate with $\bar{C}_R$, the estimate based only upon the respondents. Then the bias in $\bar{C}_R$ as an estimate of $\bar{C}$ is simply the difference between the two, viz.,

$$\mathrm{Bias}(\bar{C}_R) = \bar{C}_R - \bar{C} \qquad (1)$$

The bias can be estimated as follows. Let $\bar{C}_{NR}$ denote the estimate of the average value of $C$ for the unit nonrespondents in the sample; i.e., $\bar{C}_{NR}$ is computed as $\bar{C}_R$ is, but over the nonrespondents in the sample rather than the respondents. For example, we may have information on the characteristic $C$ that is measured in the child interview from some other source, such as the caseworker or caregiver interview or the sampling frame. If that is true, then $\bar{C}_{NR}$ can be computed. From this, we can form an estimate of $\bar{C}$ using the following formula:

$$\hat{\bar{C}} = (1 - \lambda)\,\bar{C}_R + \lambda\,\bar{C}_{NR} \qquad (2)$$

where $\lambda$ is the unit nonresponse rate for the interview corresponding to the characteristic $C$. Thus, an estimator of the bias in $\bar{C}_R$ is obtained by substituting $\hat{\bar{C}}$ in (2) for $\bar{C}$ in (1). This results in the following estimator

$$\widehat{\mathrm{Bias}}(\bar{C}_R) = \bar{C}_R - \hat{\bar{C}} = \bar{C}_R - \left[(1 - \lambda)\,\bar{C}_R + \lambda\,\bar{C}_{NR}\right] \qquad (3)$$

or, equivalently,

$$\widehat{\mathrm{Bias}}(\bar{C}_R) = \lambda\,(\bar{C}_R - \bar{C}_{NR}) \qquad (4)$$

That is, the estimator of the nonresponse bias for C is equal to the nonresponse rate for the interview that collects C times the difference in the average of C for respondents and nonrespondents.
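
As a simple numerical illustration of equations (2) and (4), the sketch below computes the combined estimate and the estimated bias from a respondent mean, a nonrespondent mean, and a unit nonresponse rate. All values and names in the sketch are hypothetical and are not taken from NSCAW data.

```python
# Illustrative sketch of the bias estimator in equations (2) and (4),
# using hypothetical values (not NSCAW data).

def estimate_bias(c_resp, c_nonresp, nonresp_rate):
    """Return the full-sample estimate and estimated bias per equations (2) and (4)."""
    c_full = (1 - nonresp_rate) * c_resp + nonresp_rate * c_nonresp  # equation (2)
    bias = nonresp_rate * (c_resp - c_nonresp)                       # equation (4)
    return c_full, bias

# Hypothetical example: 35% of respondents vs. 30% of nonrespondents have the
# characteristic C, with a 20% unit nonresponse rate.
c_full, bias = estimate_bias(c_resp=0.35, c_nonresp=0.30, nonresp_rate=0.20)
rel_bias = 100 * bias / c_full    # relative bias, in percent
print(c_full, bias, rel_bias)     # 0.34, 0.01, ~2.9%
```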

We estimated these means and their standard errors using the weights and accounting for the survey design, as described in Section B.2.2. We estimated the nonrespondent mean, $\bar{C}_{NR}$, using the unadjusted base weight. We estimated the mean for respondents, $\bar{C}_R$, in two ways: (1) using the unadjusted base weight, and (2) using the final adjusted analysis weight. This allowed us to see whether the bias was reduced by applying the nonresponse and post-stratification adjustments to the weights.
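
The weighted means themselves can be illustrated with a minimal sketch. The records and weights below are hypothetical; the actual NSCAW estimates also account for the complex survey design described in Section B.2.2, which this sketch does not reproduce.

```python
# Minimal sketch of weighted means for respondents and nonrespondents.
# Each hypothetical record: (value of C, is_respondent, base weight, final analysis weight).
# Nonrespondents carry no final analysis weight, so their mean uses the base weight only.
cases = [
    (1, True,  120.0, 135.0),
    (0, True,   95.0, 110.0),
    (1, True,  150.0, 160.0),
    (1, False, 100.0, None),
    (0, False,  80.0, None),
]

def weighted_mean(records, w_index):
    """Weight-based mean of C over the given records, using the weight in column w_index."""
    total_weight = sum(r[w_index] for r in records)
    return sum(r[0] * r[w_index] for r in records) / total_weight

respondents = [r for r in cases if r[1]]
nonrespondents = [r for r in cases if not r[1]]

c_nr_base = weighted_mean(nonrespondents, 2)   # nonrespondent mean, base weight
c_r_base = weighted_mean(respondents, 2)       # respondent mean, base weight
c_r_final = weighted_mean(respondents, 3)      # respondent mean, final analysis weight
print(c_r_base, c_r_final, c_nr_base)
```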

We first tested the null hypothesis that the bias is 0 at α = 0.05, i.e., H0: Bias = 0. We used a t-statistic for the test and Taylor series linearization to estimate the standard errors. Variables with fewer than 20 cases in the denominators of the proportions or means were excluded from the analyses. Because of the dependencies among the tests, we used the largest k-1 categories when a variable had k levels. We counted the number of times that the null hypothesis was rejected.
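
To make the testing step concrete, the sketch below shows the form of a two-sided t test of H0: Bias = 0 with purely hypothetical inputs. In the actual analysis the standard error of the estimated bias is a design-based (Taylor series linearized) standard error produced by survey estimation software; the values and degrees of freedom here are illustrative only.

```python
# Illustrative two-sided t test of H0: Bias = 0 at alpha = 0.05.
# All inputs are hypothetical; in practice the standard error comes from
# Taylor series linearization under the survey design.
from scipy import stats

bias_hat = 0.012   # estimated bias from equation (4) (hypothetical)
se_bias = 0.004    # design-based standard error of the estimated bias (hypothetical)
df = 80            # design degrees of freedom (hypothetical)

t_stat = bias_hat / se_bias
p_value = 2 * stats.t.sf(abs(t_stat), df)
significant = p_value < 0.05   # counted as a rejection of H0: Bias = 0
print(round(t_stat, 2), round(p_value, 4), significant)
```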

Exhibit B.3-2 summarizes the results of this analysis. The analysis for children is limited to those who were key respondents (i.e., age 11 or older); these children were eligible to be interviewed, and their assent was necessary for the interview to proceed. In the CPS data, for the child interview, the number of significant tests is slightly more than the number expected purely by chance (6.9 percent using the final analysis weight). For the caregiver interview, the analysis indicates more variables with significant bias than would be expected by chance (13.8 percent).

We examined the variables with significant bias. The biases, while statistically significant because of the large NSCAW sample size, were generally small and not practically significant. For this reason, we also tested a hypothesis of practical significance: that the relative bias is small. Specifically, we tested the null hypothesis H0: |Relative Bias| < 5 percent, where the relative bias is calculated as 100 × Bias / $\hat{\bar{C}}$, and counted the number of times that this hypothesis was rejected. Exhibit B.3-2 shows the number of rejections at α = 0.05, using both sets of weights. For the CPS sample, with the final analysis weight, the proportion of variables with practically significant relative bias is four percent, within the range of what would be expected by chance. Thus, we conclude that nonresponse bias in the CPS sample is unlikely to be consequential for most types of analyses.
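
The section does not spell out the exact form of the practical-significance test, but one plausible, purely illustrative construction is a one-sided comparison of the estimated absolute relative bias against the 5 percent threshold, as sketched below with hypothetical values.

```python
# Purely illustrative one-sided test of H0: |Relative Bias| < 5 percent.
# The test construction and all numbers are assumptions for illustration,
# not the exact procedure used in the NSCAW analysis.
rel_bias_hat = 7.2    # estimated relative bias, in percent (hypothetical)
se_rel_bias = 0.9     # design-based standard error of the relative bias (hypothetical)
threshold = 5.0       # practical-significance bound, in percent
t_critical = 1.66     # approximate one-sided critical value at alpha = 0.05

t_stat = (abs(rel_bias_hat) - threshold) / se_rel_bias
reject = t_stat > t_critical   # rejection => relative bias judged practically significant
print(round(t_stat, 2), reject)
```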

Variables showing practically significant bias in the CPS sample related to the type and severity of abuse/neglect, the relationship of the primary caregiver to the child, the likelihood of abuse/neglect in the next 12 months without services, child placement in a group home, and whether the outcome of the investigation was substantiated. The actual bias in these variables was small (less than 10 percent).

Exhibit B.3-2 also shows the results for the LTFC sample. When using the final nonresponse-adjusted analysis weight, approximately four percent of the tests that the bias is zero were significant at a five percent alpha level, and less than one percent of the tests that the relative bias is small were significant. This analysis also suggests that the bias was reduced by applying the nonresponse adjustment to the weights. Thus, there is no evidence of nonresponse bias in the LTFC data.

Exhibit B.3-2. Number of Significant Biases Observed by Type of Respondent for the CPS and LTFC Samples

Caregiver

                                                      CPS Sample                      LTFC Sample
                                                      Base Weight   Final Analysis    Base Weight   Final Analysis
                                                                    Weight                          Weight
Items with more than 20 cases in the denominator      500           500               1,107         1,107
Items where H0: Bias = 0 was rejected                 83 (16.6%)    69 (13.8%)        187 (16.9%)   50 (4.5%)
Items where H0: |Relative Bias| < 5% was rejected     33 (6.6%)     19 (3.8%)         32 (2.9%)     4 (0.4%)

Child

                                                      CPS Sample                      LTFC Sample
                                                      Base Weight   Final Analysis    Base Weight   Final Analysis
                                                                    Weight                          Weight
Items with more than 20 cases in the denominator      478           478               802           802
Items where H0: Bias = 0 was rejected                 48 (10.0%)    33 (6.9%)         108 (13.5%)   33 (4.1%)
Items where H0: |Relative Bias| < 5% was rejected     45 (9.4%)     19 (4.0%)         26 (3.2%)     8 (1.0%)


Exhibit B.3-3 indicates that the response rate tends to be slightly lower for children aged 11 to 14 in the LTFC sample than for children 10 or younger. This suggests that the potential for nonresponse bias is greater for older children and their caregivers. This effect of age on nonresponse was not apparent in the previous analysis because those data were analyzed separately by key respondent type: child and caregiver. (For NSCAW, the caregiver was the key respondent when the child was less than 11 years old.) Therefore, the nonresponse bias results for children included only children who were at least 11 years old. Still, the lack of evidence for nonresponse bias in the previous analysis suggests that any greater bias for older children was quite small.

Exhibit B.3-3. Response Rates by Age of Child for the LTFC Sample at Wave 1

Age                 # of respondents   % unweighted response rate   % weighted response rate
0 - 2 years old     246                76.64                        78.94
3 - 5 years old     122                71.35                        64.37
6 - 10 years old    196                73.41                        76.07
11 - 14 years old   163                69.07                        69.41
TOTAL               727                73.07                        73.41




