National Survey of Family Growth

Nonresponse Bias Analysis Plan


OMB ATTACHMENT N1

OMB No. 0920-0314

Date: April 5, 2021

To: National Survey of Family Growth (NSFG) Team – National Center for Health Statistics

From: Taylor Lewis and Andy Peytchev, RTI International

Subject: Nonresponse Bias Analysis Plan

Background
Any survey subject to nonresponse runs the risk of nonresponse bias, defined as the difference between a statistic calculated from the respondents and the value that would be obtained from the full sample. This quantity can seldom be computed directly, but several methods exist to inform the potential for nonresponse bias. Because each method rests on different analytic assumptions, the use of multiple methods is desirable. The purpose of this memorandum is to outline the analyses we plan to conduct to gauge the potential for nonresponse bias arising from unit nonresponse at both the household and individual levels for the 2022 – 2029 National Survey of Family Growth (NSFG). As stipulated in Guideline 1.3.4 of Office of Management and Budget (2006), such an analysis is required for any federally sponsored survey that does not achieve a unit response rate of 80% or higher. Our current expectation for the NSFG is a (weighted) unit response rate of 60%.
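To make the definition concrete, the bias of the respondent mean admits a standard decomposition; the notation below (n sampled cases, r respondents, m = n - r nonrespondents) is ours, added for illustration:

```latex
% \bar{y}_r, \bar{y}_m, \bar{y}_n: respondent, nonrespondent, and
% full-sample means of an outcome y.
\[
\bar{y}_n = \frac{r}{n}\,\bar{y}_r + \frac{m}{n}\,\bar{y}_m
\quad\Longrightarrow\quad
\mathrm{bias}(\bar{y}_r) = \bar{y}_r - \bar{y}_n = \frac{m}{n}\left(\bar{y}_r - \bar{y}_m\right).
\]
```

The bias thus scales with both the nonresponse rate m/n and the respondent-nonrespondent difference, which is why a high response rate alone does not guarantee low nonresponse bias.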
The target population for the NSFG is men and women aged 15-49 at the time of screening. A total of 5,000 main survey completes are targeted in each NSFG data collection year. To account for ineligibility (e.g., vacancy, households without any individuals in the targeted age range) and unit nonresponse, we plan to begin data collection activities with a sample of 19,272 housing units (HUs). In the NSFG design, nonresponse occurs at two stages: gaining participation in the screener survey, in which the eligibility of household members is established and sample members for the main interview are selected, and gaining participation in the main survey, which collects data from the selected sample members.
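As a rough, illustrative check on how the release of 19,272 HUs relates to the target of 5,000 completes, the sketch below backs out the implied overall yield and decomposes it into eligibility and response components. All rate inputs are hypothetical assumptions chosen for illustration, not NSFG design parameters.

```python
# Illustrative yield calculation relating released housing units (HUs) to
# expected main-survey completes. All rate inputs are assumptions for
# illustration only, not actual NSFG design parameters.

def expected_completes(n_hus, p_eligible, rr_screener, rr_main):
    """Expected main-survey completes from a release of n_hus housing units.

    p_eligible  : share of sampled HUs that are occupied and contain at
                  least one person aged 15-49 (assumed)
    rr_screener : screener response rate among eligible HUs (assumed)
    rr_main     : main-interview response rate among selected persons (assumed)
    """
    return n_hus * p_eligible * rr_screener * rr_main

n_hus, target = 19_272, 5_000

# The planned release implies an overall yield of target / n_hus.
print(f"implied overall yield: {target / n_hus:.1%}")  # 25.9%

# One hypothetical decomposition of that yield into its components.
print(round(expected_completes(n_hus, p_eligible=0.60, rr_screener=0.72,
                               rr_main=0.60)))  # ~4995, near the target
```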
Numerous design features within a responsive design framework (Groves and Heeringa, 2006; Schouten,
Peytchev, and Wagner, 2017) will be put in place to maximize response rates and minimize the risk of
nonresponse bias in the 2022 – 2029 NSFG. Sampled HUs will be contacted in multiple modes across
multiple data collection phases. In addition, a more intensive nonrespondent follow-up procedure (Hansen
and Hurwitz, 1946; Deming, 1953) with increased incentive levels will be implemented for a subsample of nonrespondents towards the end of each quarterly sample release. A nonresponse follow-up (NRFU) will be conducted in Quarters 1 and 2 of 2022 for screener nonrespondents and for main survey nonrespondents, using short mailed instruments designed to provide information on NSFG nonrespondents. In the
next section, we briefly review the planned data collection protocol. Thereafter, we describe our
proposed methods for assessing nonresponse bias.
NSFG Data Collection Protocol
Data collection activities for the quarterly sample releases, conducted by web and face-to-face (FTF), will last a total of 20 weeks and will consist of the following four phases:
1. Phase 1: Web (weeks 1 – 4). Sampled HUs will be contacted by mail with a prepaid incentive of
$2 to encourage them to complete the screener survey by web and continue on to the main survey,
if applicable, for an additional $40.
2. Phase 2: FTF and Web (weeks 5 – 12). FTF follow-up will begin for sampled HUs that have not
completed the screener survey in Phase 1 and for HUs that completed the screener survey but
whose sampled individual has not completed the main survey. Transitioning a case from web to FTF contact does not remove the opportunity for the respondent to complete the screener or main survey by web, if preferred.
3. Phase 3: FTF (weeks 13 – 16). Analogous to Phase 2 in the 2011 – 2019 NSFG, a subsample of
HUs not completing the screener survey and individuals not completing the main survey will be
selected for a more intensive data collection protocol with higher incentives: an additional $5
prepaid screening incentive and a doubled incentive of $80 for completing the main survey.
4. Phase 4: Mail (weeks 17 – 20). A subsample, determined by budgetary constraints, of
nonresponding HUs will be mailed a one-page eligibility survey. Similarly, a subsample of main
survey nonrespondents will be mailed a one-page questionnaire containing several questions of key importance to the NSFG, chosen in part because they can reasonably be asked outside the context of the full main survey. Both sets of mailings will include a $1 prepaid incentive. Note that Phase 4 is
not intended to collect data for the full main survey. The primary objective of the eligibility
survey is to identify additional ineligible HUs that can be removed from the denominator of
response rate calculations (the sketch following this list illustrates how estimated eligibility enters a weighted response rate). Data from the one-page main survey questionnaire can be used to measure nonresponse bias and to inform nonresponse weighting adjustments.
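To make that denominator adjustment concrete, here is a minimal sketch of a weighted screener response rate in the spirit of AAPOR's RR3, in which cases of unknown eligibility enter the denominator scaled by an estimated eligibility rate e. The function name, status codes, and the way e is estimated are our illustrative assumptions, not NSFG production rules.

```python
import numpy as np

def weighted_rr3(w, status):
    """Weighted screener response rate in the spirit of AAPOR RR3.

    w      : base weights for each sampled HU
    status : 'complete', 'eligible_nonresponse', 'ineligible', or
             'unknown' for each sampled HU (illustrative codes)
    """
    w, status = np.asarray(w, dtype=float), np.asarray(status)
    comp    = w[status == "complete"].sum()
    elig_nr = w[status == "eligible_nonresponse"].sum()
    inelig  = w[status == "ineligible"].sum()
    unknown = w[status == "unknown"].sum()
    # Estimate the eligibility rate e among unknowns from cases whose
    # eligibility status is known. Phase 4 eligibility-survey returns would
    # reclassify some 'unknown' cases as 'ineligible' before this step,
    # shrinking the denominator.
    e = (comp + elig_nr) / (comp + elig_nr + inelig)
    return comp / (comp + elig_nr + e * unknown)

# Toy example with six sampled HUs
w = [1.2, 1.2, 0.8, 1.0, 1.0, 1.1]
status = ["complete", "eligible_nonresponse", "ineligible",
          "unknown", "complete", "unknown"]
print(f"weighted RR3 = {weighted_rr3(w, status):.1%}")  # 43.1%
```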
Nonresponse Bias Analyses
Table 1 outlines our overall strategy for evaluating nonresponse bias, which can be categorized into three types of analysis. The first is to examine descriptive statistics on response rates. The second and third are investigations into what Little and Rubin (2019) refer to as ignorable and nonignorable nonresponse bias, respectively. The former is evidenced by observable differences in demographic characteristics and key survey outcomes, differences that can be detected and corrected in the NSFG estimates through weighting adjustments. Analysis of ignorable nonresponse bias also offers an opportunity to assess the efficacy of each phase of data collection, since meaningful mitigation of nonresponse bias requires that the responses obtained in each successive phase substantively shift the distributions of the survey outcomes.
Nonignorable nonresponse bias is of greatest interest and generally requires special designs to measure.
Our Phase 4 NRFU data collection is designed with this goal in mind. As described in Section 4 of OMB
No. 0920-0314 Supporting Statement A, comparable estimates to benchmark NSFG key outcomes against
are difficult to identify, either because of population coverage incompatibilities or scope (e.g.,
pregnancies resulting in a live birth vs. all pregnancies). We can still make use of benchmark estimates
for comparisons of change over time that should be less susceptible to measurement differences across
surveys.
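Stated compactly in Little and Rubin's (2019) terms, with R a response indicator, X covariates observed for the full sample, and Y a survey outcome (our notation, added for illustration), nonresponse is ignorable when:

```latex
% Ignorable nonresponse (missing at random given X):
\[
\Pr(R = 1 \mid X, Y) = \Pr(R = 1 \mid X),
\]
% so adjustments based on X (e.g., propensity weighting) can remove the
% bias. Under nonignorable nonresponse, response depends on Y even after
% conditioning on X, which is why special designs such as the Phase 4
% NRFU are needed to measure it.
```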
Table 1: Three Types of Nonresponse Bias Analyses to be Conducted for NSFG 2022 – 2029. Each analysis type is followed by examples of the planned outputs.

1. Descriptive Statistics on Response Rates
   • Tables of response rates by data collection phase, pertinent sampling frame variables, paradata, and demographics
   • Plots of trends in response rates over quarterly sample releases

2. Investigations into Ignorable Nonresponse Bias
   • Tables comparing base-weighted demographic distributions of respondents against target population figures derived from the American Community Survey
   • Presentation and commentary on parameters of the response propensity model(s) utilized to adjust for unit nonresponse (a sketch follows the table)
   • Tables comparing base-weighted key outcomes of respondents from Phases 1 and 2 against respondents from Phase 3

3. Investigations into Nonignorable Nonresponse Bias
   • Tables comparing base-weighted key outcomes using respondents from Phases 1 and 2 against base-weighted key outcomes using respondents from Phases 1, 2, and 3
   • Tables comparing distributions of key outcomes and demographics derived from the Phase 4 NRFU one-page questionnaire against distributions observed for respondents in (1) Phase 3 only and (2) Phases 1, 2, and 3 combined
   • Comparisons of trends in NSFG estimates to estimates of change in other surveys (e.g., from the National Health Interview Survey)
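As one concrete instance of the second analysis type in Table 1, the sketch below fits a response propensity model and forms a class-based nonresponse weighting adjustment. The covariates (urban, num_contacts), the simulated response mechanism, and the quintile classes are all hypothetical, chosen only to illustrate the mechanics; the NSFG production weighting specification is not reproduced here.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative response propensity model with a class-based nonresponse
# weighting adjustment. Covariates, the simulated response mechanism, and
# the quintile classes are hypothetical, not the NSFG specification.

rng = np.random.default_rng(314)
n = 2_000
frame = pd.DataFrame({
    "base_weight":  rng.uniform(0.5, 2.0, n),
    "urban":        rng.integers(0, 2, n),   # hypothetical frame variable
    "num_contacts": rng.poisson(3, n),       # hypothetical paradata
})
# Simulate response for the sketch only: propensity depends on covariates.
true_logit = -0.5 + 0.4 * frame["urban"] - 0.2 * frame["num_contacts"]
frame["responded"] = rng.random(n) < 1 / (1 + np.exp(-true_logit))

# Fit the response propensity model on the full sample (response status is
# known for respondents and nonrespondents alike).
X = frame[["urban", "num_contacts"]]
model = LogisticRegression().fit(X, frame["responded"])
frame["propensity"] = model.predict_proba(X)[:, 1]

# Form five weighting classes from propensity quintiles and divide each
# respondent's base weight by the weighted response rate of its class.
frame["cls"] = pd.qcut(frame["propensity"], 5, labels=False)
tot_w  = frame.groupby("cls")["base_weight"].sum()
resp_w = frame[frame["responded"]].groupby("cls")["base_weight"].sum()
rr_cls = resp_w / tot_w

resp = frame[frame["responded"]].copy()
resp["nr_adj_weight"] = resp["base_weight"] / resp["cls"].map(rr_cls)

# The adjusted respondent weights reproduce the full-sample weight total.
print(frame["base_weight"].sum(), resp["nr_adj_weight"].sum())
```

Because the adjustment is formed within classes, the adjusted respondent weights restore the full-sample base-weight total exactly, which the final print statement verifies.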

References
Deming, W. (1953). On a probability mechanism to attain an economic balance between the resultant error of response and the bias of nonresponse. Journal of the American Statistical Association, 48, 743-772.

Groves, R., and Heeringa, S. (2006). Responsive design for household surveys: Tools for actively controlling survey errors and costs. Journal of the Royal Statistical Society: Series A, 169, 439-457.

Hansen, M., and Hurwitz, W. (1946). The problem of nonresponse in sample surveys. Journal of the American Statistical Association, 41, 517-529.

Little, R., and Rubin, D. (2019). Statistical analysis with missing data. 3rd ed. New York, NY: Wiley.

Office of Management and Budget. (2006). Standards and guidelines for statistical surveys. Available at: https://www.ftc.gov/system/files/attachments/data-quality-act/standards_and_guidelines_for_statistical_surveys_-_omb_-_sept_2006.pdf

Schouten, B., Peytchev, A., and Wagner, J. (2017). Adaptive survey design. Boca Raton, FL: CRC Press.


