SUPPORTING STATEMENT
FOR PAPERWORK REDUCTION ACT SUBMISSION
OMB Number: 1820-XXXX
Revised 10/3/2024
B.1. Describe the potential respondent universe (including a numerical estimate) and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, state and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.
For each administrative and primary data source proposed, Exhibit B.1 summarizes the respondent universe, sampling method, sample size, and expected response rate.
Exhibit B.1. Respondent universe, sample, and expected response rate for study data sources
| Data Sources and Respondent | Respondent Universe | Type of Sample | Sample Size | Expected Response Rate |
| --- | --- | --- | --- | --- |
| Site visits | | | | |
| Semi-structured interviews | | | | |
| Participants | 112 | Purposive sample | 112 | 100% |
| Parents and guardians | 112 | Purposive sample | 112 | 100% |
| Providers | 84 | Purposive sample | 84 | 100% |
| Focus groups with providers, partners, and families | 84 | Purposive sample | 84 | 100% |
| Survey data collection | | | | |
| Participant baseline survey | 7,500 | Census | 7,500 | 100% |
| Participant follow-up survey | 3,750 | Census | 2,625 | 70% |
| Parent and guardian follow-up survey | 3,750 | Purposive sample | 2,625 | 70% |
| Administrative data collection | | | | |
| Staff records | 56 | Census | 56 | 100% |
| Project administrative data | 140 | Census | 140 | 100% |
| Project cost data | 28 | Census | 28 | |
B.2. Describe the procedures for the collection of information, including:
Statistical methodology for stratification and sample selection.
Estimation procedure.
Degree of accuracy needed for the purpose described in the justification.
Unusual problems requiring specialized sampling procedures.
Any use of periodic (less frequent than annual) data collection cycles to reduce burden.
B.2.1 Statistical Methodology for Stratification and Sample Selection
All grantees will be included in each data collection effort. The sampling approach varies by data collection effort and is described in more detail below.
Site Visits
All project sites will be included in qualitative data collection efforts. During each visit, the site visit team will conduct interviews, focus groups, and on-site observations with project leaders, staff from partner organizations and employers (including 14(c) certificate holders), participants, and their parents or guardians. We will purposively select interview and focus group sample members, working with the project director to identify people for each interview based on their project roles, characteristics (such as age and subminimum wage employment involvement), and time involved in the project. We will also select participants from populations typically underserved by state vocational rehabilitation (VR) agencies (such as people from specific racial and ethnic groups or those living in rural areas). We expect to meet with 12 to 15 individuals in each of the 14 projects during the site visits.
Survey Data Collection
Participant Baseline and Follow-up Survey. The study team will not use sampling methods for the baseline survey; the team will collect data from the census of youth (ages 18 to 24) and adults (ages 25 and older) who enroll in Subminimum Wage to Competitive Integrated Employment (SWTCIE) projects and consent to participate in the study, with an expected sample size of 7,500. The follow-up survey will be conducted with everyone who completed the baseline survey. The study team expects a 70 percent response rate on the participant follow-up survey and the parent and guardian survey for two reasons. First, recent data collection experiences with similar populations (Mann et al. 2021; Matulewicz et al. 2017; Sevak et al. 2021) revealed challenges with tracking respondents over time and hesitation to participate in a federal survey. Second, response rates to surveys have generally declined over time (Williams and Brick 2018; Dutwin and Buskirk 2021).
Parent and Guardian Follow-up Survey. The study team will identify respondents for the parent and guardian survey based on two criteria. First, the study team will survey parents and guardians of all participants who were age 18 or younger at baseline. Second, the study team will include all parents and guardians of participants who required proxies to complete the baseline survey. The study team expects to obtain a 70 percent response rate for the parent and guardian survey (for similar reasons described above), for a sample size of 2,625.
Administrative Data Collection
The study team will not use sampling methods for administrative data; the team will collect administrative data from all projects for all participants and staff.
B.2.2 Estimation Procedures
To address the research questions, the study team will use five types of analysis methods across the study; Exhibit B.2 indicates the analysis types to be used for each research question.
Participation analysis. The participation analysis will describe participants’ characteristics, including whether they represent populations historically underserved by VR agencies and people with high support needs, and compare them with other populations with disabilities. It will also analyze participants’ enrollment progress to understand whether the projects achieved their stated goals. For employers and service providers, the participation analysis will describe types of partnerships and their roles in the projects. Finally, the participation analysis will examine employer and provider recruitment and engagement strategies across projects and highlight lessons and best practices for 14(c) certificate holders, other employers, and service provider enrollment. This analysis will rely on data from interviews and focus groups, participant and parent and guardian surveys, and administrative data. For measures using continuous scales, the study team will calculate means, percentiles of distributions, and standard deviations to describe central tendency and variation. For categorical scales, the study team will use frequency distributions and percentages. For qualitative data, we will synthesize diverse perspectives using a flexibly designed coding approach to facilitate analysis by research question, intervention component, intervention level, and focus area (context, components, connections, infrastructure, and scale).
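As an illustration of these descriptive methods, the sketch below shows how such tabulations might be produced. It assumes a pandas DataFrame with hypothetical column names; it is not the study's actual variable list or analysis code.

```python
# A minimal sketch of the descriptive tabulations described above; column
# names here are hypothetical placeholders, not the study's actual variables.
import pandas as pd

def describe_participants(df: pd.DataFrame):
    # Continuous measures: means, percentiles, and standard deviations.
    continuous_summary = df[["age", "monthly_earnings"]].describe(
        percentiles=[0.25, 0.50, 0.75]
    )
    # Categorical measures: frequency distributions and percentages.
    categorical_summary = {
        col: df[col].value_counts(normalize=True).mul(100).round(1)
        for col in ["disability_type", "rural_residence"]
    }
    return continuous_summary, categorical_summary
```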
Implementation analysis. The implementation analysis will explore participants' experiences and service use, SWTCIE project operations, engagement with other organizations, and system change efforts across the projects. The analysis will inform how projects deliver core components of services to participants; engage organizations such as 14(c) certificate holders, other employers, and service providers; and advance system change efforts. The implementation analysis will examine project structure, including partners, staff, and design. Finally, the implementation analysis will provide insight to the Rehabilitation Services Administration (RSA) that it can share with project staff as formative feedback to inform potential midcourse corrections in implementation. This analysis will rely on data from interviews and focus groups, participant and parent and guardian surveys, and administrative data, and it will use the same descriptive and qualitative coding methods described above for the participation analysis.
Outcomes analysis. The outcomes analysis will describe participant, employer, provider, and system outcomes potentially affected by the SWTCIE projects. The projects intend to affect a wide range of outcomes across categories, including employment and earnings, health and well-being, attitudes and expectations, and VR policies and practices. The SWTCIE national evaluation will describe outcomes potentially affected by the projects regardless of whether a valid comparison group is available to estimate project impacts. This analysis will rely on data from interviews and focus groups, participant and parent and guardian surveys, and administrative data, and it will use the same descriptive and qualitative coding methods described above.
Impact analysis. The impact analysis will generate evidence about the effects of the SWTCIE projects on key demonstration-related outcomes. To understand whether any observed changes in these outcomes are attributable to the projects (rather than to other factors such as the economic climate), we will compare the outcomes of participants with those of a comparison group whose outcomes represent what would have happened to the participants had they not enrolled in a project. This analysis will rely on RSA-911 administrative data along with the Social Determinants of Health (SDOH) database.
We will use a quasi-experimental design to estimate SWTCIE project impacts on participants. For each SWTCIE project, the study team will construct a matched comparison group consisting of VR participants who did not enroll in the project but are affiliated with the same state VR agency. Using exact matching and propensity score techniques, the study team will construct comparison groups with members who resemble participants across key characteristics recorded in RSA-911 and SDOH data. The study team’s use of different matching techniques will depend on the composition and size of the treatment groups and the pools of potential comparison group members. We will control for observable characteristics using regression analysis of outcomes.
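As an illustration of this matching approach, the sketch below pairs each participant with the nearest nonparticipant on the estimated propensity score within an exact-matching cell (for example, within the same state VR agency). The column names and the logistic-regression specification are hypothetical assumptions, not the study's actual model.

```python
# A minimal sketch of exact matching plus nearest-neighbor propensity-score
# matching; column names and model specification are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def build_comparison_group(df, exact_col, covariates, treat_col="swtcie_participant"):
    """Return the rows of nonparticipants matched 1-to-1 (with replacement)
    to participants on the propensity score within exact-matching cells."""
    # Estimate the propensity score from observed RSA-911 / SDOH covariates.
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat_col])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    matched_index = []
    for _, cell in df.groupby(exact_col):        # exact matching, e.g., by VR agency
        treated = cell[cell[treat_col] == 1]
        pool = cell[cell[treat_col] == 0]
        if treated.empty or pool.empty:
            continue
        nn = NearestNeighbors(n_neighbors=1).fit(pool[["pscore"]])
        _, positions = nn.kneighbors(treated[["pscore"]])
        matched_index.extend(pool.index[positions.ravel()])
    return df.loc[matched_index]
```

Outcomes for the matched file would then be compared in a regression that also controls for the matching covariates, consistent with the regression adjustment described above.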
Benefit-cost analysis. The benefit-cost analysis will show whether the benefits of each project were large enough to justify its costs. To provide a comprehensive accounting of the net benefits of each project, we will analyze benefits and costs from four perspectives: participants, the VR agency, the state (everything associated with the state except VR), and the Federal government. This analysis will rely on data from interviews and focus groups, participant and parent and guardian surveys, and administrative data to calculate anticipated benefits and costs from each project. We will collect and analyze information to generate per-participant costs for each SWTCIE project during Project Year 4 using a template developed to gather cost information from each project across four categories: labor, direct, indirect, and unbudgeted costs. Drawing from various data sources, we will then estimate project participant benefits for each SWTCIE project. To estimate expected benefits in future years and consider whether benefits could potentially exceed costs, we will make assumptions about which program impacts may endure and extrapolate benefits into five- or 10-year increments.
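The sketch below illustrates the extrapolation step with a simple present-value calculation. The discount rate, per-participant cost, and annual benefit figures are illustrative assumptions, not study estimates.

```python
# A minimal sketch of extrapolating per-participant benefits over 5- and
# 10-year horizons; all dollar figures and the discount rate are illustrative.
def present_value(annual_benefit, years, discount_rate=0.03):
    """Sum of discounted annual benefits over the horizon."""
    return sum(annual_benefit / (1 + discount_rate) ** t for t in range(1, years + 1))

per_participant_cost = 12_000   # hypothetical cost from the cost template
annual_benefit = 2_500          # hypothetical per-year benefit (e.g., earnings gains)
for horizon in (5, 10):
    net_benefit = present_value(annual_benefit, horizon) - per_participant_cost
    print(f"{horizon}-year net benefit per participant: ${net_benefit:,.0f}")
```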
B.2.3 Degree of Accuracy Needed for the Purpose Described
The evaluation's ability to detect impacts will vary substantially across the SWTCIE projects because enrollment goals range from 228 to 1,250 participants. In Exhibit B.3, we present the estimated minimum impact the study will be able to detect for each project, based on its enrollment goal.
Before conducting any subgroup analyses, the study team will consider statistical power and experimental group balance. As indicated in Exhibit B.3, analyses for SWTCIE projects with relatively small sample sizes will have limited statistical power. Because they examine fewer participants, subgroup analyses will have even less statistical power than the main analyses. The study team will carefully consider which subgroup analyses to perform and only conduct subgroup analyses with adequate statistical power. Similarly, the study team will examine balance among experimental groups before estimating impacts for subgroups. If the treatment group and the comparison group differ substantially in size, any impact analysis will have a limited ability to detect effects.
Exhibit B.3. Minimum detectable effect for combined treatment and comparison group sample sizes
| Combined treatment and comparison groups (sample size) | Minimum detectable effect (percentage points) |
| --- | --- |
| 3,000 | 3.0% |
| 2,000 | 3.7% |
| 1,000 | 5.3% |
| 800 | 5.9% |
| 600 | 6.8% |
| 400 | 8.4% |
| 200 | 11.8% |
| 100 | 16.8% |
| 50 | 23.7% |
Note: These calculations assume the following: a one-tailed test, α = 0.1, statistical power of 0.8, equal-sized treatment and comparison groups, a binary outcome with a mean of 19.1 percent for the comparison group (based on the employment rate for people with intellectual and developmental disabilities), 15 percent of the variation in outcomes explained by exogenous variables, and 15 percent of the variation in treatment group status explained by exogenous variables.
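The sketch below reproduces the Exhibit B.3 values (to within rounding) under the stated assumptions, using a standard regression-adjusted minimum detectable effect formula; the specific formula is an assumption about how the figures were produced, not documentation of the study's power calculations.

```python
# A minimal sketch of the minimum detectable effect (MDE) calculation behind
# Exhibit B.3; the regression-adjusted formula used here is an assumption.
from scipy.stats import norm

def mde(n, p=0.191, alpha=0.10, power=0.80, r2_outcome=0.15,
        r2_treatment=0.15, share_treated=0.50):
    z_total = norm.ppf(1 - alpha) + norm.ppf(power)       # one-tailed test
    outcome_variance = p * (1 - p) * (1 - r2_outcome)      # binary outcome
    design = n * share_treated * (1 - share_treated) * (1 - r2_treatment)
    return z_total * (outcome_variance / design) ** 0.5

for n in (3000, 2000, 1000, 800, 600, 400, 200, 100, 50):
    print(f"N = {n:>5}: MDE = {100 * mde(n):.1f} percentage points")
```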
The SWTCIE projects are best analyzed as 14 separate projects (instead of as one large project). Although they have the same broad goals, these projects have different intervention components, implementation strategies, and other key characteristics. This variation across projects makes it difficult to draw conclusions from quantitative analyses that combine project samples.
The study team might use statistical tests to compare impacts and outcomes across SWTCIE projects. Although cross-project comparisons cannot disentangle the specific project components responsible for differences in outcomes, they can offer helpful insights about the most successful intervention and implementation strategies. However, the study team will use statistical tests to make cross-project comparisons only in certain situations. Specifically, the outcomes being compared must be identical; that is, they must be measured in the same way at the same point in time during a project's follow-up period. In addition, a cross-project comparison must inform our understanding of how differences in project interventions and implementation affect variations in project outcomes or impacts. Because of these considerations, the study team will limit cross-project statistical tests to the most critical measures, such as primary outcomes.
For the SWTCIE national evaluation, the study team will use two strategies to address the multiple comparisons issue. First, the study team will consider statistical tests for each SWTCIE project separately. This approach reduces the magnitude of the multiple comparisons issue because it compares fewer statistical tests simultaneously. Second, the study team will use a small set of primary outcomes for the main assessment of each project’s efficacy. The primary outcomes are spread across five domains the projects aim to affect: (1) employment, (2) earnings, (3) education, (4) service use, and (5) satisfaction. Because the set of primary outcomes is small, the joint Type 1 error rate across the primary outcomes is roughly equivalent to the individual Type 1 error rates. The primary outcomes will be included in both the outcomes and impact analyses. If a primary outcome is not available in the RSA-911 data for the impact analysis, the study team will compare that outcome’s baseline and intervention period values among treatment group members as part of the outcomes analysis. Secondary outcomes include all measures other than the primary outcomes. The study team will not adjust the p-value thresholds for the secondary outcomes to address the multiple comparisons issue as they are considered exploratory.
As part of the SWTCIE national evaluation, the study team will examine differences and impacts among various subgroups. SWTCIE project participants will have differing characteristics (such as demographics, disability type, and SDOH), and these characteristics could generate different responses to the intervention. Because these relationships can inform both our interpretation of results and future policy and practice recommendations, the study team will examine key correlations between important characteristics and outcomes.
Based on their status at enrollment, the study team will examine variations in primary outcomes across six key subgroups: (1) age, (2) race and ethnicity, (3) geographic location, (4) subminimum wage employment (SWE) status, (5) service needs, and (6) the social vulnerability index (see Exhibit B.4). The first three subgroups (age, race and ethnicity, and geographic location) represent characteristics that are strongly correlated with VR outcomes. The fourth subgroup, SWE status, is a key factor for the demonstration and might be strongly predictive of service use, employment, and other outcomes. The study team has included service needs (as measured by a rating scale for activities of daily living [ADL]) because participants' abilities to obtain competitive integrated employment might vary substantially based on their ability to perform basic activities. Finally, the social vulnerability index, which reflects SDOH, captures community-level socioeconomic vulnerability by ranking each census tract on 16 social factors, including unemployment, poverty, household composition, vehicle ownership, and other factors.
Exhibit B.4. Analytic subgroups for the SWTCIE national evaluation
| | |
| --- | --- |
| Age | SWE status |
| Race and ethnicity | Service needs |
| Geographic location | Social vulnerability index |
SWE = subminimum wage employment.
B.2.4 Unusual Problems Requiring Specialized Sampling Procedures
The study team does not anticipate any unusual problems that require specialized sampling procedures.
B.2.5 Use of Periodic (Less Frequent than Annual) Data Collection Cycles to Reduce Burden
To minimize burden, the study team will collect only data that cannot be obtained from RSA-911 data or other administrative records. The team will collect staff records from SWTCIE providers once per year for three years (2024, 2025, and 2026). The team will also conduct on-site observations as part of the qualitative data collection, which will include interviews and focus groups, in 2024 and 2026. The team will administer the participant baseline survey from 2024 through 2026, based on project enrollment.
The team will complete the following data collection activity only once:
Administer the parent and guardian survey (Fall 2026).
By necessity, the study team will collect other data more frequently. To ensure accurate reporting and allow for continuous monitoring, the team will regularly meet with grantees to discuss the evaluation as well as collect project documents that may not be included in administrative records.
B.3. Describe methods to maximize response and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.
B.3.1 Maximizing Response Rates
As mentioned in B.1, the study team expects up to 7,500 youth (ages 18 to 24) and adults (ages 25 and older) to complete the baseline survey. The study team expects a 70 percent response rate on the participant follow-up survey and the parent and guardian survey for two reasons. First, recent data collection experiences with similar populations (Mann et al. 2021; Matulewicz et al. 2017; Sevak et al. 2021) revealed challenges with tracking respondents over time and hesitation to participate in a federal survey. Second, response rates to surveys have generally declined over time (Williams and Brick 2018; Dutwin and Buskirk 2021).
Across all aspects of data collection, the study team will use strategies that have proved successful on other RSA studies, including the following:
Instrument development. Before developing the instrument, the study team will clearly define research objectives so that the survey does not impose undue burden with questions that do not inform these objectives. The study team will keep surveys and interview protocols short and targeted. As longer instruments increase respondent burden, including anxiety and fatigue, they can lead to lower completion rates and reduce data integrity and validity. For all questions, the study team will use plain language and avoid complexity.
Specific methods for maximizing response rates in the collection of data in each study component are as follows:
Survey programming. The study team will program skip patterns in all surveys to ensure respondents do not have to read and respond to questions that are not applicable to them. The study team will also program the survey to provide respondents with information about the survey length and a progress bar that indicates completion percentage. Finally, the study team will program the survey so that there are minimal text instructions and use visual cues to signal to the respondent how to continue progressing through the survey.
Outreach. The study team will consistently brand mail and email outreach with logos from RSA, Mathematica and Morris Davis and Company (MDAC; the organization Mathematica will use for survey data collection) to assure recipients of the legitimacy of the data collection effort. The study team will personalize outreach to the desired respondent using their name. All outreach materials will stress the importance of the potential respondent’s participation and the confidentiality of their response. In addition, the study team will provide a toll-free number to address any concerns or questions about the survey.
Offering the survey in both web and computer-assisted telephone interviewing (CATI) formats. The participant survey will primarily be administered as a web survey. Respondents will be notified of the survey by mail and email. They will be encouraged to complete the mobile-optimized online instrument but can call a toll-free number to complete it by telephone if they prefer. Respondents will also receive mail and email reminders. If the survey is not completed within four weeks of initial contact, trained telephone interviewers will follow up with nonrespondents to complete the instrument by phone.
Notifying and sending reminders to nonrespondents. The team will send reminder emails throughout the fielding period. The team will also mail a reminder postcard, reminder letter, and refusal conversion letters to those who have not completed interviews or who mildly refuse to participate. Nonrespondents will receive calls at different times of the day and days of the week to increase the likelihood of reaching participants when they are available.
Utilizing trained interviewers. To develop the skills necessary to encourage survey participation, all telephone interviewers receive general interviewer training before being assigned to a study. This training covers essential interviewing skills, including probing, establishing rapport, avoiding refusals, eliminating bias, and being sensitive to at-risk and special populations. In addition, all interviewers will receive study-specific training that reviews study goals and instruments and conveys best practices for interviewing for this study.
Leveraging experience in refusal conversion. The team has developed and refined methods to build rapport and overcome the reluctance of sample members to participate in interviews. Trained interviewers will use multi-pronged approaches to focus on preventing and converting refusals when conducting the telephone survey portion of the data collection. The strategies aim to convince sample members that (1) the study is legitimate and worthwhile, (2) their participation is important and appreciated, and (3) the information provided will be held private.
Incentives. We will not offer incentives for the baseline survey because it will be completed as part of participant registration. We will provide a $30 incentive to those who complete the participant follow-up survey or the parent and guardian survey and to those who participate in interviews and focus groups as part of site visits. All incentives will be delivered using Tango Cards, which allow respondents to select the vendor gift card of their choice. The study team will create a personalized, project-specific email template that includes a thank you message, instructions, a link for redeeming the electronic gift card (e-gift card), a Tango help desk phone number, and an MDAC email address and phone number for respondents who need help or have not received their gift cards in a timely manner. After choosing how they will redeem the e-gift card, the respondent will receive a second email from the chosen vendor (or vendors). This email contains the actual gift card, which may include a PIN, a printable bar-coded gift card image, or both. For respondents who lose or cannot access their Tango or gift card redemption links, MDAC can retrieve and forward the links. Tango Card uses two-factor authentication, and each MDAC staff member has their own login with tailored permissions.
B.3.2 Dealing with Non-Response
The study team and its partner MDAC will closely monitor completion rates by SWTCIE project and respondent characteristics with weekly reports. At the respondent level, MDAC will follow up with nonrespondents via email and phone to encourage survey completion. If MDAC finds that some projects have lower response rates, it will reach out to Mathematica and SWTCIE project staff to understand potential reasons for nonresponse. For example, with the follow-up survey, participants may no longer use project services, so the projects may have outdated contact information. In those instances, we would consider a more intensive locating effort. Where needed, the study team would ask staff to promote the surveys. As we track response rates, the study team will assess whether a nonresponse bias analysis is needed and adjust our approach accordingly.
Survey nonresponse, missing data, and attrition
Where survey or administrative data sources have missing values, the study team will carefully analyze the affected variables. By accounting for the missing data, the study team will avoid biasing impact estimates and outcome measures for the SWTCIE national evaluation.
The study team will use various strategies to account for missing data across analyses. For example, if an exogenous variable in a regression equation is missing a value for an observation, we will use mean imputation to assign a value. If an outcome measure is missing at random, we will omit that observation from the analysis of that outcome. However, if an outcome is missing, but not at random (for example, when survey respondents skip certain questions based on their previous answers), we cannot omit the observation from the analysis without biasing our findings. In these situations, the study team will use more advanced techniques—such as multiple imputation—to account for patterns of missing data or reassess whether those outcomes can be meaningfully analyzed.
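The sketch below illustrates the first two of these rules with hypothetical column names; it is not the study's analysis code, and more complex patterns would be handled with multiple imputation as described above.

```python
# A minimal sketch of the simpler missing-data rules described above:
# mean-impute missing exogenous covariates, then drop observations whose
# outcome is missing (appropriate only when the outcome is missing at random).
import pandas as pd

def prepare_analysis_file(df: pd.DataFrame, covariates, outcome):
    df = df.copy()
    for col in covariates:                        # mean imputation for covariates
        df[col] = df[col].fillna(df[col].mean())
    return df.dropna(subset=[outcome])            # omit rows with a missing outcome
```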
For the impact analysis, the study team will carefully consider how to analyze unrealized outcomes in the RSA-911 data. Most employment outcome measures in the RSA-911 data are captured at or after program exit—that is, after a participant stops receiving VR services. However, we expect that most participants will still have an open VR case when we assess project impacts and outcomes in 2026-2027. The study team plans to analyze the exit outcomes unconditionally, comparing rates of positive outcomes with those of negative or unrealized outcomes. The study team will explore other methodological approaches for analyzing different variables to obtain impact estimates at the time of program exit that better account for unrealized outcome data.
B.4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.
To inform data collection activities for which clearance is being requested in this submission, the study team pretested the participant baseline survey, the participant follow-up survey, and the staff records and project cost data forms. The purpose of the pretests was to confirm burden and identify questions that were unclear to study respondents or where respondents might have difficulty providing the requested information. Average respondent burden for the pretested instruments was as expected, except for the follow-up survey (described below). Across all instruments, pretest findings were used to revise and improve the wording of specific instructions and items. Procedures and findings for each respondent type are summarized below:
Participant Baseline Survey.
The baseline survey was pretested with five participants: one individual with intellectual or developmental disabilities (I/DD) completing the survey on their own, two parents assisting their child with the survey, and two parents of a child with I/DD completing the survey on behalf of their child. After three pretests, the baseline survey was revised and then pretested an additional two times. During each pretest, participants completed the online survey via a link shared during the meeting. After the online survey was completed, the study team debriefed with the participant to review any issues they may have encountered and gathered additional feedback on the survey. Interviewers followed a protocol to probe certain items to ensure they were phrased clearly and collected accurate information. The expected respondent burden for the baseline survey was 15 minutes. Respondent burden averaged 9.2 minutes (6.7 minutes before revisions; 13 minutes after revisions), about 6 minutes shorter than expected.
The study team revised the baseline survey in response to the pretest results. The revisions included providing more detailed instructions at certain questions, removing questions that were challenging for participants to answer and not necessary for the study analysis, adding supplemental questions or response options to help to increase clarity of existing questions, providing additional definitions for certain data elements, and revising questions to ensure they align with the follow-up survey.
Participant and Parent and Guardian Follow-up Survey.
The follow-up survey was initially pretested with three participants. Two of the three initial pretest participants were individuals with I/DD who completed the survey with the help of a career counselor; the third was an individual who received assistance from their parent to complete the survey. After those pretests, the follow-up survey was revised, and the survey was administered two additional times. The process was similar to the process used for the baseline survey. Expected respondent burden for the follow-up survey was 15 minutes. Respondent burden averaged 22 minutes (19.17 minutes before revisions; 26.25 minutes after revisions), about 7 minutes longer than expected. The burden table in Table A.4 of Part A has been adjusted to account for this increase.
We revised the follow-up survey in response to the pretest results. The revisions included providing more detailed instructions and rationale for the data collection, removing certain variables that were challenging for participants to answer and not necessary for the study analysis, adding certain variables that helped to increase the clarity of existing questions, providing additional definitions for certain data elements, revising variables to increase clarity and match changes to the baseline survey, and reordering variables to increase clarity.
Staff records form.
Three projects reviewed the staff records form. The study team asked project directors to review the form and participate in a debrief interview by phone to discuss any issues they may have encountered. Interviewers followed a protocol to probe on specific questions to be sure the items were phrased clearly and collected accurate information. Expected respondent burden for the staff records form was 2 hours, which aligned with staff expectations.
Minor revisions were made to the staff records form, including providing more detailed instructions and rationale for the data collection, removing certain variables that were challenging for projects to collect and not necessary for the study analysis, and providing additional definitions for certain data elements.
Project cost data form.
Three projects reviewed the project cost data form using the same process as the staff records form. Expected respondent burden for the project cost data form was 16 hours, which aligned with staff expectations.
No changes were made to the project cost data form based on pretest input.
The study team did not pretest the site visit protocols because they are closely modeled on similar protocols that have been used effectively in other studies.
After finalizing the instruments, the study team will program the survey instruments for administration via computer-assisted web interviewing methods. Before deployment, the team will test the survey instruments to ensure they function as designed. This will include extensive manual testing for skip patterns, fills, and other logic. To reduce data entry errors, numerical entries will be checked against an acceptable range, and, where appropriate, prompts will be presented for valid but unlikely values. This testing will increase the accuracy of data collected while minimizing respondent burden.
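The sketch below illustrates the kind of numeric range check described above; the field name, bounds, and prompt behavior are hypothetical examples, not the programmed survey's actual specifications.

```python
# A minimal sketch of a numeric range check with hard and soft limits; the
# field name and bounds are hypothetical. Hard limits reject impossible
# values; soft limits prompt confirmation for valid but unlikely ones.
HOURS_PER_WEEK_LIMITS = {"hard": (0, 100), "soft": (0, 60)}

def check_hours_per_week(value: float) -> str:
    hard_lo, hard_hi = HOURS_PER_WEEK_LIMITS["hard"]
    soft_lo, soft_hi = HOURS_PER_WEEK_LIMITS["soft"]
    if not hard_lo <= value <= hard_hi:
        return "reject"    # outside the acceptable range; re-prompt the respondent
    if not soft_lo <= value <= soft_hi:
        return "confirm"   # valid but unlikely; show a confirmation prompt
    return "accept"
```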
B.5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other persons who will actually collect and/or analyze the information for the agency.
| Name | Title | Telephone number |
| --- | --- | --- |
| Mathematica staff | | |
| David Mann | Project Director | (609) 275-2365 |
| Noelle Denny-Brown | Principal Researcher | (617) 301-8987 |
| Marisa Shenk | Researcher | (202) 838-3639 |
| Lisbeth Goble | Principal Survey Researcher | (312) 994-1016 |
| MDAC staff | | |
| Kim Dorazio | Vice President | (215) 790-8903 |
| RSA staff | | |
| Diandrea Bailey, PhD | Project Officer, Contracting Officer Representative | (202) 245-6244 |
| Sheryl Fenwick | Budget Analyst, Alternate Contracting Officer Representative | (202) 245-6345 |
| Dr. Ashley Brizzo | Director, Training and Service Programs Division | |
| Douglas Zhu | Chief, Training Programs Unit | (202) 987-0127 |
| Cassandra Shoffler | Project Officer, DIFPE Program Manager | (202) 245-7827 |
REFERENCES
Dutwin, D., and T.D. Buskirk. "Telephone Sample Surveys: Dearly Beloved or Nearly Departed? Trends in Survey Errors in the Era of Declining Response Rates." Journal of Survey Statistics and Methodology, vol. 9, 2021, pp. 353–380.
Matulewicz, H., K. Donelan, and F. Crigler. "Challenges and Opportunities in Engaging Low-Income Populations with Disabilities in Web Surveys." Presented at the American Association for Public Opinion Research Conference, New Orleans, May 2017.
Williams, D., and J.M. Brick. "Trends in U.S. Face-to-Face Household Survey Nonresponse and Level of Effort." Journal of Survey Statistics and Methodology, vol. 6, 2018, pp. 186–211.