Non-Substantive Change to Information Collection Request 201209-1205-001

Follow-Up Survey for Self-Employment Training (SET) Demonstration Evaluation

OMB Control No. 1205-0505

The Employment and Training Administration (ETA) is proposing a non-substantive change to ICR 201209-1205-001 for the Self-Employment Training (SET) Demonstration Evaluation (OMB Control No. 1205-0505). The SET Demonstration is a reemployment program targeted toward dislocated workers, as defined by the Workforce Investment Act (WIA), who are interested in starting or growing a business in their fields of expertise. The requested change pertains to the “Follow-Up Survey of Successful Applicants,” which is to be administered 18 months after study participants apply to the SET demonstration program. The follow-up survey is a critical source of information for evaluating the impacts of the program, and ETA expects that fielding will begin at the start of 2015 for the earliest enrollment groups. Based on resource considerations, ETA will pursue only a web mode when fielding the survey; a telephone mode was previously anticipated but is no longer feasible.

Because a web-only survey data collection has the potential to lower response rates for a given level of burden, ETA proposes to streamline the content of the follow-up survey in a way that reduces respondent burden while retaining the core information needed for the impact evaluation. These changes entail (1) reducing the complexity of questions about some topics and (2) removing other questions about topics that are more descriptive, of secondary importance, and/or covered at least partly by other dimensions of the data collection. ETA and the contractor have identified this strategy as the only feasible way to collect the necessary information to calculate program impacts on the outcomes of greatest importance while achieving adequate response rates and data quality within the resources available for this effort. Without such a change, nonresponse and data-quality issues arising from respondent break-off and fatigue could be problematic, especially when using only a web-based data collection mode.

The requested changes to the survey instrument are concentrated on shortening the average administration time from 60 to 20 minutes. The changes would not affect the sample frame or the fielding timeline, and no changes are proposed to the dollar amounts to be tested in an incentive payment experiment previously approved by OMB.1 In addition, no changes are proposed to the other four data collection efforts under this evaluation (an application package, program participant records, site visit interviews, and case study interviews) that were previously approved by OMB as part of ICR 201209-1205-001.

The next three sections describe in detail the rationale for reducing the length of the follow-up survey, the scope of the proposed reduction, and the resulting change in respondent burden. References to literature cited in this justification follow at the end. A copy of the proposed revised survey instrument is included as an enclosure to this memo, as is a table summarizing the differences between the original and revised versions of the instrument.

1. Rationale for Reducing the Length of the Follow-Up Survey

During the design phase of the SET Evaluation, a 60-minute follow-up survey was developed for administration using a mixed-mode (interviewer-assisted telephone and self-administered web) fielding approach. Based on a subsequent assessment of resources, ETA and the contractor determined that a web-only fielding approach would be required because a telephone mode could not be supported within the available evaluation budget. Although this shift toward exclusive use of a self-administered web mode has some cost benefits, it also has the potential to reduce response rates and increase survey break-offs because of more limited encouragement from human interviewers compared to an interviewer-assisted telephone mode (National Research Council, 2013). Based on recent experience fielding surveys to target populations similar to SET participants, it is the assessment of ETA and the contractor that, unless the changes requested here are approved, the web mode will not achieve a high enough response rate—ideally 80 percent or higher—to ensure that expected impacts can be statistically detected and that the impact estimates have a low potential for nonresponse bias.

Given the available resources, the main way that ETA has identified to achieve sufficiently high response rates for the SET follow-up is to reduce the survey’s length and complexity, retaining only the core information needed to calculate key impact measures and contextualize important findings. Experimental studies of web surveys have shown that reducing administration times can increase initial participation and, in some cases, reduce break-offs (Galesic and Bosnjak 2009; Yan et al. 2011). Nonexperimental research focusing on web surveys has also shown that break-offs tend to increase with survey duration and are more likely to occur around long or complex questions (Peytchev 2009). An additional benefit of reducing survey length and complexity is that it is expected to improve the quality of survey responses, since research also suggests that respondents tend to put less time and care into answering questions as they progress through web and phone surveys (Galesic and Bosnjak 2009; Roberts et al. 2010).

2. Scope of Proposed Revisions

ETA and the contractor undertook a comprehensive review of all survey items with the goal of streamlining the follow-up instrument to reduce respondent burden while preserving the core information for the impact analysis. As part of this process, four types of content were identified for removal: (1) questions about outcomes that are of substantive interest, but are not critical for the impact analysis; (2) questions with a relatively high cognitive complexity and/or intrusiveness that could be particularly problematic for break-offs or reduced response quality; (3) questions unlikely to produce reliable analysis measures based on a web-only instrument; and (4) questions about topics being partly addressed through other qualitative data sources approved under ICR 201209-1205-001.

This effort reduced the estimated survey administration time by approximately two-thirds, substantially lowering the burden per respondent. The OMB-approved follow-up questionnaire included seven major content areas divided into sections, along with an additional section for collecting updated contact information. The burden estimate for the original version was 60 minutes per respondent. The revised survey is estimated to require 20 minutes per respondent and covers the six most important content areas:

  1. Current employment status

  2. Receipt of self-employment assistance services

  3. Business development activities

  4. Self-employment experiences

  5. Experiences in wage and salary employment

  6. Job satisfaction and program participation

An initial screening section and a streamlined contact-information section are also retained in the revised instrument. The enclosed table summarizes the differences between the original and proposed revised survey instrument by content area. Although most changes removed or streamlined questions, three questions were added to capture important aspects of the target population’s experiences after random assignment that are not available from other sources.

The revised version omits one content area from the earlier version of the survey, which covered descriptive topics such as marital status and household composition that are less critical than the primary impact measures needed for the evaluation. The omitted section also included items about household income, economic hardships, and receipt of public income support—topics that might be considered quite sensitive and could result in increased break-offs or lower-quality information. Several of these topics could also be partially addressed through the evaluation’s case studies or a later collection of administrative data, or both. Moreover, removing the content area is expected to result in a better overall response rate, while still allowing the impacts of primary importance to be reliably estimated from follow-up survey data.

3. Revised Estimates of Burden

The total hour burden of the revised follow-up survey proposed to OMB is expected to be 800 hours. This figure is based on the assumptions that (1) approximately 2,400 sample members will complete the follow-up survey (as in the clearance package for the survey, this figure assumes a response rate of 80 percent from an initial sample of 3,000 study members); and (2) the survey will take, on average, 20 minutes to complete. The revised estimate of 20 minutes is based on first scoring all questions in the initial version according to their contribution to burden, and then adjusting the initial burden estimate downward to reflect the removed content. In a final step, the burden estimate was adjusted back upward slightly to account for the small number of questions that were added; this adjustment used the burden scores of questions of similar complexity from the original instrument. Using these assumptions, the estimated total hour burden for this data collection effort is calculated as 2,400 × (20/60) = 800 hours. As shown in Table 1, this is a 1,600-hour reduction from the burden estimate for the initial version of the follow-up survey approved by OMB.2
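As a quick check, the burden arithmetic above can be reproduced in a few lines of Python (a minimal sketch; the variable names are illustrative, and every figure comes from this memo):

```python
# Sketch of the total-burden calculation for the revised survey.
# All figures are taken from the memo; variable names are illustrative.
initial_sample = 3000              # study members in the survey sample
response_rate = 0.80               # assumed response rate
minutes_per_response = 20          # revised average administration time

completed = initial_sample * response_rate          # 2,400 completes
total_burden_hours = completed * minutes_per_response / 60
print(total_burden_hours)                           # 800.0

# Reduction relative to the 60-minute version previously approved by OMB
initial_burden_hours = completed * 60 / 60          # 2,400 hours
print(initial_burden_hours - total_burden_hours)    # 1600.0
```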


Table 1. Total Hour Burden Estimates for Initial and Revised Versions of the SET Follow-up Survey

Version of Follow-Up Survey     | Respondents                                             | Number of Responses/Instances of Collection | Frequency of Collection | Average Time per Response | Total Hour Burden
--------------------------------|---------------------------------------------------------|---------------------------------------------|-------------------------|---------------------------|------------------
Initial OMB-Approved Version    | Eligible applicants who went through random assignment | 2,400                                       | Once                    | 60 minutes                | 2,400 hours
Revised Version Proposed to OMB | Eligible applicants who went through random assignment | 2,400                                       | Once                    | 20 minutes                | 800 hours


The estimated annualized number of burden hours associated with the revised follow-up survey is 533, which is 1,067 fewer hours than estimated for the initial version of the survey. As with the original clearance package, a survey fielding period of 18 to 24 months is planned. The annualized burden hours are calculated based on the shorter, 18-month duration: 800 / (18/12) = 533. If the field period lasts longer, annualized burden hours would be lower still. Applying a wage rate of $17.28 implies an annualized burden cost for the follow-up survey of $17.28 × 533 = $9,210 in 2014 dollars.3 Hence, as indicated in Table 2, the annualized cost of burden in 2014 dollars is reduced from $27,648 to $9,210.
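The annualization arithmetic can be checked the same way (again a sketch, assuming the 18-month lower bound on the field period; all figures are from this memo):

```python
# Annualized burden hours and cost for the revised survey,
# using the 18-month lower bound on the fielding period.
total_burden_hours = 800
field_period_years = 18 / 12               # 18 months
annualized_hours = total_burden_hours / field_period_years
print(round(annualized_hours))             # 533

wage_rate = 17.28                          # 2014 dollars (see footnote 3)
annualized_cost = round(annualized_hours) * wage_rate
print(round(annualized_cost))              # 9210
```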

Table 2. Annualized Burden Cost Estimates for Initial and Revised Versions of the SET Follow-up Survey

Version of Follow-Up Survey     | Respondents                                               | Total Hour Burden | Length of Collection Period^a | Annualized Number of Burden Hours | Average Hourly Cost^b | Annualized Dollar Cost of Burden^b
--------------------------------|-----------------------------------------------------------|-------------------|-------------------------------|-----------------------------------|-----------------------|-----------------------------------
Initial OMB-Approved Version    | Successful applicants who went through random assignment | 2,400             | 18 months                     | 1,600                             | $17.28                | $27,648
Revised Version Proposed to OMB | Successful applicants who went through random assignment | 800               | 18 months                     | 533                               | $17.28                | $9,210

^a The numbers listed in this column represent the lower bounds of the duration of each collection period, as discussed in the main text, which implies that the table presents upper bounds on the annualized burden hours and costs.

^b As noted in the main text, burden cost calculations are done in 2014 dollars and assume a wage rate of $17.28 per hour among participants in the SET Demonstration.

References

Galesic, Mirta, and Michael Bosnjak. “Effects of Questionnaire Length on Participation and Indicators of Response Quality in a Web Survey.” Public Opinion Quarterly, vol. 73, no. 2, Summer 2009, pp. 349–360.

National Research Council. Nonresponse in Social Science Surveys: A Research Agenda. Edited by Roger Tourangeau and Thomas J. Plewes. Panel on a Research Agenda for the Future of Social Science Data Collection, Committee on National Statistics, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press, 2013.

Peytchev, Andy. “Survey Breakoff.” Public Opinion Quarterly, vol. 73, no. 1, Spring 2009, pp. 74–97.

Roberts, Caroline, Gillian Eva, Nick Allum, and Peter Lynn. “Data Quality in Telephone Surveys and the Effect of Questionnaire Length: A Cross-National Experiment.” Institute for Social and Economic Research (ISER) Working Paper Series No. 2010-36. Essex, England: ISER, University of Essex, 2010.

Yan, Ting, Frederick G. Conrad, Roger Tourangeau, and Mick P. Couper. “Should I Stay or Should I Go: The Effects of Progress Feedback, Promised Task Duration, and Length of Questionnaire on Completing Web Surveys.” International Journal of Public Opinion Research, vol. 23, no. 2, Summer 2011, pp. 131–147.



Enclosures (2)

1 Although a reduction in survey content will reduce burden, ETA and the contractor are still concerned about the potential for nonresponse issues with a web-only survey. Based on recent experience fielding surveys to target populations similar to SET participants, ETA and the contractor continue to expect that incentive payments will be needed to achieve high response rates when fielding this study’s follow-up survey. The incentive-payment experiment will help assess whether this expectation is borne out in the field effort for the SET Evaluation follow-up survey. Given the shift to a web-only fielding, the experiment will now test response-rate and timing effects of $50 versus $25 or $0, whereas the original experiment would have tested response-rate effects and a combination of timing and mode effects. As before, the test of $25 versus $0 would be purely for response-rate effects. As previously agreed with OMB, the contractor will summarize the results of the auxiliary incentive payment experiment in a memo to be provided to OMB.

2 With this change, the estimated total burden hours across all components of the information collection approved under ICR 201209-1205-001 would be reduced from 8,902 to 7,302.

3 Hourly wage rates were calculated using the public use dataset for the Growing America Through Entrepreneurship (GATE) demonstration based on members of the study’s control group whose characteristics at baseline were similar to the criteria used to identify dislocated workers for the SET Demonstration. At the 18-month follow-up survey (the midpoint of which was March 2006), the average wage rate among employed members of this GATE subgroup was $14.62, which translates to $17.28 in 2014 dollars after adjusting for inflation.
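
For reference, the inflation adjustment described in this footnote can be recovered from the two wage figures alone (a sketch; both dollar figures come from the memo, and no external price-index values are assumed):

```python
# Implied inflation factor behind the footnote's wage adjustment.
# Both dollar figures come from the memo; no CPI series is assumed.
wage_march_2006 = 14.62            # average wage among the GATE subgroup
wage_2014 = 17.28                  # memo's inflation-adjusted figure
implied_factor = wage_2014 / wage_march_2006
print(f"{implied_factor:.3f}")     # ~1.182, i.e., ~18.2% cumulative inflation
```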
