


Supporting Statement B

For the Paperwork Reduction Act of 1995: Request for an Extension for the Six-Month Follow-up Survey for the Job Search Assistance Strategies Evaluation




OMB No. 0970-0440





Revised January 2018



Submitted by:

Office of Planning,
Research & Evaluation

Administration for Children & Families

U.S. Department of Health
and Human Services


Federal Project Officer

Carli Wulff

Table of Contents

Part B: Collection of Information Employing Statistical Methods
  B.1 Respondent Universe and Sampling Methods
  B.2 Procedures for Collection of Information
  B.3 Methods to Maximize Response Rates and Address Non-response
  B.4 Tests of Procedures or Methods to be Undertaken
  B.5 Individuals Consulted on Statistical Aspects of the Design
  References

Part B: Collection of Information Employing Statistical Methods

In October 2013, the JSA Evaluation received OMB approval for data collection instruments used as part of the field assessment and site selection process (OMB No. 0970-0440). Instruments approved in that earlier submission included the Discussion Guide for Researchers and Policy Experts, the Discussion Guide for State and Local TANF Administrators, and the Discussion Guide for Program Staff. Data collection with these previously approved instruments is complete. OMB approved the next set of data collection forms—the Baseline Information Form, Staff Surveys, and Implementation Study Site Visit Guides—under the same control number on November 30, 2014. In February 2016, OMB approved three additional data collection efforts related to follow-up data collection from sample members – the Contact Update Form, the Interim Tracking Surveys, and the JSA six-month follow-up survey instrument – under the same approval number. Approval for these activities expires on February 28, 2018.

This submission seeks OMB approval for the continuation of the JSA six-month follow-up survey instrument. All other information collection under 0970-0440 will be complete by the original OMB expiration date of February 28, 2018. The extension to the follow-up survey is needed because the study enrollment period was longer than planned; enrollment was originally estimated to span 12 months but took 18 months to complete, leaving insufficient time to complete the six-month follow-up survey. This submission requests a four-month extension to allow individuals randomly assigned between June and August 2017 to complete the follow-up survey in the same timeframe as earlier enrollees.

The purpose of the survey is to document study participants’ JSA service receipt and experiences. Specifically, the survey will measure receipt of JSA services; participant knowledge and skills for conducting a job search; the nature of their job search process, including tools and services used to locate employment; and their job search outputs and outcomes (e.g., the number of applications submitted, interviews attended, offers received and jobs obtained).

The JSA six-month follow-up survey, administered to sample members by telephone, will be a key source for outcomes of interest in the JSA Evaluation. While the principal source of employment and earnings data for the impact study is quarterly Unemployment Insurance records from the National Directory of New Hires (NDNH) (maintained by the Office of Child Support Enforcement (OCSE) at HHS), this follow-up survey will provide critical information on additional measures of interest. This includes the content, mode, and duration of JSA services received; job characteristics related to job quality (e.g. wage, benefits, and schedule); factors affecting the ability to work; public benefit receipt beyond TANF; and household income. Evaluators will use the survey results in both the impact study, to collect data on key outcomes, and the implementation study, to document the JSA services received.

B.1 Respondent Universe and Sampling Methods

The respondent universe for this study reflects the sites chosen for the JSA Evaluation and the cash assistance applicants and recipients enrolled into the study in these sites. Site selection criteria for the impact study included:

  • Willingness to operate two JSA approaches simultaneously;

  • Willingness to allow a random assignment lottery to determine which individuals participate in which model;

  • Ability to conduct random assignment for a relatively large number of cash assistance applicants and/or recipients (at least 1,000 in each site);

  • Ability to implement the desired approach with fidelity and consistency—which may hinge on whether the site has policies and systems in place to ensure that tested services are implemented relatively uniformly in all locations that are participating in the evaluation;

  • Commitment to preventing study subjects from “crossing over” to a different JSA approach than the one to which they are assigned; and

  • Capability to comply with the data collection requirements of the evaluation.

The research team identified three sites to be included in both the implementation and impact studies (Genesee and Wayne Counties, MI; New York, NY; and Sacramento County, CA) and two sites with an implementation study only (Ramsey County, MN and Westchester County, NY). The JSA impact evaluation is using an experimental design to determine the effectiveness of contrasting JSA approaches. For the impact evaluation, TANF recipients were randomly assigned to one of two JSA approaches to measure the "value-added" of more intensive approaches; the evaluation does not include a true "no services" control group. The total sample size across the three sites participating in the impact study is 5,273. This represents a decrease from the original projection of 8,000 participants in the impact study sample, a reduction that results from fewer participating sites. See Section B.2.3 for a discussion of minimum detectable differential effects based on the final sample size. All enrolled participants randomly assigned to one of the two JSA models in a site will be part of the respondent universe. The target response rate for the six-month survey is 80 percent.

Exhibit B.1 shows the original sample size estimate and projected response rate for the six-month follow-up survey, the actual sample size and response rate target, and the remaining sample to be worked under the four-month extension that is the subject of this request.

Exhibit B.1: Sample Sizes and Response Rates by Instrument

Six-month Follow-up Survey                             Selected    Target Returned    Target Response Rate
Original Proposed Sample                               8,000       6,400              80%
Actual Sample Enrolled                                 5,273       4,218              80%
Remaining Sample to Complete Under Extension Period    957         766                80%





B.2 Procedures for Collection of Information

B.2.1 Sample Design

As described, no sampling is required for the six-month follow-up survey; all study participants will be selected for the follow-up survey.

B.2.2 Estimation Procedures

We start this section with a restatement of the JSA Evaluation research questions outlined in Section A.1.1 of Supporting Statement A. The evaluation will address the following principal research question:

  1. What are the differential impacts of alternative TANF JSA approaches on short-term employment and earnings outcomes?

In addition, the evaluation will address the following secondary research questions:

  2. What are the differential impacts of alternative TANF JSA models on: (a) job quality (including wages, work-related benefits, consistency and predictability of hours); (b) public benefits received; (c) family economic well-being; and (d) non-economic outcomes (including motivation to search for a job and psycho-social skills such as perseverance and self-efficacy)?

  3. What components of JSA services are linked to better employment outcomes?

  4. What are the job search strategies used by successful job seekers?

The first research question will be answered with administrative data and is therefore not discussed further here. The other research questions will require data from the six-month follow-up survey. Because these are secondary research questions that will be treated as exploratory rather than confirmatory, no multiple comparison adjustments will be made to the estimated incremental effects. Two-sided hypothesis tests with alpha = 0.05 will be used.

With regard to the second research question, the nature of the JSA approaches being studied will likely vary enough across sites to make it necessary to analyze each site separately when estimating the impacts of the JSA approaches. In each site, two alternate JSA approaches are being studied. To estimate the effect of assignment to one approach rather than the other, it would be possible simply to compute the difference in average outcomes across the two JSA approaches. Although the simple difference in means is an unbiased estimate of the incremental effect, we will instead estimate intent-to-treat (ITT) impacts using a regression model that adjusts the difference between the two groups' average outcomes by controlling for exogenous characteristics measured at baseline. Controlling for baseline covariates reduces distortions caused by random differences in the characteristics of the experimental arms' members and thereby improves the precision of the impact estimates, allowing the study to detect smaller true impacts. Regression adjustment also helps to reduce the risk of bias due to attrition from the follow-up data sample. We use the following standard impact equation to estimate the incremental effect of the more structured and time-intensive approach in a given site:

            y_i = α + δT_i + βX_i + ε_i

where

y_i is the outcome of interest (e.g., earnings, time to employment);

α is the intercept, which can be interpreted as the regression-adjusted mean outcome for the reference group (those with T_i = 0);

T_i is the treatment indicator (1 for individuals assigned to a particular (existing) JSA model; 0 for individuals assigned to the other JSA model);

δ is the incremental effect of the first JSA model (relative to the second JSA model);

X_i is a vector of baseline characteristics measured by the BIF (baseline information form) and centered around site-level means;

β is the vector of coefficients indicating the contribution of each baseline characteristic to the outcome;

ε_i is the residual error term; and

the subscript i indexes individuals.

We will use survey regression software, such as SAS PROC SURVEYREG, for the analysis of the incremental effects of the various JSA models on survey-measured endpoints so that weights can be used in the analysis. The weights will reflect inverse probabilities of response to the survey, modeled through logistic regression as a function of baseline characteristics measured by the BIF, as discussed further in Section B.3.4. With regard to clustering, although multiple offices will typically be involved in each study site, it appears unlikely that there will be more than a handful of cooperating offices within any single site. As a result, it will likely be infeasible to reflect the impact of office-level clustering on variances.
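To make the estimation procedure concrete, the sketch below (in Python, using statsmodels as a stand-in for the survey regression software named above) illustrates how the weighted impact regression could be fit for a single site. The file name, covariate list, outcome, and weight column are hypothetical placeholders for illustration, not the study's actual specification.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical analysis file: one row per survey respondent in one site.
    # Assumed columns: an outcome, a 0/1 treatment indicator, baseline (BIF)
    # covariates, and the nonresponse-adjustment weight from Section B.3.4.
    df = pd.read_csv("jsa_site_respondents.csv")

    covariates = ["age", "female", "prior_earnings", "hs_diploma"]  # illustrative BIF measures

    # Center covariates around the site-level mean, as described above.
    X = df[covariates] - df[covariates].mean()
    X.insert(0, "treatment", df["treatment"])
    X = sm.add_constant(X)

    # Weighted least squares approximates the survey-weighted fit of
    # y_i = alpha + delta*T_i + beta*X_i + e_i, with weights equal to the
    # inverse estimated response probabilities.
    fit = sm.WLS(df["outcome"], X, weights=df["nra_weight"]).fit(cov_type="HC1")

    print(fit.params["treatment"])  # delta-hat: estimated differential (ITT) impact
    print(fit.bse["treatment"])     # robust standard error

The production analysis would rely on the survey procedures noted above; this sketch only shows the shape of the computation.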

For the third research question, we plan two types of analysis. One will closely resemble the analyses for the second research question, but will focus on estimating differences in the level and content of job search services received by sample members in each of the two groups for each site. These will help interpret any significant findings related to the first and second research questions. The other type of analysis will involve developing models of earnings and TANF receipt in terms of use of various JSA services measured in the survey. These exploratory analyses will examine linkages between JSA services and employment and earnings. These analyses will be conducted on the pooled dataset and will be useful for developing recommendations for the next generation of JSA models.

For the fourth research question, we will also build a logistic model for employment in terms of job search strategies. We anticipate that this model may involve a fairly deep set of interactions. To try to minimize the danger of overfitting this model, we will reserve at least a fourth of the sample for testing models developed on the first portion of the sample.
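As a minimal sketch of this split-sample safeguard, the example below (Python, with hypothetical variable names for the job search strategy measures and the pooled data file) fits a logistic model on part of the sample and evaluates it on a reserved one-fourth holdout.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Hypothetical pooled file of survey respondents across sites.
    df = pd.read_csv("jsa_pooled_respondents.csv")

    # Illustrative job search strategy measures from the six-month survey.
    strategies = ["used_online_boards", "contacted_employers_directly",
                  "hours_searching_per_week", "applications_per_week"]
    X, y = df[strategies], df["employed_at_followup"]

    # Reserve one-fourth of the sample for testing, to limit overfitting of a
    # model that may include many interaction terms.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Assess predictive performance on the reserved portion only.
    print(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))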

B.2.3 Degree of Accuracy Required

Minimum detectable differential effects (MDDEs) between JSA approaches in a given site are provided in Exhibit B.2. The MDDEs are based on the final sample size in each site. They show that there is an 80-percent chance that the impact analysis will detect as statistically significant true differential effects on the employment rate of between roughly 5 percentage points (New York City and Michigan) and 11 percentage points (Sacramento). Larger differential effects than these will have a greater than 80-percent chance of detection. Smaller true impacts may elude detection; such smaller differences are also more likely to occur than differential effects of the magnitude shown for Sacramento, and possibly for the other sites, because one JSA approach must outperform the other in a given site for any differential effect to occur at all. In Sacramento, a true differential impact of 8.6 percentage points will have a 60-percent chance of detection, and one of 7.4 percentage points will have a 50-percent chance of detection.¹

MDDEs for illustrative survey outcomes are shown in the bottom panel of Exhibit B.2, where sample sizes are smaller. Although a response rate of 80 percent is the target in survey efforts using a mixed-mode fielding approach, the JSA six-month follow-up survey is being conducted entirely by telephone, so the expected response rate is approximately 60 percent; the MDDEs therefore use this more conservative estimate. Here, the smallest detectable differential impacts are expressed as "effect sizes," in standard deviation units, which makes the guidance universally applicable to all outcomes. For example, a 0.25 effect size represents a differential impact sufficient to move a person from the mid-point of the outcome distribution (the 50th percentile) to the 60th percentile, no matter what survey-measured outcome is under consideration. As the exhibit shows, the impact evaluation in New York City will have an 80-percent chance of detecting a differential impact half this size, 0.124 of a standard deviation. Survey analysis in Michigan will be nearly as discerning, with a minimum detectable differential effect of 0.142 of a standard deviation. Sacramento, on the other hand, has a much larger minimum detectable differential impact of 0.312 of a standard deviation, although true differences in impact for survey outcomes in Sacramento as small as 0.206 of a standard deviation will have a 50-percent chance of detection.

Exhibit B.2: Minimum Detectable Differential Effects (MDDEs) of JSA Approaches

Employment in second quarter after random assignment (confirmatory outcome): differential impact on percent employed
  New York City (N=2,680)      4.8 percentage points
  Sacramento (N=493)           11.2 percentage points
  Michigan (N=2,078)           5.5 percentage points

Survey outcomes: differential impact in standard deviation units (effect size)
  New York City (N=1,582)      0.124
  Sacramento (N=254²)          0.312
  Michigan (N=1,218³)          0.142

Note: MDDEs are based on two-tailed tests; they are intended for the detection of improvements but are also capable of detecting reversals. The assumptions and calculations are similar to those in Abt Associates (2014) for the evaluation of Pathways for Advancing Careers and Education (PACE) (OMB No. 0970-0397). The projected variance reductions due to the use of baseline variables are from Nisar, Klerman, and Juras (2012).
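For readers who wish to check or extend the exhibit, MDDEs of this kind follow the standard form MDDE = (z for the two-sided significance level + z for power) times the standard error of the differential impact. The sketch below is illustrative only: the 50-percent outcome prevalence, equal-size arms, and covariate R-squared of 0.20 are assumptions chosen because they approximately reproduce the employment-rate values in the exhibit; they are not the study's documented design parameters.

    from math import sqrt
    from scipy.stats import norm

    def mdde_binary(n_total, p=0.5, r2=0.20, alpha=0.05, power=0.80):
        """Approximate MDDE (in proportion units) for a binary outcome,
        two-sided test, equal-size experimental arms; r2 is the assumed
        variance reduction from baseline covariates."""
        n_arm = n_total / 2
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        se = sqrt(p * (1 - p) * (1 - r2) * (2 / n_arm))
        return z * se

    for site, n in [("New York City", 2680), ("Sacramento", 493), ("Michigan", 2078)]:
        print(site, round(100 * mdde_binary(n), 1), "percentage points")

    # Interpreting effect sizes: a differential impact of 0.25 standard
    # deviations moves a person from the 50th to roughly the 60th percentile,
    # since the standard normal CDF at 0.25 is about 0.599.
    print(round(norm.cdf(0.25), 3))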

B.2.4 Who Will Collect the Data and How Will It Be Done

The JSA six-month follow-up survey is administered by telephone by professional interviewers working in a centralized computer-assisted telephone interview (CATI) system that allows real-time error checking and observation by supervisors.

B.2.5 Unusual Problems Requiring Specialized Sampling Procedures

Not applicable.

B.2.6 Periodic Data Collection Cycles

The six-month follow-up survey is administered once per study participant.

B.3 Methods to Maximize Response Rates and Address Non-response

The methods to maximize response rates are discussed below, first with regard to participant tracking and locating and then with regard to the use of monetary tokens of appreciation.

B.3.1 Participant Tracking and Locating

To maximize response to the six-month follow-up survey, the JSA Evaluation team developed a comprehensive participant tracking system. This multi-stage locating strategy blends active locating efforts (which involve direct participant contact) with passive locating efforts (which rely on various consumer database searches).

Active tracking for the JSA Evaluation began when the research team sent the previously approved welcome packet to all sample members within the first month of enrollment. This packet consisted of a welcome letter, a study brochure, a contact update form and business reply envelope, and a $2 bill.⁴ The welcome letter and study brochure provided comprehensive information about the tracking and survey data collection activities. The contact update form captured updates to the respondent's name, address, telephone, and email information. It also collected contact data for up to three people who do not live with the participant but would likely know how to reach him or her. Interviewers only use secondary contact data if the primary contact information proves to be invalid—for example, if they encounter a disconnected telephone number or a returned letter marked undeliverable. Attachment A of Supporting Statement A shows a copy of the contact update form.

Sample members were invited to complete a short interim tracking survey once a month. Participants who provided consent at enrollment to be contacted by text message complete the interim tracking survey via SMS. Sample members who declined to provide consent to text message contact, or who do not have cell phones, are invited to complete the interim survey online. This survey captures data on current employment and JSA service receipt. It also prompts study participants to update their contact information, and the contact information of up to three friends or relatives (comparable to the contact update form). Participants who respond to the monthly interim surveys receive a token of appreciation (see B.3.2 below). The use of the previously approved welcome packets and interim surveys will be complete by the time the current approval expires. Only data collection using the previously approved six-month follow-up survey will continue beyond the February 28, 2018 expiration date.

In addition to the direct contact with participants, the research team conducts several database searches to obtain additional contact information. Passive tracking resources are comparatively inexpensive to access and generally available, although some sources require special arrangements for access.

B.3.2 Tokens of Appreciation

Offering appropriate monetary gifts to study participants in appreciation for their time can help ensure a high response rate, which is necessary for an unbiased impact analysis. For this reason, sample members receive small tokens of appreciation during the six-month period between enrollment and the follow-up survey data collection. Study participants receive $2 initially, as part of their welcome packet. Those who complete the interim surveys accrue an additional $2 for each completed interim survey, as explained in the welcome letter and study brochure. Just prior to the start of the six-month follow-up survey, the team sends a survey pre-notification letter explaining the purpose of the follow-up telephone survey, the expectations of participants who agree to complete the telephone survey, and the promise of an additional $25 as a token of appreciation for their participation. The token structure is based upon similar reward models commonly used in consumer panels, where panelists earn points for each survey they complete and, when they reach a certain level, may redeem their points for rewards (such as gift cards or cash) or continue to accrue points for even larger rewards. The survey pre-notification letter thanks study participants for their time in the study and includes the cumulative amount, if any, accrued by completing the interim surveys. For example, a study participant who responds to three of the five interim surveys will receive $6 with the pre-notification letter, and someone who responds to all five interim surveys will receive $10. Finally, study participants who complete the six-month follow-up survey receive a check for $25 as a token of appreciation for their time spent participating in the survey. In total, enrolled participants can receive between $2 and $37, depending on how many rounds of data collection they complete.

B.3.3 Sample Control during the Data Collection Period

During the data collection period, the research team will minimize non-response levels and the risk of non-response bias in the following ways:

  • Using trained interviewers (in the phone center) who are skilled at working with low-income adults and skilled in maintaining rapport with respondents, to minimize the number of break-offs and incidence of non-response bias.

  • Using updated contact information captured through the contact update form or the monthly interim surveys to keep sample members engaged in the study and to enable the research team to locate them for the follow-up data collection activities.

  • Using an advance letter that clearly conveys to study participants the purpose of the survey, the incentive structure, and reassurances about privacy, so they will perceive that cooperating is worthwhile.

  • Taking additional tracking and locating steps, as needed, when the research team does not find sample members at the phone numbers or addresses previously collected.

  • Employing a rigorous telephone process to ensure that all available contact information is utilized to make contact with participants. The approach includes Spanish-speaking telephone interviewers for participants with identified language barriers.

  • Requiring the survey supervisors to manage the sample in a manner that helps to ensure that the response rates achieved are relatively equal across the two experimental groups and across sites.

The researchers will link data from various sources through a unique study identification number. This will ensure that survey responses are stored separately from personal identifying information, thereby protecting respondent privacy.

B.3.4 Nonresponse Bias Analysis and Nonresponse Weighting Adjustment

If, despite our best efforts, the response rate in a site comes in below 80 percent, we will conduct a nonresponse bias analysis. Regardless of the final response rate, we will construct nonresponse adjustment (NRA) weights. Using both baseline data collected just prior to random assignment and post-random-assignment administrative data on continued receipt of TANF and SNAP, we will estimate response propensity with a logistic regression model. Within each combination of site and experimental arm, study participants will be allocated to nonresponse adjustment cells defined by intervals of response propensity, with each cell containing approximately the same number of study participants. Within each nonresponse adjustment cell, the empirical response rate will be calculated, and respondents will be given NRA weights equal to the inverse of the empirical response rate for their cell. An alternative propensity adjustment method could use the directly modeled estimates of response propensity; however, these estimates can sometimes be close to zero, creating very large weights, which in turn lead to large survey design effects. The use of nonresponse adjustment cells typically results in smaller design effects. The number of cells will be set as a function of model quality: the empirical response rates for the cells should be monotonically related to the average predicted response propensity. We will start with a large number of cells and reduce that number until we obtain the desired monotonic relationship.
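A minimal sketch of the cell-based weighting procedure just described follows, in Python with hypothetical file and column names; the choice of five cells and the covariate list are placeholders, since the actual number of cells will be tuned until the monotonicity condition above is met.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical file of all randomized cases (respondents and nonrespondents),
    # with baseline (BIF) measures and post-random-assignment TANF/SNAP indicators.
    df = pd.read_csv("jsa_full_sample.csv")

    covs = ["age", "female", "prior_earnings", "tanf_receipt_post_ra", "snap_receipt_post_ra"]

    # Step 1: model response propensity with logistic regression.
    model = LogisticRegression(max_iter=1000).fit(df[covs], df["responded"])
    df["propensity"] = model.predict_proba(df[covs])[:, 1]

    # Steps 2-3: within each site-by-arm combination, form approximately equal-sized
    # cells by propensity intervals, then weight respondents by the inverse of their
    # cell's empirical response rate.
    pieces = []
    for _, grp in df.groupby(["site", "arm"]):
        grp = grp.copy()
        grp["cell"] = pd.qcut(grp["propensity"], q=5, labels=False, duplicates="drop")
        cell_rate = grp.groupby("cell")["responded"].transform("mean")
        grp["nra_weight"] = (1.0 / cell_rate).where(grp["responded"] == 1)
        pieces.append(grp)

    weighted = pd.concat(pieces)

The same loop could be rerun with a different number of cells until the empirical response rates increase monotonically with the average predicted propensity, as described above.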

Once provisional weights have been developed, we will look for residual nonresponse bias by comparing the estimates of the effects of the higher-intensity JSA strategy on administrative outcomes, estimated with the NRA weights in the sample of survey respondents, with the estimates of the same effects estimated on the entire randomized sample (including survey nonrespondents) without weights. If they are similar (e.g., within each other's confidence intervals), then we will be reasonably confident that we have ameliorated nonresponse bias. If, on the other hand, there are important differences, then we will search for ways to improve our models and recalculate the weights, as in Judkins et al. (2007).

B.4 Tests of Procedures or Methods to be Undertaken

In designing the JSA six-month follow-up survey, the research team developed items based on those used in previous studies, including the PACE (OMB No. 0970-0397) and HPOG (OMB No. 0970-0394) evaluations conducted for ACF. The study also drew questions from a range of studies of comparable populations (see Part A for detail on the surveys consulted). We conducted a formal pretest of the follow-up survey with a convenience sample of nine respondents whose characteristics and job search statuses were comparable to those of the study participants. The results of the pretest were provided to ACF and incorporated into the survey prior to the start of the six-month follow-up survey.





B.5 Individuals Consulted on Statistical Aspects of the Design

Consultations on the statistical methods used in this study have been undertaken to ensure the technical soundness of the research. The following individuals were consulted in preparing this submission to OMB:

        1. ACF

Ms. Erica Zielewski Contracting Officer’s Representative (previous)

Ms. Carli Wulff Contracting Officer’s Representative

Mr. Mark Fucello Division Director

Ms. Naomi Goldstein Deputy Assistant Secretary for Planning, Research and Evaluation

        2. Abt Associates

Ms. Karin Martinson Project Director (301) 347-5726

Dr. Stephen Bell Principal Investigator (301) 634-1721

Mr. David Judkins Statistician (301) 347-5952

Dr. Alison Comfort Analyst (617) 520-2937

Ms. Debi McInnis Survey Operations (617) 349-2627



References

Abt Associates (2014). Pathways for Advancing Careers and Education Evaluation Design Report. OPRE Report #2014-76, Washington, DC: Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services.


Centers for Disease Control and Prevention (2012). "CDC's Vision for Public Health Surveillance in the 21st Century." Morbidity and Mortality Weekly Report, July 27, 2012, Supplement, Vol. 61.


Chang, T., Gossa, W., Sharp, A., Rowe, Z., Kohatsu, L., Cobb, E. M., & Heisler, M. (2014). Text Messaging as a Community-Based Survey Tool: A Pilot Study. BMC Public Health, 14(1), 936. doi:10.1186/1471-2458-14-936.


Gurol-Urganci, I., de Jongh, T., Vodopivec-Jamsek, V., Car, J., & Atun, R. Mobile Phone Messaging for Communicating Results of Medical Investigations.


Judkins, D., Morganstein, D., Zador, P., Piesse, A., Barrett, B., & Mukhopadhyay, P. (2007). Variable Selection and Raking in Propensity Scoring. Statistics in Medicine, 26, 1022-1033.


Nisar, H., Klerman, J. A., & Juras, R. (2012). Estimation of Intra-Class Correlation in Job Training Programs (working paper). Bethesda, MD: Abt Associates.



Vervloet, M., Linn, A. J., van Weert, J. C., de Bakker, D. H., Bouvy, M. L., & van Dijk, L. (2012). The Effectiveness of Interventions Using Electronic Reminders to Improve Adherence to Chronic Medication: A Systematic Review of the Literature. Journal of the American Medical Informatics Association, 19(5), 696-704.



1 Any statistically significant findings for second quarter employment that emerge from the planned test procedure will constitute confirmatory evidence with a high degree of certainty that a non-zero differential impact has truly occurred (90-percent certainty, given the 0.10 alpha factor used in the test procedure). A lower probability of "tripping" the statistical significance "wire"—such as the 60- and 50-percent odds referenced here—does not change this.

2 Projected through the end of survey data collection in December 2017.

3 Projected through the end of survey data collection in February 2018.

4 The JSA Evaluation enrollment period began in October 2015 in the first study site, prior to OMB approval of the contact update form and use of tokens of appreciation. Welcome packets for the participants enrolled prior to OMB approval will only contain a cover letter and a study brochure. Tracking for these early enrollees will draw heavily upon passive tracking sources until OMB approval is received and this group can be transitioned into the active tracking system as well.

