Supporting Statement B
For the Paperwork Reduction Act of 1995: Approval for the Participant Contact Information Form, Interim Surveys, and Follow-up Survey for the Job Search Assistance Strategies Evaluation
OMB No. 0970-0440
November 9, 2015
Submitted by:
Office of Planning, Research & Evaluation
Administration for Children & Families
U.S. Department of Health and Human Services
Federal Project Officer
Erica Zielewski
Table of Contents
Part B: Collection of Information Employing Statistical Methods
B.1 Respondent Universe and Sampling Methods
B.2 Procedures for Collection of Information
B.3 Methods to Maximize Response Rates and Address Non-response
B.4 Tests of Procedures or Methods to be Undertaken
B.5 Individuals Consulted on Statistical Aspects of the Design
OMB approval was received in October 2013 for JSA Strategies Evaluation data collection instruments used as part of the field assessment and site selection process (OMB No. 0970-0440). Instruments approved in that earlier submission included the Discussion Guide for Researchers and Policy Experts, the Discussion Guide for State and Local TANF Administrators, and the Discussion Guide for Program Staff. Data collection with these previously approved instruments is complete. OMB approved the next set of data collection forms—the Baseline Information Form, Staff Surveys, and Implementation Study Site Visit Guides—under the same control number on November 30, 2014.
This submission seeks OMB approval for three additional data collection activities that will be part of the JSA Strategies Evaluation:
Contact update form. A paper version of this form will be included in a “welcome packet” that is mailed to sample members shortly after study induction to encourage them to complete the survey. The purpose of the form is to update sample members’ contact information, including information on alternate contacts, for the study’s follow-up survey.
Interim tracking surveys. This activity involves conducting a brief monthly survey with sample members during the first five months after random assignment. The primary purpose of these surveys is to keep contact information current and maintain the sample member’s connection to the study. The contact information captured will be comparable to that collected with the contact update form. The interim surveys will also provide information on employment and participation in job search support services. These surveys will be conducted via text messaging.1
JSA six-month follow-up survey instrument. This survey, administered to sample members by telephone, will be a key source for outcomes of interest in the JSA Strategies Evaluation. While the principal source of employment and earnings data for the impact study is quarterly Unemployment Insurance records from the National Directory of New Hires (NDNH) (maintained by the Office of Child Support Enforcement (OCSE) at HHS), this follow-up survey will provide critical information on additional measures of interest. This includes the content, mode, and duration of job search assistance services received; job characteristics related to job quality (e.g. wage, benefits, and schedule); factors affecting the ability to work; public benefit receipt beyond TANF; and household income. Survey results will be used in both the impact study, to collect data on key outcomes, and the implementation study, to document the JSA services received.
The respondent universe for this study reflects the sites chosen for the JSA Strategies Evaluation and the cash assistance applicants and recipients enrolled into the study in these sites. Site selection criteria for the study included:
Willingness to operate two JSA approaches simultaneously;
Willingness to allow a random assignment lottery to determine which individuals participate in which model;
Ability to conduct random assignment for a relatively large number of cash assistance applicants and/or recipients (at least 1,000 in each site);
Ability to implement the desired approach with fidelity and consistency—which may hinge on whether the site has policies and systems in place to ensure that tested services are implemented relatively uniformly in all locations that are participating in the evaluation;
Commitment to preventing study subjects from “crossing over” to a different JSA approach than the one to which they are assigned; and
Capability to comply with the data collection requirements of the evaluation.
As of the time of this submission in October 2015, site selection is still underway. Thus far, one site has started random assignment to the two job search assistance approaches: the TANF program operated by the Human Resources Administration (HRA) in New York, NY. The study is being conducted in four TANF offices across Brooklyn and Queens. In this site, cash assistance applicants are randomly assigned to an approach that requires participation in job search classes and activities for 35 hours per week or to one that requires applicants to meet weekly with a staff person who supervises their job search. The research team is currently working with four other states and localities to develop the evaluation design, including Sacramento County, CA; Ramsey County, MN; Genesee and Wayne Counties, MI; and Westchester County, NY. We expect that most of these sites will start random assignment in early 2016.
Because the final number of sites has not been determined, we cannot at this time predict the sample size for the evaluation. However, the follow-up survey will target a maximum of 8,000 sample members. If all sites participate at the level planned, all sample members randomly assigned to one or the other JSA approach in a site will be part of the respondent universe. The survey will use 100-percent sampling if fewer than 8,000 individuals enroll. If more than 8,000 enroll, the sample will include only the first 8,000 to enter the study chronologically. Exhibit B.1 shows sample sizes and predicted response rates for each of the three instruments assuming enrollment of at least 8,000. The predicted response rate for the contact update form is low because we will only ask those with changes in their contact information to return it.
Exhibit B.1: Sample Sizes and Response Rates by Instrument
Instrument | Selected | Returned | Response Rate
Contact update form | 8,000 | 1,200 | 15%
Interim tracking survey (per round) | 8,000 | 2,800 | 35%
JSA Six-month Follow-up Survey | 8,000 | 6,400 | 80%
As described, it is likely that no sampling will be required for the six-month follow-up survey. If 8,000 or fewer individuals enroll in the study, all of them will be selected for the follow-up survey. If more than 8,000 beneficiaries enroll, those beyond the 8,000th participant chronologically will not be selected for follow-up. This is the only practical sampling procedure for this study, given the start-up period for the sites and the relatively short lag from the point of random assignment to the follow-up interview.
We start this section with a restatement of the JSA Evaluation research questions outlined in Section A.1.1 of Supporting Statement A. The evaluation will address the following principal research question:
What are the differential impacts of alternative TANF JSA approaches on short-term employment and earnings outcomes?
In addition, the evaluation will address the following secondary research questions:
What are the differential impacts of alternative TANF JSA models on: (a) job quality (including wages, work-related benefits, consistency and predictability of hours); (b) public benefits received; (c) family economic well-being; and (d) non-economic outcomes (including motivation to search for a job and psycho-social skills such as perseverance and self-efficacy)?
What components of JSA services are linked to better employment outcomes?
What are the job search strategies used by successful job seekers?
The first research question will be answered with administrative data and is therefore not discussed further here. The other research questions will require data from the six-month follow-up survey. Since these are secondary research questions that will be treated as exploratory rather than confirmatory, no multiple comparison adjustments will be made to these estimated incremental effects. Two-sided hypothesis tests with alpha = 0.05 will be used.
With regard to the second research question, it is likely that the nature of the JSA approaches being studied will vary enough across sites to make it necessary to analyze each site separately when estimating the impacts of the JSA approaches. It is assumed that in each site there will be two alternate JSA approaches being studied, one of which is more structured or requires a greater time commitment than the other. To estimate the effect of being in this approach, it would be possible simply to compute the difference in average outcomes across the two JSA approaches. Although the simple difference in means is an unbiased estimate of the incremental effect of the enhancements in this approach, we will instead estimate intent-to-treat (ITT) impacts using a regression model that adjusts the difference between average outcomes for members of the two groups by controlling for exogenous characteristics measured at baseline. Controlling for baseline covariates reduces distortions caused by random differences in the characteristics of the experimental arms’ group members and thereby improves the precision of impact estimates, allowing the study to detect smaller true impacts. Regression adjustment also helps to reduce the risk of bias due to follow-up data sample attrition. We use the following standard impact equation to estimate the incremental effect of the more structured and time-intensive approach in a given site:
y_i = α + δT_i + βX_i + ε_i
where
y_i is the outcome of interest (e.g., earnings, time to employment);
α is the intercept, which can be interpreted as the regression-adjusted control group mean;
T_i is the treatment indicator (1 for those individuals assigned to the more intensive JSA model; 0 for the individuals assigned to the less intensive JSA model);
δ is the incremental effect of the more intensive JSA model (relative to the less intensive JSA model);
X_i is a vector of baseline characteristics measured by the BIF (baseline information form) and centered around site-level means;
β are the coefficients indicating the contribution of each of the baseline characteristics to the outcome;
ε_i is the residual error term; and
the subscript i indexes individuals.
We will use survey regression software, such as SAS PROC SURVEYREG, for the analysis of incremental effects of the various JSA models on survey-measured endpoints so that we can use weights in the analysis. The weights will reflect inverse probabilities of response to the survey, modeled through logistic regression as a function of baseline characteristics measured by the BIF, as discussed further in Section B.3.4. With regard to clustering, although multiple offices will typically be involved in each study site, it appears unlikely that there will be more than a handful of cooperating offices within any single site. As a result, it seems likely to be infeasible to reflect the impact of office-level clustering on variances.
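To make the estimation concrete, the sketch below applies the impact equation to simulated data for a single site. It is only an illustration under assumed variable names (treat, age, educ_years, nr_weight, earnings); the production analysis would be run in survey regression software such as SAS PROC SURVEYREG, which also provides design-consistent variance estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated data for one site; all column names are hypothetical stand-ins.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),           # T_i: 1 = more intensive JSA model
    "age": rng.normal(33, 9, n),              # baseline covariates from the BIF
    "educ_years": rng.integers(9, 17, n),
    "nr_weight": rng.uniform(1.0, 2.0, n),    # nonresponse-adjustment weight (Section B.3.4)
})
df["earnings"] = 2000 + 300 * df["treat"] + 20 * df["age"] + rng.normal(0, 800, n)

# Center baseline covariates around the site-level mean, as in the impact equation.
covariates = ["age", "educ_years"]
X = df[covariates] - df[covariates].mean()
design = sm.add_constant(pd.concat([df["treat"], X], axis=1))

# Weighted least squares reproduces the survey-weighted point estimate of delta.
fit = sm.WLS(df["earnings"], design, weights=df["nr_weight"]).fit()
print(fit.params["treat"])  # delta-hat: incremental effect of the more intensive model
```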
For the third research question, we plan two types of analysis. One will closely resemble the analyses for the second research question, but will focus on estimating differences in the level and content of job search services received by sample members in each of the two groups for each site. These will help interpret any significant findings related to the first and second research questions. The other type of analysis will involve developing models of earnings and TANF receipt in terms of use of various JSA services measured in the survey. These exploratory analyses will examine linkages between JSA services and employment and earnings. These analyses will be conducted on the pooled dataset and will be useful for developing recommendations for the next generation of JSA models.
For the fourth research question, we will also build a logistic model for employment in terms of job search strategies. We anticipate that this model may involve a fairly deep set of interactions. To try to minimize the danger of overfitting this model, we will reserve at least a fourth of the sample for testing models developed on the first portion of the sample.
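As a hypothetical illustration of that holdout check, the sketch below reserves a quarter of a simulated sample, fits a logistic employment model with an interaction term on the remainder, and evaluates fit on the held-out cases; all strategy names are illustrative, not items from the actual survey.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical job search strategy indicators; the real model would use
# survey-measured strategies and a richer set of interactions.
rng = np.random.default_rng(1)
n = 6000
X = pd.DataFrame({
    "online_search": rng.integers(0, 2, n),
    "networking": rng.integers(0, 2, n),
})
X["online_x_networking"] = X["online_search"] * X["networking"]  # example interaction
logit = -0.5 + 0.4 * X["networking"] + 0.3 * X["online_x_networking"]
employed = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Reserve a quarter of the sample; fit on the rest and check out-of-sample fit.
X_fit, X_hold, y_fit, y_hold = train_test_split(X, employed, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
print(roc_auc_score(y_hold, model.predict_proba(X_hold)[:, 1]))  # out-of-sample discrimination
```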
Minimum detectable effects (MDEs) of the JSA Evaluation are given in Exhibit B.2 for several different possible site sample sizes. As site recruitment is ongoing, it is impossible at the current time to be specific about actual sample sizes. However, even at the smallest sample size considered for a site, 1,000, the MDEs are adequate. For example, a “4.0 pp” entry means that a true change from 15% to 19% can be detected with 80 percent power.
As noted above, estimates of incremental effects of alternate JSA approaches on survey-measured endpoints are a secondary purpose of the survey data collection. For the primary purpose of exploring linkages between JSA services and employment and earnings outcomes on the pooled dataset, there are no fixed precision requirements. The main reason for wanting a large pooled sample size is to permit the application of “greedy algorithms” for data mining. Given the large dimension of the space defined by various combinations of potential services, a large sample size is required to maintain reasonable control over generation of false positive findings of relationships.
Exhibit B.2: Minimum Detectable Effects (MDEs) of JSA Approaches
Statistic | Percent employed at job with hourly wage of $15 or more, health insurance, and paid sick days (Survey)
MDE given 3,000 randomized and 2,400 surveyed | 4.0 pp
MDE given 2,000 randomized and 1,600 surveyed | 4.9 pp
MDE given 1,000 randomized and 800 surveyed | 7.1 pp
Control group mean | 15%
Threshold p-value for statistical significance | 0.05
Power | 0.80
 | 0.15
Note: MDEs are based on two-tailed tests. They are presented for detection of improvements but are also capable of detecting reversals. The assumptions and calculations are similar to those in Abt Associates (2014) for the evaluation of Pathways for Advancing Careers and Education (PACE) (OMB No. 0970-0397). The projected variance reductions due to use of baseline variables are from Nisar, Klerman, and Juras (2013).
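As a rough cross-check on Exhibit B.2, the standard two-proportion MDE formula approximately reproduces the tabulated values when an equal split across the two arms is assumed and the covariate-based variance reduction is ignored; the following is only an illustrative calculation, not the study’s exact procedure.

```python
from scipy.stats import norm

alpha, power, p_control = 0.05, 0.80, 0.15
factor = norm.ppf(1 - alpha / 2) + norm.ppf(power)    # about 2.80 for a 5% two-sided test, 80% power

for surveyed in (2400, 1600, 800):
    n_per_arm = surveyed / 2                           # assumes equal allocation to the two arms
    se = (2 * p_control * (1 - p_control) / n_per_arm) ** 0.5
    print(surveyed, round(100 * factor * se, 1), "pp")
# Roughly: 2400 -> 4.1 pp, 1600 -> 5.0 pp, 800 -> 7.1 pp, close to the Exhibit B.2 entries
```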
The contact update form will be self-administered on paper. The form will be mailed to sample members with a welcome letter, a study overview brochure, and a $2 token of appreciation. Participants will be encouraged to retain the form and return it if there are any changes in contact information.
The interim tracking survey will be administered through text messaging for those who have given consent to receive text messages on their cell phone. Researchers will send a short message service (SMS) text message to sample members inviting them to complete the interim tracking survey each month.2 Sample members will have seven days to complete the survey. If no response is received or the survey is only partially complete, a reminder text will be sent on the sixth day. Sample members who refuse to provide consent to text, or who do not have cell phones, will receive an email with a link to an online version of the survey. The JSA Six-Month Follow-up Survey will be administered by telephone by professional interviewers working in a centralized computer-assisted telephone interview (CATI) system that allows real-time error checking and observation by supervisors.
Not applicable.
The contact form and the six-month follow-up survey will each be administered once. The interim tracking survey will be administered monthly for a period of up to five months. Because text tracking of cash assistance applicants and recipients to increase response rates on a subsequent survey is a new procedure, the optimal frequency of contact is unknown. We think that monthly contact will help capture new contact information before too much time has elapsed and individuals become more difficult to locate.
The methods to maximize response rates are discussed first with regard to participant tracking and locating, and then with regard to the use of monetary tokens of appreciation.
The JSA Strategies Evaluation team developed a comprehensive participant tracking system in order to maximize response to the six-month follow-up survey. This multi-stage locating strategy blends active locating efforts (which involve direct participant contact) with passive locating efforts (which rely on various consumer database searches).
The active tracking planned for the JSA Strategies Evaluation begins with a welcome packet, sent to all sample members within the first month of enrollment. This packet will consist of a welcome letter, a study brochure, a contact update card and business reply envelope, and a $2 bill.3 The welcome letter and study brochure provide comprehensive information about the tracking and survey data collection activities. The contact update form will capture updates to the respondent’s name, address, telephone, and email information. It will also collect contact data for up to three people who do not live with the participant but are likely to know how to reach him or her. Interviewers will only use secondary contact data if the primary contact information proves to be invalid—for example, if they encounter a disconnected telephone number or a returned letter marked undeliverable. Attachment A of Supporting Statement A shows a copy of the contact update form.
Sample members will be invited to complete a short interim tracking survey once a month. Participants who provide consent to text at enrollment will complete the interim tracking survey via an SMS text message. Sample members who refuse to provide consent to text message contact or do not have cell phones will be invited to complete the interim survey online. This survey will capture data on current employment and JSA service receipt. It will also prompt study participants to update their contact information and the contact information of up to three friends or relatives (comparable to the contact update form). Participants who respond to the monthly interim surveys will receive a token of appreciation (see B.3.2 below).
In addition to the direct contact with participants, the research team will conduct several database searches to obtain additional contact information. Passive tracking resources are comparatively inexpensive to access and generally available, although some sources require special arrangements for access.
Offering appropriate monetary gifts to study participants in appreciation for their time can help ensure a high response rate, which is necessary for unbiased impact analysis. For this reason, sample members will receive small tokens of appreciation during the six-month period between enrollment and the follow-up survey data collection. Study participants will receive $2 initially, as part of their welcome packet. Those who complete the interim survey will accrue an additional $2 for each completed interim survey, as explained in the welcome letter and study brochure. Just prior to the start of the six-month follow-up survey, the team will send a survey pre-notification letter explaining the purpose of the follow-up telephone survey, the expectations of participants who agree to complete the telephone survey, and the promise of an additional $25 as a token of appreciation for their participation. The token structure is based upon similar reward models commonly used in consumer panels, where panelists earn points for each survey they complete and, when they reach a certain level, may redeem their points for rewards (such as gift cards or cash) or continue to accrue points for even larger rewards. The survey pre-notification letter will thank study participants for their time in the study and will include the cumulative amount, if any, accrued by completing the interim surveys. For example, a study participant who responds to three of the five interim surveys will receive $6, and someone who responds to all five interim surveys will receive $10 with their pre-notification letter. Finally, study participants who complete the six-month follow-up survey will receive a check for $25 as a token of appreciation for their time spent participating in the survey. In total, enrolled participants can receive between $2 and $37, depending on how many rounds of data collection they complete.
During the data collection period, the research team will minimize non-response levels and the risk of non-response bias in the following ways:
Using trained interviewers (in the phone center) who are skilled at working with low-income adults and skilled in maintaining rapport with respondents, to minimize the number of break-offs and incidence of non-response bias.
Using updated contact information captured through the contact update form or the interim surveys conducted monthly to keep sample members engaged in the study and to enable the research team to locate them for the follow-up data collection activities.
Using an advance letter that clearly conveys to study participants the purpose of the survey, the incentive structure, and reassurances about privacy, so they will perceive that cooperating is worthwhile.
Taking additional tracking and locating steps, as needed, when the research team does not find sample members at the phone numbers or addresses previously collected.
Employing a rigorous telephone process to ensure that all available contact information is utilized to make contact with participants. The approach includes Spanish-speaking telephone interviewers for participants with identified language barriers.
Requiring the survey supervisors to manage the sample in a manner that helps to ensure that response rates achieved are relatively equal across treatment and control groups and sites.
The researchers will link data from various sources through a unique study identification number. This will ensure that survey responses are stored separately from personal identifying information, thus ensuring respondent privacy.
If, despite our best efforts, the response rate in a site comes in below 80 percent, we will conduct a nonresponse bias analysis. Regardless of the final response rate, we will construct nonresponse adjustment (NRA) weights. Using both baseline data collected just prior to random assignment and post-random assignment administrative data on continued receipt of TANF and SNAP, we will estimate response propensity with a logistic regression model. Within each combination of site and experimental arm, study participants will be allocated to nonresponse adjustment cells defined by intervals of response propensity. Each cell will contain approximately the same number of study participants. Within each nonresponse adjustment cell, the empirical response rate will be calculated. Respondents will then be given NRA weights equal to the inverse empirical response rate for their respective cell. An alternative propensity adjustment method could use the directly modeled estimates of response propensity. However, these estimates can sometimes be close to zero, creating very large weights, which in turn lead to large survey design effects. The use of nonresponse adjustment cells typically results in smaller design effects. The number of cells will be set as a function of model quality. The empirical response rates for the cells should be monotonically related to the average predicted response propensity. We will start with a large number of cells and reduce that number until we obtain the desired monotonic relationship.
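A minimal sketch of this cell-based adjustment, run on simulated data for a single site and experimental arm with hypothetical covariate names, might look like the following; the production version would use the BIF and administrative variables described above and would form cells within each site-by-arm combination.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated frame of randomized sample members; covariate names are stand-ins.
rng = np.random.default_rng(2)
n = 4000
frame = pd.DataFrame({
    "age": rng.normal(33, 9, n),              # stand-in for a BIF baseline item
    "tanf_receipt": rng.integers(0, 2, n),    # stand-in for a post-randomization administrative flag
})
true_p = 1 / (1 + np.exp(-(0.5 + 0.02 * (frame["age"] - 33) + 0.6 * frame["tanf_receipt"])))
frame["responded"] = rng.binomial(1, true_p)  # 1 = completed the follow-up survey

# 1. Model response propensity with logistic regression on baseline and administrative data.
X = sm.add_constant(frame[["age", "tanf_receipt"]])
propensity = sm.Logit(frame["responded"], X).fit(disp=0).predict(X)

# 2. Allocate cases to adjustment cells of roughly equal size by predicted propensity
#    (start with more cells and collapse until empirical response rates rise
#    monotonically with average predicted propensity, as described above).
frame["cell"] = pd.qcut(propensity, q=5, labels=False)

# 3. Give respondents a weight equal to the inverse empirical response rate of their cell.
cell_rate = frame.groupby("cell")["responded"].mean()
frame["nr_weight"] = np.where(frame["responded"] == 1,
                              1.0 / frame["cell"].map(cell_rate), np.nan)
print(cell_rate)  # empirical response rates should increase across cells
```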
Once provisional weights have been developed, we will look for residual nonresponse bias by comparing the estimates of the effects of the higher intensity JSA strategy on administrative outcomes, estimated with the NRA weights in the sample of survey respondents, with the estimates of the same effects estimated on the entire randomized sample (including survey nonrespondents) without weights. If they are similar (e.g., within each other’s confidence intervals), then we will be reasonably confident that we have ameliorated nonresponse bias. If, on the other hand, there are important differences, then we will search for ways to improve our models and recalculate the weights, as in Judkins et al. (2007).
The research team did not conduct a pretest of the contact update form. However, the items on this form are adapted from similar studies of comparable populations, including the Pathways for Advancing Careers and Education Evaluation (PACE) (OMB # 0970-0397) and the Health Professions Opportunity Grants (HPOG) Impact Evaluation (OMB # 0970-0394), both conducted for ACF, and the Green Jobs and Health Care Impact Evaluation (OMB # 1205-0481), conducted for the U.S. Department of Labor. The contact update form will be translated into Spanish once the English version is finalized.
These interim surveys constitute a methods test. We have not used this approach before but are recommending it because of the challenge of achieving an 80-percent response rate on the six-month follow-up survey using phone administration only. Studies with comparable populations, such as HPOG and PACE, include an in-person follow-up component. The in-person follow-up allows field interviewers to go to the respondent’s home, go door-to-door, and talk to neighbors to try to get updated contact information. A CATI-only first follow-up survey would typically be expected to achieve a response rate of 60 to 65 percent. We are hopeful that the interim tracking surveys will allow us to achieve higher response rates with a CATI-only approach than would typically be possible, by engaging participants in the study and keeping their contact information as up to date as possible through the convenient use of text and email. Text-based surveys are a rapidly developing data collection methodology. They have been commonly used in consumer research and are emerging rapidly in public health research (CDC, 2012). Preliminary studies show promise for reaching and engaging low-income populations (Chang et al., 2014; Vervloet et al., 2012).
Content from the tracking survey on current employment status and receipt of JSA services will mostly be used for methods research. We will use the information to compare reporting patterns in monthly contacts versus retrospective recall over six months. We considered asking questions about contact updating only, but we are concerned that such an approach might make it difficult for respondents to distinguish our messages from spam or malware and thereby reduce response rates.
The research team is planning to test the methodology with no more than nine people whose characteristics are similar to those of the study participants. We will also explore whether the text survey can be administered in Spanish.
In designing the JSA six-month follow-up survey, the research team developed items based on those used in previous studies, including the PACE (OMB # 0970-0397) and HPOG (OMB # 0970-0394) evaluations conducted for ACF. The study also drew questions from a range of studies of comparable populations (see Part A for detail on surveys consulted). We will conduct a formal pretest of the follow-up survey with a convenience sample of nine respondents whose characteristics and job search statuses are comparable to those of the study participants. The results of the pretest will be provided to ACF.
These pretests will provide more definitive estimates of the length of the survey and its various components, as well as lead to improvements in questions, introduction scripts, wording, and document formatting. Following the pretests, respondents will be debriefed about the clarity of the questions and any potential problems with the instruments. Interviewers will also be debriefed concerning any problems they encountered in the survey administration, and they will recommend improvements. The pretest findings will be used to modify the instrument as needed. However, given that many of the questions are from existing surveys, we do not expect many changes in the instruments after piloting. The survey questionnaire will be translated into Spanish once the English version is finalized.
Consultations on the statistical methods used in this study have been undertaken to ensure the technical soundness of the research. The following individuals were consulted in preparing this submission to OMB:
Ms. Erica Zielewski Contracting Officer’s Representative
Ms. Carli Wuff Contracting Officer’s Representative
Mr. Mark Fucello Division Director
Ms. Naomi Goldstein Deputy Assistant Secretary for Planning, Research and Evaluation
Ms. Karin Martinson Project Director (301) 347-5726
Dr. Stephen Bell Principal Investigator (301) 634-1721
Mr. David Judkins Statistician (301) 347-5952
Dr. Alison Comfort Analyst (617) 520-2937
Ms. Debi McInnis Survey Operations (617) 349-2627
Abt Associates (2014). Pathways for Advancing Careers and Education Evaluation Design Report. OPRE Report #2014-76. Washington, DC: Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services.
CDC (2012). CDC’s Vision for Public Health Surveillance in the 21st Century. Morbidity and Mortality Weekly Report, July 27, 2012, Supplement/Vol. 61.
Chang, T., Gossa, W., Sharp, A., Rowe, Z., Kohatsu, L., Cobb, E. M., & Heisler, M. (2014). Text messaging as a community-based survey tool: A pilot study. BMC Public Health, 14(1), 936. DOI: 10.1186/1471-2458-14-936.
Gurol-Urganci, de Jongh, T., Vodopivec-Jamsek, V., Car, J., & Atun, R. Mobile phone messaging for communicating results of medical investigations.
Judkins, D., Morganstein, D., Zador, P., Piesse, A., Barrett, B., & Mukhopadhyay, P. (2007). Variable selection and raking in propensity scoring. Statistics in Medicine, 26, 1022-1033.
Vervloet, M., Linn, A. J., van Weert, J. C., de Bakker, D. H., Bouvy, M. L., & van Dijk, L. (2012). The effectiveness of interventions using electronic reminders to improve adherence to chronic medication: A systematic review of the literature. Journal of the American Medical Informatics Association, 19(5), 696-704.
1 Participants can choose not to give their consent to contact via text message. Those who do not consent to text messaging or do not have cell phones will receive an email invitation to participate in the interim tracking surveys online.
2 SMS stands for “short message service.” This type of texting does not require a smart phone. It allows for an exchange of short messages to be threaded together, a critical feature for the administration of a short survey.
3 The JSA Strategies Evaluation enrollment period began in October 2015 in the first study site, prior to OMB approval of the contact update form and use of tokens of appreciation. Welcome packets for the participants enrolled prior to OMB approval will only contain a cover letter and a study brochure. Tracking for these early enrollees will draw heavily upon passive tracking sources until OMB approval is received and this group can be transitioned into the active tracking system as well.