Part A of the Supporting Statement for RExO follow-up surveys, 5.14.2012


Evaluation of the Reintegration of Ex-Offenders—Adult Program (RExO)

OMB: 1205-0498


SUPPORTING STATEMENT FOR

PAPERWORK REDUCTION ACT OF 1995 SUBMISSION

Evaluation of the Reintegration of Ex-Offenders—Adult Program (RExO)


The U.S. Department of Labor (DOL), Employment and Training Administration (ETA) is seeking approval from the Office of Management and Budget (OMB) to collect information from program participants and staff in the evaluation of RExO. This evaluation aims to examine the impact of comprehensive employment-centered services on formerly incarcerated individuals’ employment, earnings, and recidivism. The evaluation will rely on a comparison of the outcomes for RExO service recipients with those for eligible individuals who are randomly assigned to the control group and do not receive RExO services. Information will come from two rounds of surveys of participants in the treatment and control groups, which will include questions about relevant respondent characteristics as well as employment, earnings, and offending after random assignment.


RExO began in 2005 as a joint initiative of DOL, the Department of Justice (DOJ), and several other federal agencies. The purpose of the program, which was formerly known as the Prisoner Re-Entry Initiative (PRI), is to provide employment-centered services as well as case management, mentoring and a range of other supportive services to nonviolent offenders who are newly released from prison. The initiative’s design builds on several earlier and ongoing federal reentry initiatives, mostly supported by DOJ or DOL, including Weed and Seed, the Serious and Violent Offender Reentry Initiative, the Reentry Partnership Initiative, and Ready4Work.


PRI funding was first awarded to 30 community-based organization grantees in April 2005 and renewed in 2006 and 2007. In 2008, DOL conducted a limited competition for the fourth year of funding, as a result of which 24 of the 30 grantees received awards and agreed to participate in this random assignment study. These 24 programs obtained additional funding in 2009.


RExO grantee programs follow a three-stage reentry framework that begins with pre-release services, progresses through structured community-based reentry programming, and culminates in community reintegration with a reduced need for program services. A typical grantee program participant receives services for about three months with continued follow-up of up to a year.


In 2009, ETA contracted with Social Policy Research Associates (SPRA), a research, evaluation and technical assistance firm located in Oakland, California, to carry out an impact evaluation of RExO. MDRC and NORC at the University of Chicago are serving as SPRA’s subcontractors, with the former involved in the administration of random assignment in the 24 participating sites as well as site visits to learn about the program’s implementation, and the latter conducting the survey of study participants.


Between February 2010 and January 2011, sixty percent of eligible clients at the grantee sites were assigned to the program group, which receives RExO services; the rest were assigned to the control group and may receive other services available in their communities. Altogether, 4,660 participants were assigned to one of the two groups. The impact evaluation design relies on the comparison of the employment, earnings and recidivism outcomes between these two groups.

A. Justification

1. Explain the circumstances that make the collection of information necessary. Identify any legal or administrative requirements that necessitate the collection. Attach a copy of the appropriate section of each statute and regulation mandating or authorizing the collection of information.


This information collection is authorized under Section 172 of the Workforce Investment Act of 1998, which states that “the Secretary shall provide for the continuing evaluation of the programs and activities [carried out under title I of WIA], including those programs and activities carried out under section 171 [Demonstration, Pilot, Multiservice, Research and Multistate Projects].” Section 172 adds in subsection (c), “Evaluations conducted under this section shall utilize appropriate methodology and research designs, including the use of control groups chosen by scientific random assignment methodologies.” Please see Appendix A.


This information collection is essential to carry out the evaluation of RExO, whose purpose is to shed light on the implementation and impact of RExO grants. Evidence from this study may help to inform the design and implementation of offender re-entry programs in the future.


More than two million people are incarcerated in federal and state prisons and local jails, and over 683,000 people were released from state prisons in 2008 (Sabol, West and Cooper, 2009). Formerly incarcerated individuals face obstacles to successful reintegration in their communities, including difficulties with finding a job, housing, services for substance abuse or mental health problems, child support arrears, and family challenges. A Bureau of Justice Statistics study reported that two-thirds of ex-prisoners were rearrested and half were re-incarcerated within three years of release (Langan and Levin, 2002). Formerly incarcerated individuals frequently return to prison for violations of parole conditions, rather than for new crimes (Petersilia, 2003).


The importance of learning what works for successful reentry extends beyond the life chances and career trajectories of former inmates themselves. Urban neighborhoods, to which ex-offenders disproportionately return, already struggle with high concentrations of poverty and other social problems. Clearly, facilitating successful re-integration of ex-offenders is critical for addressing the escalating costs of incarceration and the complex problems of low-income families and communities.


Although there is not sufficient evidence to establish an incontrovertible link between post-release employment and recidivism outcomes, most experts believe that finding stable employment is key for a successful transition to life outside the prison gates. Surveys of returning offenders show that they consider finding a job important in helping them avoid returning to prison (Urban Institute, 2006). However, a large share of former prisoners have low levels of education and work experience, suffer from health problems or face other challenges in finding stable and well-paying jobs. Moreover, although it is difficult to isolate the impact of incarceration on labor market outcomes, several studies have found that earnings – and possibly employment as well – are lower for individuals who have spent time in prison than for otherwise similar individuals who have not been incarcerated (Western, Kling, and Weiman, 2001). Studies also show that employers are reluctant to hire ex-offenders, especially those who are African American, have been convicted of violent offenses, or were recently released, and are increasingly likely to conduct background checks before hiring (Holzer, Raphael and Stoll, 2007).

Because reintegration of returning prisoners into their communities’ labor markets is as important for successful reentry as it is challenging, DOL and other government and private entities are seeking to develop the evidence base on the effectiveness of different models of services to the formerly incarcerated by supporting studies such as this one.


An earlier process and outcomes study of PRI described the grantees’ programs, services, outcomes, and costs. It found that participants spent an average of about twelve weeks in the programs. Nearly all received workforce preparation services such as job readiness classes, and many received direct job placement assistance. About half benefited from mentoring, some of which took place in a group setting. Upon program completion, about two-thirds obtained unsubsidized jobs, and recidivism rates were reported to be substantially lower than the national figures, although problems with tracking participants may have affected the reliability of the data (Holl, et al., 2009).


These prior findings underscore the important role of the present random assignment study in determining the extent to which participant outcomes reflect the impact of RExO programming rather than other influences or characteristics of participants or their communities. For example, Holl, et al. (2009) showed that the rates of employment for PRI participants were higher, and the rates of recidivism much lower, than those found in other studies of prisoner re-entry programs. These findings may imply that the PRI/RExO approach is a promising one for overcoming the multiple obstacles returning prisoners face. On the other hand, the initiative may be serving a distinct subset of the ex-prisoners who tend to do better with or without the program. This random assignment evaluation will provide evidence in support of one of these a priori plausible explanations.


The proposed participant survey is a central part of information collection for this study. Indeed, it is the only viable means to elicit key information on study participants in the treatment and control groups, including data on their receipt of program services and subsequent outcomes of interest. A copy of the survey instrument is included as Appendix B.


Employment information provides a ready illustration of the survey’s central importance. With RExO implemented by 24 grantees across 18 states, some states may not be willing to provide the research team access to their unemployment insurance (UI) wage records. Some participants may hold jobs that are not covered by their state’s UI system (e.g., by being self-employed), which would limit the utility of those records. In addition, UI records provide no information about the consistency or quality of employment, such as whether an individual worked throughout a given period, whether s/he received benefits from his/her work, and what the hourly rate of pay was. Hence, we do not plan to collect UI data from states and will instead rely on the participant survey to obtain needed employment and earnings information.


Another component of the evaluation is a process study of the 24 grantees, aimed at documenting their program operations, including general patterns and specific characteristics. The site visits help the research team to learn about service provision, partnerships, and staffing. These visits include interviews with a number of RExO grantee staff members, including program directors, intake and recruitment personnel, and case managers, as well as key partners and non-RExO service providers in the communities.


2. Indicate how, by whom, and for what purpose the information is to be used. Except for a new collection, indicate the actual use the agency has made of the information received from the current collection.

The information will be used by staff members at DOL, other public and private entities, and the research community. It will help us understand and analyze the impacts of the program overall and for different target groups. In particular, the information will include service receipt, covering employment-oriented services, mentoring, housing, and substance abuse services; participant outcomes such as employment entry, earnings, and recidivism; and participant characteristics such as family status, number of children, English language ability, and barriers to employment. This information will help inform the design of future policy initiatives and contribute to the evidence base on serving formerly incarcerated individuals. Since we are proposing a new collection, there has been no use of the information to date.


3. Describe whether, and to what extent, the collection of information involves the use of automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submission of responses, and the basis for the decision for adopting this means of collection. Also describe any consideration of using information technology to reduce burden.

The research team will use computer-assisted telephone interviewing (CATI) for the survey. Generally, telephone interviews are more cost-effective and impose a lower burden on respondents than in-person interviews do.1 CATI is more cost-effective than paper-and-pencil interviewing for many reasons, including the fact that CATI programs accept only valid responses and can be programmed to check for logical consistency across answers. Interviewers are then able to correct errors during the interview, eliminating the need to call back respondents to obtain missing data. Furthermore, calls will be made through an auto-dialer, linked to the CATI system, which virtually eliminates dialing error. The automated call scheduler will simplify scheduling and rescheduling of calls to respondents at their convenience and can assign cases to specific interviewers, for example to those fluent in Spanish.
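As a simple illustration of the kind of range and logical-consistency checks a CATI program can enforce during the interview, consider the hypothetical validation sketch below. The item names and rules are illustrative only and are not drawn from the actual RExO survey instrument.

```python
# Hypothetical illustration of CATI-style validation; not the actual RExO survey program.
def validate_employment_items(worked_last_week: str, hours_last_week: int) -> list:
    """Return problems the interviewer must resolve before the interview can continue."""
    problems = []
    if worked_last_week not in ("yes", "no"):
        problems.append("Employment status must be recorded as 'yes' or 'no'.")
    if not 0 <= hours_last_week <= 168:
        problems.append("Hours worked last week must be between 0 and 168.")
    if worked_last_week == "no" and hours_last_week > 0:
        problems.append("Respondent reported no work last week but positive hours.")
    return problems

print(validate_employment_items("no", 20))  # flags the logical inconsistency
```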


4. Describe efforts to identify duplication. Show specifically why any similar information already available cannot be used or modified for use for the purposes described in item 2 above.

The information to be collected is not otherwise available. The abovementioned process and outcomes study did not collect any information on individuals who sought to receive RExO services but did not receive them (i.e., a control group), nor did it collect comprehensive data on employment-related and criminal justice outcomes as proposed for this evaluation, as it did not involve a survey of participants (Holl, et al., 2009).


5. If the collection of information impacts small businesses or other small entities (Item 5 of OMB Form 83-I), describe any methods used to minimize burden.

The collection of information is not expected to have a significant impact on small businesses or other small entities.


6. Describe the consequences to Federal program or policy activities if the collection is not conducted or is conducted less frequently, as well as any technical or legal obstacles in reducing burden.

If the information collection is not conducted, the impacts of RExO will not be known even approximately. Given that ex-offender programs continue to be funded across the country, including by DOL, failure to gather such information would be a foregone opportunity for funders and grantees alike to draw potentially useful and actionable program design and service delivery lessons.


7. Explain any special circumstances that would cause an information collection to be conducted in a manner:

  • requiring respondents to report information to the agency more often than quarterly;

  • requiring respondents to prepare a written response to a collection of information in fewer than 30 days after receipt of it;

  • requiring respondents to submit more than an original and two copies of any document;

  • requiring respondents to retain records, other than health, medical, government contract, grant-in-aid, or tax records, for more than three years;

  • in connection with a statistical survey, that is not designed to produce valid and reliable results that can be generalized to the universe of study;

  • requiring the use of statistical data classification that has not been reviewed and approved by OMB;

  • that includes a pledge of confidentiality that is not supported by authority established in statute or regulation, that is not supported by disclosure and data security policies that are consistent with the pledge, or which unnecessarily impedes sharing of data with other agencies for compatible confidential use; or

  • requiring respondents to submit proprietary trade secrets, or other confidential information unless the agency can demonstrate that it has instituted procedures to protect the information’s confidentiality to the extent permitted by law.

There are no plans to require respondents to report information more than quarterly, to prepare a written response to a collection of information within 30 days of receiving it, to submit more than one original and two copies of any document, to retain records, or to submit proprietary trade secrets. The survey will include only statistical data classifications that OMB has reviewed and approved. The informed consent form each participant has signed includes a pledge of privacy by the researchers that is supported by appropriate disclosure and data security policies (please see Appendix D).


Upon enrolling in the RExO program, participants were informed that their eligibility for benefits was predicated on an agreement to participate in any evaluation of the program that might be undertaken. The information provided at that time included an assurance of confidentiality that the agency has subsequently determined might not be supported in statute or regulation. This pledge will be repeated prior to administration of the survey, with assurances of “privacy” substituted for assurances of “confidentiality.”


No statistical sampling methods will be used; all data collection will be based on a 100 percent sample of the inference population. In all reports and other publications and statements resulting from this work, no attempt will be made to draw inferences to any population other than the set of units that responded to the data collection effort. A high response rate is nevertheless expected, because advanced methods will be used to locate and contact sample members and because the study team has achieved high response rates in past efforts of this type.


8. If applicable, provide a copy and identify the date and page number of publication in the Federal Register of the agency’s notice, required by 5 CFR 1320.8(d), soliciting comments on the information collection prior to submission to OMB. Summarize public comments received in response to that notice and describe actions taken by the agency in response to these comments. Specifically address comments received on cost and hour burden.

Describe efforts to consult with persons outside the agency to obtain their views on the availability of data, frequency of collection, the clarity of instructions and recordkeeping, disclosure, or reporting format (if any), and on the data elements to be recorded, disclosed, or reported.

Consultation with representatives of those from whom information is to be obtained or those who must compile records should occur at least once every 3 years – even if the collection of information activity is the same as in prior periods. There may be circumstances that may preclude consultation in a specific situation. These circumstances should be explained.


a. Federal Register Notice and Comments

The 60-day Federal Register notice was published on March 8, 2011 (Vol. 76, pp. 12758-12759). Please see Appendix F. No comments were received.


b. Consultations outside the Agency

There have been none.


9. Explain any decision to provide any payment or gift to respondents, other than remuneration of contractors or grantees.

Following best practices in the field, the research team will use several strategies to attain high response rates to the survey. They include sending a letter immediately before the survey to remind respondents about the study, using experienced and well-trained interviewers, and call scheduling to allow respondents to select the most convenient time for their interview.


As an additional strategy to encourage response and acknowledge that participation carries some burden, the research team plans to offer $40 payments to sample members who complete the telephone interviews. Our strategy of providing incentives for participation in surveys draws on the extensive literature pointing to the importance of incentives in helping to achieve high levels of cooperation. Incentives also reduce overall costs by lowering the burden involved in follow-up efforts using means such as in-person interviewing and extensive search for participants. As a result, OMB has approved the use of incentives in numerous other studies.


In chapter 4 of the National Academy of Sciences Studies of Welfare Populations: Data Collection and Research Issues, Singer and Kulka (2002) offer a thorough review of research on the use of incentives. They find that incentives significantly reduce nonresponse and are cost-effective, lowering the overall cost and burden for most surveys. Reviewing research findings on the use of incentives in telephone and face-to-face surveys, the authors conclude that: (1) incentives improve response rates; (2) the difference between the effects of prepaid incentives and promised incentives is not statistically significant; and (3) incentives have a significant effect in both low-burden and higher-burden surveys (op. cit., pp. 105-128).


A number of studies have also reported on the effects of incentives on sample composition and consequently on the potential for nonresponse bias. In some of these, incentives compensated for respondent lack of interest in the survey. Incentives have been shown to increase response rates for younger people (Dillman and Sangster, 1996), those with lower educational levels (Berlin et al., 1992), and low-income and minority respondents (James and Bolstein, 1990). In the National Survey of College Graduates, response rates for scientists and engineers substantially exceeded the average in the 1980s without incentives; however, their introduction in the 1990s narrowed the gap (Shettle and Mooney, 1999). Baumgartner and Rathbun (1997) found a significant impact of incentives on response rates for groups in which the survey topic had little salience, but virtually no impact in the high-salience group. In a review article, Singer and Kulka (2002) state that based on the literature, “certain kinds of dependent variables would be seriously mismeasured if incentives had not been used.”


A number of recent studies provide additional evidence that larger incentives can improve response rates and reduce overall survey costs. Singer and Kulka (2002) document specific results of incentive experiments from the 1996 panel of the Survey of Income and Program Participation, which showed that a $20 incentive significantly increased response rates, while a $10 incentive had no effect. A recent incentive experiment was conducted for DOL by SPRA as part of the Impact Evaluation of the Trade Adjustment Assistance Program. In an unpublished report, Schochet, Berk, and Nemeth (2008) found that response rates were significantly greater for sample members who received a $50 payment compared with those who received a $25 payment with the difference as high as fifteen percent.

Because this population is hard to reach, it will be imperative to offer a significant incentive. Given that we must rely upon a series of previously provided contact names and numbers (of family and friends) to locate and eventually interview our respondents, the size of the incentive will be critical in communicating the importance of their participation. The offer of $40, noted in our telephone scripts, will underline the importance of the contact for the respondent and thereby increase the likelihood that gatekeepers will provide updated information or pass messages along to the targeted individual.

Further, we had initially planned to contact individuals approximately one year after their entry into the study; because of delays, most respondents will now be contacted roughly two years after recruitment, raising substantial concern about response rates. Given this lengthy delay, the intrinsically difficult nature of surveying former offenders, and the fact that successful prior surveys of former offenders have used a $50 incentive, we believe the $40 incentive is necessary to gain participants’ attention and to ensure that a sufficient number of them agree to respond to the survey.


In addition to concerns about the cost of in-person interviews, the RExO survey also poses substantial cost challenges because the potential respondents are highly mobile and very difficult to locate. Hence, substantial costs will be borne in simply trying to track them down. Accordingly, we also seek to investigate the utility of offering a $15 bonus to those participants who, upon receiving an initial letter informing them they will be contacted soon to complete the interview, call in themselves to the toll-free lines to complete or schedule the interview. This ‘early bird incentive’ would be an attempt to reduce project costs for cooperative respondents so that we can devote the resources necessary for tracking and locating difficult cases. NORC has successfully used an ‘early bird incentive’ on the National Longitudinal Survey of Youth 1979, a major longitudinal survey conducted for the Bureau of Labor Statistics. We anticipate only a small percentage of respondents—approximately fifteen percent—will take advantage of this “early bird” incentive, but we do believe the savings generated by not having to track these individuals down will render this approach a cost effective one.


We propose to investigate the effectiveness of the early bird incentive by conducting an experiment during the first round of survey administration. To do so, during this wave of survey administration, we will randomly select one-half of all potential respondents to receive notice of the early bird incentive. The remaining half will not receive such notice (and will be ineligible to receive the additional funds). Logistically, we propose the following:


  • All treatment and control subjects will be offered a $40 incentive in the initial invitation letter for completion of the survey. All subjects will be provided the call-in 1-800 number and be asked to call in to complete the interview.

  • One-half of the sample will be offered an additional $15 Early Bird incentive if the respondent initiates the call in the first two weeks of the follow-up period and completes the interview. Prior to sampling, all subjects (treatment and controls) will first be organized by site; within each site one-half of the potential respondents will be selected to receive the incentive offer. The remaining half will not be eligible for this incentive. Selecting respondents within sites will help control for demographic or systemic variations that could alternatively account for rates of non-response (e.g., failure to call or locate due to recidivism).

  • Interviewers will be instructed to work cases normally, with no special focus on one group or the other, except for the monetary offer and its confirmation. The first two weeks of the eight-week follow-up period will involve only receipt of incoming calls; all cases will subsequently follow a similar plan with (1) call-outs to all respondent numbers not yet completed, (2) call-outs to other contact numbers, (3) online searching, (4) Accurint searching, and (5) field contacting.

  • Each day throughout the eight-week follow-up period for each round, the number (and percentage) of completed cases in the Early Bird and control groups will be plotted.

  • At each 10 percent increment of the completion rate (10 percent, 20 percent, 30 percent, etc.) for each group, the average number of record-of-calls entries for cases within the group will be determined. The record of calls documents each attempt to contact or locate a respondent and the result of that activity; this provides a general measure of the level of effort needed to work the cases to specific target completion rates. It will allow comparison of the level of effort required to obtain a 10 percent (20 percent, 30 percent, etc.) completion rate for the Early Bird respondents with the level of effort needed to obtain similar completion rates for the control group.

  • At the end of the project period, assuming noticeable differences are observed in the initial indicators, average lengths of time will be attributed to the different disposition codes in the records of calls (e.g., a disconnected number takes less time than a voice message left, and both take less time than a conversation or a subsequent interview). While such figures must be estimated, this conversion will allow the interviewer costs for the control group to be compared with the interviewer costs plus additional incentives for the Early Bird group, standardized at a similar completion rate. A simplified sketch of this level-of-effort comparison appears after this list.
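The level-of-effort comparison described in the last two bullets can be summarized schematically. The sketch below, which uses hypothetical case fields rather than the actual layout of NORC’s case-management data, computes the average number of record-of-calls entries per completed case at each 10 percent completion increment for a given group.

```python
# Illustrative sketch of the level-of-effort comparison described above; the case
# fields and data layout are hypothetical and not drawn from NORC's systems.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Case:
    group: str            # "early_bird" or "control"
    completed: bool       # whether the interview was completed in the follow-up period
    call_records: int     # number of record-of-calls entries logged for the case
    completion_day: int   # follow-up day on which the interview was completed

def effort_by_completion_increment(cases: List[Case], group: str) -> Dict[int, float]:
    """Average record-of-calls entries per completed case at each 10 percent completion increment."""
    group_cases = [c for c in cases if c.group == group]
    completed = sorted((c for c in group_cases if c.completed), key=lambda c: c.completion_day)
    effort = {}
    for pct in range(10, 101, 10):
        target = int(len(group_cases) * pct / 100)
        if target == 0 or target > len(completed):
            continue  # this completion rate was never reached during follow-up
        cutoff_day = completed[target - 1].completion_day
        worked = [c for c in completed if c.completion_day <= cutoff_day]
        effort[pct] = sum(c.call_records for c in worked) / len(worked)
    return effort

# effort_by_completion_increment(cases, "early_bird") can then be compared,
# increment by increment, with effort_by_completion_increment(cases, "control").
```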


Results from this experiment will be provided to OMB for its review to determine if, for the second wave of survey administration, all potential respondents should be offered the early bird incentive. Should the results clearly indicate that the incentive increased response rates and/or reduced the costs of survey administration, we will propose that OMB allow us to offer all respondents in the second round of the survey the early bird incentive. Should the results not indicate the incentive is effective, no respondents will be offered the incentive in the second wave of survey administration.


Our plan is to mail a check or postal money order for $40 (or $55) after the sample member completes the interview. Some research has shown that pre-paid incentives may increase response rates on mail surveys (Church 1993), but other research shows that, for telephone surveys, conclusions are less clear-cut (Singer et al. 1999). A pre-payment strategy also costs more since the payment would go to non-respondents as well as respondents. Hence, our plan is to reimburse respondents after they have completed the survey.


The incentives are expected to cost approximately $295,927, based on an 80 percent response rate, a fifteen percent response to the early bird incentive, and two waves of survey administration. Based on prior surveys conducted with the prisoner population, it is estimated that the additional survey costs that would be incurred if no incentives were used would exceed the actual cost of the incentives by approximately fifty percent.


10. Describe any assurance of confidentiality provided to respondents and the basis for the assurance in statute, regulation, or agency policy.

SPRA and its subcontractors will follow procedures consistent with provisions of the Privacy Act (5 U.S.C. § 552a) for assuring and maintaining privacy. Privacy agreements will be established with states and localities in the collection of administrative records. Item 7 explains the assurances provided at the time persons enrolled in the RExO program (Appendix D); assurances will also be provided in an advance letter describing the survey (see Appendix E) and again at the outset of the interview as part of the interviewer’s introductory comments. Respondents will be informed that all information they provide will be treated as private unless its release is required by law. Interviewers will be trained in privacy procedures and will be prepared to describe these procedures in full detail, if needed, or to answer any related questions respondents might raise.

All data items that identify respondents will be kept by SPRA and its subcontractors for use in assembling records data and conducting interviews. Any data submitted to DOL will not contain personal identifiers, precluding individual identification, unless SPRA is specifically ordered to do otherwise by Congress or a court ruling.

In addition, the following safeguards are routinely used by research team members to assure privacy in the collection and maintenance of survey data:

  • Access to research sample members’ personally identifiable information will be limited to individuals with direct responsibility for providing respondent names to interviewers.

  • Personally identifiable information will be kept in a file separate from interview data. The files will be linked only via participant identification numbers.

  • Access to files containing sample members’ identification numbers will be limited to the Project Director and the Principal Data Programmer.

  • Access to hard-copy documents will be strictly limited. Physical precautions will include use of locked files and cabinets, shredders for discarded materials, and interview instrument control procedures.


The research team will also use standard methods to guard against inadvertent disclosure.2 They include methods for handling tabular results of frequency and magnitude data and for preparing public use files. Only results with adequate statistical precision will be reported in tables, a standard that is more stringent than is strictly necessary to protect against inadvertent disclosure; we intend to exceed the guidelines below in most cases.

Tabular Results of Frequency Data. Following standard practices,3 for tabular results of frequency data, we will mitigate the risk of inadvertent disclosure by adhering to these two conditions:

  • No cell shall be reported if the number of respondents is less than ten and

  • No single cell shall solely account for a row or column total.

Tabular Results of Magnitude Data. For tabular results of magnitude data, we will require each cell’s value to be based on ten or more respondents and will only report data if the two respondents with the largest values for the measure in question contribute less than 60 percent of the cell’s total value.

Rows or columns will be combined, as necessary, until the conditions for frequency or magnitude data are met.
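To illustrate how the frequency and magnitude conditions above could be applied mechanically during table review, the sketch below encodes them as simple checks. It is an illustrative example under the stated rules, not the project’s actual disclosure-review software.

```python
# Simplified sketch of the cell-suppression rules described above; not the
# project's actual disclosure-review software.
from typing import List

def frequency_cell_ok(cell_count: int, row_total: int, col_total: int) -> bool:
    """A frequency cell may be reported only if it has ten or more respondents
    and does not by itself account for its entire row or column total."""
    if cell_count < 10:
        return False
    if cell_count == row_total or cell_count == col_total:
        return False
    return True

def magnitude_cell_ok(values: List[float]) -> bool:
    """A magnitude cell may be reported only if it is based on ten or more
    respondents and the two largest contributors supply less than 60 percent
    of the cell's total value."""
    if len(values) < 10:
        return False
    total = sum(values)
    if total <= 0:
        return True  # nothing dominates an empty or zero-valued cell
    top_two = sum(sorted(values, reverse=True)[:2])
    return top_two / total < 0.60
```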

Reporting Microdata. One of this project’s deliverables is a public use file of microdata. Following customary guidelines, we will implement the following safeguards against inadvertent disclosure:

  • No personal identifiers will be appended to any record

  • Geographic units will not be identified4

  • The prison or jail from which the individual was released will not be revealed to anyone outside key staff from the research team

  • Key information drawn from administrative data that could be used to identify an individual (including enrollment date, date of training, and date of exit) will be rounded (e.g., dates will be reported in mmyyyy format, rather than mmddyyyy format) and random perturbations will be applied and

  • Variables will be bottom-coded or top-coded, if extreme values are present.


11. Provide additional justification for any questions of a sensitive nature, such as sexual behavior and attitudes, religious beliefs, and other matters that are commonly considered private. This justification should include the reasons why the agency considers these questions necessary, the specific uses to be made of the information, the explanation to be given to persons from whom the information is requested, and any steps to be taken to obtain their consent.


The survey for the RExO evaluation contains items that may be considered sensitive in nature. These include questions about the sample member’s income from employment, other household income, public assistance receipt, health and substance abuse, and a small number of questions about criminal offending. Questions about income, public assistance receipt, and offending are necessary to construct key outcome measures for the study, since the primary goals of the program are to improve the long-term earnings and income of program participants and to reduce their recidivism. Questions about substance use help the research team establish the degree of the respondents’ reintegration into society outside prison, gauge possible need for services, and determine whether that need is met.


This study will involve the use of Social Security numbers and criminal justice or prison ID numbers to collect information from state and Federal agencies, possibly including respondent quarterly earnings and any involvement with the criminal justice system both before and after enrollment in the study. This information will be collected from the management information system (MIS), where its collection is authorized under OMB Control Number 1205-0455.


As described in item 10 above, all respondents will be assured their responses will be kept private at the outset of the interview, unless release of their information is required by law. All questions in the current survey, including those of a sensitive nature, have been pre-tested and used extensively in prior surveys with no evidence of harm.

12. Provide estimates of the hour burden of the collection of information. The statement should:

  • Indicate the number of respondents, frequency of response, annual hour burden, and an explanation of how the burden was estimated. Unless directed to do so, agencies should not conduct special surveys to obtain information on which to base hour burden estimates. Consultation with a sample (fewer than 10) of potential respondents is desirable. If the hour burden on respondents is expected to vary widely because of differences in activity, size, or complexity, show the range of estimated hour burden, and explain the reasons for the variance. Generally, estimates should not include burden hours for customary and usual business practices.


The total number of potential respondents for the RExO survey is 4,660. Assuming an 80 percent response rate in the first round and 70 percent in the second, the total expected number of respondents will be 3,728 and 3,262 respectively. NORC is an industry leader in the fielding of national and local surveys on numerous topics, including longitudinal surveys of hard-to-locate populations. This experience will enable the research team to obtain high response rates even with this challenging population.


Several prior projects illustrate NORC's ability to successfully survey highly mobile and low-income populations, including specifically ex-offenders, by understanding, anticipating, and responding to changes in respondent circumstances, including: the National Longitudinal Survey of Youth (NLSY), which after 23 rounds of data collection since 1979 had an 82 percent response rate in 2009; a Multisite Evaluation of Foster Youth Programs with final response rates by site that ranged from 89 to 92 percent for a 24-month interview; and Wave 6 of the Chicago-based Study of Adolescent Health where a 90.6 percent response rate was attained.


NORC has also conducted the General Social Survey (GSS) since 1972, which includes a set of diverse questions on social attitudes and behaviors, helping to elucidate the complex and evolving nature of American society and place it in a comparative perspective. NORC obtained an 83 percent response rate for the GSS in 2006 and a 78 percent response rate in 2008. Further, the Survey of Consumer Finances, sponsored by the Federal Reserve Board, is the only fully representative source of information on the broad financial circumstances of U.S. households; NORC obtained an 87 percent response rate in 2009. Finally, the Making Connections project, sponsored by the Annie E. Casey Foundation, examines mobility, social capital and networks, neighborhood processes, resident perceptions and participation, economic hardship, the availability and utilization of services, and child and adolescent well-being in poor urban communities. NORC averaged an 80 percent response rate across all participating sites in the recent wave 3.


Since a pilot test of the survey with fewer than ten respondents indicated that the interview took thirty minutes to complete, the estimate of burden to complete the first round of the survey is 1,864 hours. The second round of the survey yields an additional 1,631 hours. In total, the estimate of burden for two administrations of surveys is therefore 3,495 hours.


In addition, we are collecting administrative data on criminal justice involvement in each of the 18 states in which RExO was operating. We plan to collect these data at three separate points in time (in advance of the follow-up period, and again approximately two and three years after participants entered the study). While these data are publicly available, they can be obtained only on an individual basis, and one must pay a fee for each record. Hence, collecting the information on a statewide basis, with little to no fee, is a cost-effective means of obtaining these records. Nevertheless, this effort does involve some burden on those who maintain and extract the records within each state. Our pilot tests of this data collection indicated each state would require about 2 hours of time to retrieve these records. Thus, the burden estimate also includes a total of 108 hours for this data collection (18 states, 2 hours per state, 3 different points of data extraction).
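The burden-hour figures above follow directly from the stated assumptions (the respondent counts, a thirty-minute interview, and two hours per state for each of three record extractions), as the short calculation below illustrates.

```python
# Reproduces the burden-hour estimates from the assumptions stated above.
round1_respondents = 3728   # 80 percent of the 4,660 study participants
round2_respondents = 3262   # 70 percent of the 4,660 study participants
interview_hours = 0.5       # thirty-minute interview

survey_hours_round1 = round1_respondents * interview_hours   # 1,864 hours
survey_hours_round2 = round2_respondents * interview_hours   # 1,631 hours
admin_hours = 18 * 2 * 3    # 18 states x 2 hours per state x 3 extractions = 108 hours

print(survey_hours_round1 + survey_hours_round2 + admin_hours)  # 3,603 hours in total
```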


TABLE 2

ESTIMATE OF BURDEN HOURS AND COSTS

Information Collection Activity      Total Respondents   Frequency   Average Time per Response   Burden Hours

Impact component
  12-month survey                    3,728               Once        30 minutes                  1,864
  36-month survey                    3,262               Once        30 minutes                  1,631

Administrative Records Collection    18                  3           120 minutes                 108

Total (unduplicated)                 3,752                                                       3,603


  • If this request for approval covers more than one form, provide separate hour burden estimates for each form and aggregate the hour burdens in Item 13 of OMB Form 83-I.


This request for approval covers more than one form. As stated, the survey form involves 3,495 total burden hours, and the administrative data collection involves 108 burden hours.


  • Provide estimates of annualized cost to respondents for the hour burdens for collections of information, identifying and using appropriate wage and rate categories. The cost of contracting out or paying outside parties for information collection activities should not be included here. Instead, this cost should be included in Item 13.


As noted above, the total estimate of burden for two rounds of surveys is 3,495 hours. At an average wage of $10 per hour for the first wave of the survey, this represents a total cost of $18,640. Using an hourly rate of $11, the estimated burden for the second wave of the survey is $17,941. These burden costs are greatly surpassed by the incentive payments to participants, which are estimated at approximately $295,927 for the two rounds (as detailed in item 9 above). The estimated burden for the administrative data collection is $3,888, calculated as 108 hours times $36 per hour, an estimate of the average wage for administrative database staff. The average wage of $36 was calculated using the average hourly wage for database administrators, as reported by the Bureau of Labor Statistics (for May 2010; http://www.bls.gov/oes/current/oes_nat.htm#21-0000). The total estimate of burden is thus $40,469.
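The dollar figures can be reproduced from the burden hours and the hourly rates cited above and shown in the table that follows; the short calculation below illustrates the arithmetic.

```python
# Reproduces the burden-cost estimates from the hours and hourly rates cited above.
cost_round1 = 1864 * 10   # first-wave survey hours at $10 per hour  -> $18,640
cost_round2 = 1631 * 11   # second-wave survey hours at $11 per hour -> $17,941
cost_admin = 108 * 36     # administrative records hours at $36 per hour -> $3,888

print(cost_round1 + cost_round2 + cost_admin)   # $40,469 total burden cost
```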


Information Collection Activity      Total Respondents   Annual Burden   Average Cost per Hour   Annual Cost of Burden Hours

Impact component
  12-month survey                    3,728               1,864 hours     $10                     $18,640
  36-month survey                    3,262               1,631 hours     $11                     $17,941

Administrative Data Collection       18                  108 hours       $36                     $3,888

Total                                                    3,603 hours                             $40,469




13. Provide an estimate for the total annual cost burden to respondents or recordkeepers resulting from the collection of information. (Do not include the cost of any hour burden shown in Items 12 and 14).

  • The cost estimate should be split into two components: (a) a total capital and start-up cost component (annualized over its expected useful life) and (b) a total operation and maintenance and purchase of services component. The estimates should take into account costs associated with generating, maintaining, and disclosing or providing the information. Include descriptions of methods used to estimate major cost factors including system and technology acquisition, expected useful life of capital equipment, the discount rate(s), and the time period over which costs will be incurred. Capital and start-up costs include, among other items, preparations for collecting information such as purchasing computers and software; monitoring, sampling, drilling and testing equipment; and record storage facilities.


The proposed information collection will not require that respondents purchase equipment or services or establish new data retrieval mechanisms. Survey content draws on opinions and factual information that is presumed to be readily available to respondents. Therefore, the sole cost to respondents is the value of the time they spend on answering the survey or interview questions. In particular:

(a) We do not expect any capital and start-up costs.

(b) We do not expect respondents to spend much time on generating, maintaining, disclosing or providing the information.


  • If cost estimates are expected to vary widely, agencies should present ranges of cost burdens and explain the reasons for the variance. The cost of purchasing or contracting out information collections services should be a part of this cost burden estimate. In developing cost burden estimates, agencies may consult with a sample of respondents (fewer than 10), utilize the 60-day pre-OMB submission public comment process and use existing economic or regulatory impact analysis associated with the rulemaking containing the information collection, as appropriate.


We do not expect wide variances in the cost estimates for conducting this information collection.


  • Generally, estimates should not include purchases of equipment or services, or portions thereof, made: (1) prior to October 1, 1995, (2) to achieve regulatory compliance with requirements not associated with the information collection, (3) for reasons other than to provide information or keep records for the government, or (4) as part of customary and usual business or private practices.


We do not expect survey respondents to purchase equipment or services in order to respond to this information collection effort.


14. Provide estimates of annualized costs to the Federal government. Also, provide a description of the method used to estimate cost, which should include quantification of hours, operational expenses (such as equipment, overhead, printing, and support staff), and any other expense that would not have been incurred without this collection of information. Agencies may also aggregate cost estimates from Items 12, 13, and 14 in a single table.


The total cost to the Federal government of carrying out this study is $6,894,421, to be expended over its five-year period of performance. Of this, $3,060,423 is due to the administration of and data collection for the survey, including $295,927 for incentives to respondents.6 An additional $888,249 is due to the analysis and reporting of the data collected. The remaining $2,945,749 is due to other parts of the study, notably design, implementation and monitoring of random assignment.


15. Explain the reasons for any program changes or adjustments reported in Items 13 or 14 of the OMB Form 83-I.

This one-time request is new and will contribute 3,603 additional hours toward ETA’s information collection burden.


16. For collections of information whose results will be published, outline plans for tabulation and publication. Address any complex analytical techniques that will be used. Provide the time schedule for the entire project, including beginning and end dates of the collection of information, completion of report, publication dates, and other actions.


A. Tabulations

Information will be collected and tabulated in two broad areas: (1) the program’s net impacts on employment, earnings, and recidivism and (2) variation of these impacts across subgroups of participants and grantee types. Specific tabulations will reflect the multiple types of analyses discussed below. Results will be presented and interpreted in the context of community characteristics and other relevant factors.


B. Analytic Approaches

Overall Analysis. The impact analysis will begin with establishing the extent to which the outcomes of RExO program participants differ from those of the control group, which had access to other community services but not to RExO.


Our impact analysis will employ methods that are appropriate and accessible. Because the two randomly assigned groups exhibit similar socioeconomic, demographic and criminal history characteristics and differ only along the dimension of interest (RExO service receipt), we will primarily compare the averages and distributions of the outcome variables between them. Standard statistical tests such as the two-group t-test (for continuous variables) or chi-square tests (for categorical measures and distributions) will be used to determine whether estimated effects are statistically significant at the 1, 5, or 10 percent level (Greene, 1999).7


Since we will analyze multiple outcomes, we will explore the possibility of adjusting estimates to account for the multiplicity of hypotheses. One option is to use the Bonferroni correction (Darlington, 1990).8  This correction, however, is quite conservative in that it makes it rather difficult to reject the null hypothesis and find a significant difference between the groups.  Accordingly, we also plan to consider less conservative techniques, including Sidak's correction (which assumes that the various tests are independent of one another),9 sequential Bonferroni correction methods (such as Holm's or the Simes-Hochberg methods, which eliminate rejected hypotheses from the number of comparisons, thereby increasing the power of the tests), or the false discovery rate, originally discussed by Benjamini and Hochberg (1995).10
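To make the differences among these corrections concrete, the sketch below applies the Bonferroni, Holm, and Benjamini-Hochberg adjustments to an illustrative set of p-values. It is a generic, textbook-style implementation offered for illustration only, not the evaluation’s analysis code.

```python
# Generic illustration of the multiple-comparison adjustments discussed above;
# not the evaluation's analysis code.
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H0 where p < alpha / m (the most conservative adjustment)."""
    p = np.asarray(pvals)
    return p < alpha / len(p)

def holm(pvals, alpha=0.05):
    """Sequential (step-down) Bonferroni: compare the k-th smallest p-value
    with alpha / (m - k), stopping at the first failure."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    reject = np.zeros(len(p), dtype=bool)
    for k, idx in enumerate(order):
        if p[idx] < alpha / (len(p) - k):
            reject[idx] = True
        else:
            break
    return reject

def benjamini_hochberg(pvals, alpha=0.05):
    """False discovery rate control: find the largest k with p_(k) <= (k/m)*alpha
    and reject all hypotheses with p-values at or below that cutoff."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    if not below.any():
        return np.zeros(m, dtype=bool)
    cutoff = p[order][np.max(np.where(below))]
    return p <= cutoff

pvals = [0.001, 0.008, 0.039, 0.041, 0.20]
print(bonferroni(pvals), holm(pvals), benjamini_hochberg(pvals))
```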


We will use regression adjustment to increase the power of statistical tests, while closely monitoring any implications it may have for impact estimates. Where appropriate, we will explore more sophisticated statistical methods such as discrete choice regression for categorical outcomes (Maddala, 1986); Poisson regression for outcomes that can be counted (Amemiya, 1985); spell analyses (Lancaster, 1990); and panel data methods for outcomes that are measured at several points in time such as quarterly earnings (Hsiao, 1990).


Because we are primarily interested in the average effect of RExO for the 24 grantees that were part of the initial funding for the program (all of which are included in our study), and are not trying to predict what the effects would be if some alternative grantee implemented the program, we will include fixed effects for each grant program in our regression specification.
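As an illustration of a regression-adjusted impact estimate with grantee fixed effects of the kind described above, the sketch below fits such a model using the statsmodels package; the variable and file names are hypothetical assumptions and are not drawn from the study’s analysis files.

```python
# Illustrative regression-adjusted impact estimate with grantee (site) fixed effects;
# the variable and file names are hypothetical, not the study's actual analysis files.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per study participant with the follow-up outcome
# (earnings), the random assignment indicator (treatment), baseline covariates,
# and a grantee identifier (site) entered as fixed effects via C(site).
df = pd.read_csv("rexo_analysis_file.csv")  # hypothetical file name

model = smf.ols("earnings ~ treatment + age + prior_convictions + C(site)", data=df)
results = model.fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(results.params["treatment"], results.pvalues["treatment"])
```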


Variation by subgroup. We will estimate impacts for key subgroups defined by age, race/ethnicity, gender, and criminal history. We will estimate subgroup impacts in three ways. First, we will use “split-sample” subgroup analyses; under this approach, the sample is divided into mutually exclusive groups, and impacts are separately estimated for each group. In addition to determining whether the intervention had statistically significant effects for each subgroup, Tukey-Kramer q-statistics are used to determine whether impacts differ significantly across subgroups (Hedges and Olkin, 1985).11 A related type of subgroup analysis uses regression methods to see if the effects of the intervention vary significantly with a continuous baseline measure (or one that takes on many values) such as age. Finally, we will employ “conditional” subgroup analyses, which take the regression approach one step further by controlling for the effects of other baseline characteristics when estimating the relationship between a particular subgroup and program effects. For example, in estimating whether the programs have larger effects for older sample members, conditional subgroup analysis controls for gender, type of offense, criminal history, and so on.12 By estimating the impacts by subgroups using multiple approaches, we can ensure that the findings from these analyses are robust under different sets of assumptions that underlie the differing methods.
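The split-sample and interaction-based subgroup approaches described above can be sketched as follows; again, the variable and file names are hypothetical illustrations rather than the study’s actual analysis code.

```python
# Illustrative subgroup analyses following the approaches described above;
# variable and file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rexo_analysis_file.csv")  # hypothetical participant-level file

# Split-sample approach: estimate the impact separately within each subgroup.
for label, subset in df.groupby("gender"):
    fit = smf.ols("earnings ~ treatment + C(site)", data=subset).fit()
    print(label, fit.params["treatment"])

# Interaction approach: test whether the impact varies with a continuous
# baseline measure such as age at random assignment.
interaction = smf.ols("earnings ~ treatment * age + C(site)", data=df).fit()
print(interaction.params["treatment:age"], interaction.pvalues["treatment:age"])
```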


Variation in impacts by sites. We expect that the main analysis will pool data from all participating sites; sample sizes in the individual sites are too small to permit site-specific impact analysis. Nevertheless, we are aware that future program operators will benefit from information on how impacts vary with the interventions that are used, local labor market conditions, services available to control group members, and the characteristics of program group members. The impact study will address these issues by relating a few specific programs, policies, and management practices—determined a priori— to program impacts.13 Our first step will be to determine whether there is significant variation in impacts across sites using standard tests from the literature on multi-level modeling (e.g., Bryk and Raudenbush, 1992). If there is variation, we would then use multi-level methods to explore the relationship between site and program characteristics and site-level impacts.14 In doing so, we would focus on a small number – perhaps three or four – of the most important factors revealed in the implementation research because 24 sites will provide limited statistical precision in estimating the association between site impacts and site characteristics.15 An example of this approach can be found in one of our earlier random assignment studies (Bloom et al., 2007), which examined how program impacts in welfare-to-work programs vary with management practices across 59 sites, controlling for individual and site characteristics.

The few site-level characteristics we are likely to use are those on which there is clear variation across the sites, and each site is well-defined in terms of where it would fall on the given characteristic. Specific site-level characteristics to be examined include: immediate job placement emphasis versus job readiness training (as measured by the length of the work readiness component of the program, which varies from a couple of hours to several weeks); the intensiveness of mentoring (which varies from all participants receiving substantial individual mentoring to loosely-based group mentoring “events” at which attendance is optional); the level of screening grantees used prior to enrolling a participant (which ranged from a simple check that individuals filled out paperwork to requiring attendance at multiple events); and the intensity of case management (which ranged from optional sessions at the participants’ discretion to required meetings more than once per week to address outstanding issues and concerns). We must stress, however, that estimates of the effects of program components would be non-experimental and not as certain as if sites were randomizing individuals to one component or another.
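A minimal sketch of the multi-level approach to site-level variation, assuming a single hypothetical site characteristic (the intensity of case management) and hypothetical variable names, is shown below; it is not the evaluation’s analysis code.

```python
# Minimal sketch of a two-level model relating site-level impact variation to a
# single site characteristic; variable and file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rexo_analysis_file.csv")  # hypothetical participant-level file

# A random treatment slope by site captures cross-site variation in impacts;
# the treatment-by-characteristic interaction asks whether that variation is
# associated with, for example, the intensity of case management at the site.
model = smf.mixedlm(
    "earnings ~ treatment * case_mgmt_intensity",
    data=df,
    groups=df["site"],
    re_formula="~treatment",
)
print(model.fit().summary())
```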

C. Publication Plans

Publication plans for the RExO evaluation are as follows:

  • Interim Report. This report presents a summary of findings from the implementation of random assignment, participant characteristics, and the process study, including the implementation of the program, variation in services, and the community context across all 24 grantees. A revised draft will be delivered in 2011.

  • Impact Report. The impact report will present a comprehensive synthesis of evaluation findings for the first year after participants’ entry into the study, drawing on the results of the first survey round. It will summarize the implementation of random assignment, draw on information from administrative data and the participant survey, and include impact estimates on all key outcome measures. A draft will be submitted to ETA in September 2012, and the final version will be submitted by November 2012.

  • Longer-Term Impact Report. The longer-term impact report will extend the impact analysis to cover the three-year period following participants’ enrollment and draw on the results of the second round of the survey. A draft report will be submitted to ETA in April 2014, and a final version will be submitted by June 2014.


D. Project Schedule

The evaluation began in July 2009 and has a projected end date of June 2014. The timing of key activities is shown in Table 3.

TABLE 3

SCHEDULE FOR THE RExO EVALUATION

Activity                                                Time Period

Study Design                                            July 2009 – January 2011

Implementation and Monitoring of Random Assignment
  Implementation                                        January – April 2010
  Monitoring                                            February 2010 – January 2011

Collection of Administrative Data
  Round 1                                               November 2011 – August 2012
  Round 2                                               January 2014 – April 2014

Collection of Survey Data
  Round 1                                               December 2011 – February 2012
  Round 2                                               March 2013 – February 2014

Analysis and Reporting
  Interim Report                                        October 2011
  Impact Report                                         September – November 2012
  Longer-Term Impact Report                             April – June 2014



17. If seeking approval to not display the expiration date for OMB approval of the information collection, explain the reasons that display would be inappropriate.

We are not seeking approval to not display the expiration date.


18. Explain each exception to the certification statement identified in Item 19, Certification for Paperwork Reduction Act Submissions, of OMB Form 83-I.

There are no exceptions to the certification statement.

References

Amemiya, T. (1985). Advanced econometrics. Cambridge, MA: Harvard University Press.

Baumgartner, R. and Rathbun, P. (1997). Prepaid monetary incentives and mail survey response rates. Paper presented at the Annual Conference of the American Association of Public Opinion Research, Norfolk, Virginia.

Benjamini, Y. and Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B, 57, 289-300.

Berlin, M., et al. (1992). An experiment in monetary incentives. In Proceedings of the Section on Survey Research Methods, 393-398. Alexandria, VA: American Statistical Association.

Bloom, D., Redcross, C., Zweig, J. and Azurdia, G. (2007). Transitional jobs for ex-prisoners. Early impacts from a random assignment evaluation of the Center for Employment Opportunities (CEO) prisoner reentry program. Working paper. New York: MDRC.

Bryk, A. and Raudenbush, S.  (1992).  Hierarchical linear models.  Newbury Park, CA: Sage Publications.

Church, A.H. (1993). Estimating the effects of incentives on mail response rates: A meta-analysis. Public Opinion Quarterly, 57, 62-79.

Cooper, H.M., and Hedges, L.V. (Eds.). (1994). The handbook of research synthesis. New York: Russell Sage Foundation.

Darlington, R.B. (1990). Regression and linear models. New York: McGraw-Hill.

Dillman, D. and Sangster, R.L. (1996). Understanding differences in people’s answers to telephone and mail surveys. In M.T. Braverman and J.K. Slater (Eds.), New directions for evaluation series, 70, 45-62.

Dillman, D.A., Phelps, G., Tortora, R., Swift, K., Kohrell, J., and Berck, J. (2001). Response rate and measurement differences in mixed mode surveys using mail, telephone, interactive voice response and the internet. Draft paper. http://survey.sesrc.wsu.edu/dillman/papers/Mixed%20Mode%20ppr%20_with%20Gallup_%20POQ.pdf

Greenberg, D., Meyer, R., Michalopoulos, C., and Wiseman, M. (2003). Explaining variation in the effects of welfare-to-work programs. Evaluation Review, no. 4.

Greenberg, D., Michalopoulos, C., and Robins, P. K.  (2005).  A meta-analysis of government-sponsored training programs. Industrial and Labor Relations Review, 57(1), 31-53.

Greene, W.H. (1999). Econometric analysis (4th ed.). New York: Prentice-Hall.

Hamilton, G., et al. (2001). How effective are different welfare-to-work approaches? Five-year adult and child impacts for eleven programs. Report prepared for U.S. Department of Health and Human Services and U.S. Department of Education.

Hedges, L.V. and Olkin, I. (1985). Statistical methods for meta-analysis. Boston: Academic Press.

Holl, D. B., Kolovich, L., Bellotti, J. and Paxton, N. (2009). Evaluation of the Prisoner Re-Entry Initiative: final report. Bethesda, Maryland: Coffey Consulting, LLC and Mathematica Policy Research, Inc. Prepared for U.S. Department of Labor, Employment and Training Administration.

Holzer, H., Raphael, S., and Stoll, M. (2007). The effect of an applicant’s criminal history on employer hiring decisions and screening practices: evidence from Los Angeles. In S. Bushway, M. A. Stoll, and D. F. Weiman (Eds.), Barriers to reentry? The labor market for released prisoners in post-industrial America. New York: Russell Sage Foundation.

Hsiao, C. (1990). Analysis of panel data. Cambridge: Cambridge University Press.

James, J. and Bolstein, R. (1990). The effect of monetary incentives and follow-up mailings on the response rate and response quality in mail surveys. Public Opinion Quarterly, 54, 346-361.

Lancaster, T.  (1990).  The econometric analysis of transition data.  Cambridge: Cambridge University Press.

Langan, P., and Levin, D. (2002). Recidivism in Prisoners Released in 1994. Washington, DC: Bureau of Justice Statistics.

Lattimore, P. K. and Steffey, D. M. (2009). The Multi-site evaluation of SVORI: methodology and analytic approach. Research Triangle Park: RTI.

Maddala, G. S. (1986). Limited-dependent and qualitative variables in econometrics. Cambridge: Cambridge University Press.

Office of Management and Budget. (1994). Report on statistical disclosure limitation methodology. Prepared by Subcommittee on Disclosure Limitation Methodology, Statistical Policy Office. Statistical Policy Working Paper 22.

Petersilia, J. (2003). When prisoners come home: parole and prisoner reentry. Oxford: Oxford University Press.

Sabol, W.J., West, H.C. and Cooper, M. (2009). Prisoners in 2008. Bureau of Justice Statistics Bulletin.

Shettle, C. and Mooney, G. (1999). Monetary incentives in government surveys. Journal of Official Statistics, 15, 231-50.

Singer, E., Groves, R. M., and Corning, A.D. (1999). Differential incentives: beliefs about practices, perceptions of equity, and effects on survey participation. Public Opinion Quarterly, 63, 251-60.


Singer, E. and Kulka, R. A. (2002). Paying respondents for survey participation. In M. Ver Ploeg, R. A. Moffitt, and C. F. Citro (Eds.), Studies of welfare populations: data collection and research issues. Panel on data and methods for measuring the effects of changes in social welfare programs (pp. 105-28). Washington, DC: National Academy Press.

Urban Institute, Justice Policy Center. (2006). Understanding the challenges of prisoner reentry: research findings from the Urban Institute’s prisoner reentry portfolio. Washington, DC.

Western, B., Kling, J.R., and Weiman, D.F. (2001). The labor market consequences of incarceration. Crime and Delinquency, 47(3), 410-427.

1 Dillman, Phelps, Tortora, Swift, Kohrell, and Berck. 2001.

2 See Office of Management and Budget, Statistical Policy Working Paper 22 (1994).

3 Ibid.

4 A standard practice is that units of geography should be reported at a high enough level of aggregation so that there are no fewer than 100,000 individuals in the sampling frame in that unit. No single grantee would meet this criterion in this study.

5 The average wage at placement for participants enrolled in the previous study of this program was $9.29 in 2008. That population was slightly less educated than this research sample, with fewer members having earned a high school diploma or GED. Thus, an average placement wage of $10 (and $11 in the second round) is a realistic assumption for calculating the average wage for this population.

6 This cost includes an estimated fifteen percent of respondents collecting the “early bird” incentive.

7 The chi-squared test is derived from: χ² = Σ (O – E)²/E, where O and E are the observed and expected cell counts, while the t-test is derived from: t = (MT – MC)/√(VarT/nT + VarC/nC).
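
For illustration only, these two tests could be computed as in the sketch below; the data file, column names, and the use of SciPy are assumptions rather than part of the study's specifications.

    # Minimal sketch: t-test for a continuous baseline characteristic and chi-squared
    # test for a categorical one, comparing treatment and control groups.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("baseline_data.csv")          # hypothetical baseline file
    treat = df[df["treatment"] == 1]
    ctrl = df[df["treatment"] == 0]

    # Unequal-variance t-test, matching the formula above.
    t_stat, t_p = stats.ttest_ind(treat["age"], ctrl["age"], equal_var=False)

    # Pearson chi-squared test on the treatment-by-category contingency table.
    table = pd.crosstab(df["treatment"], df["offense_type"])
    chi2, chi_p, dof, expected = stats.chi2_contingency(table)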

8 The Bonferroni correction is given by: adjusted p-value = k × (unadjusted p-value), where k is the number of tests conducted. In simplest terms, this correction multiplies the observed probability of a specific test by the number of tests. Thus, if the probability of a test is .012 but there are ten tests being conducted, the Bonferroni correction would yield a probability level of .12.

9 In Sidak’s correction, the adjusted p-value is equal to 1 – (1 – unadjusted p-value)^k, where k is the number of comparisons being made.

10 The false discovery rate is given by: E[V/(V+S)] = E[V/R], where V is the number of false positives, S is the number of true positives, and R = V + S is the total number of rejected hypotheses (an observable random variable).
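
The corrections described in footnotes 8 through 10 are all available through a single routine in statsmodels; the sketch below, with hypothetical p-values, is illustrative only.

    # Minimal sketch: Bonferroni, Sidak, and Benjamini-Hochberg (FDR) adjustments.
    from statsmodels.stats.multitest import multipletests

    pvals = [0.012, 0.030, 0.045, 0.200, 0.510]    # hypothetical unadjusted p-values

    _, p_bonf, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
    _, p_sidak, _, _ = multipletests(pvals, alpha=0.05, method="sidak")
    reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

    print(p_bonf[0])    # 0.012 * 5 tests = 0.06
    print(p_sidak[0])   # 1 - (1 - 0.012)**5, approximately 0.059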

11 This statistic is given by: qT √(MSS/A / n), in which qT is the studentized range statistic, MSS/A is the mean square error from the overall F-test, and n is the sample size for each group.
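
If the statistic described here is Tukey’s honestly-significant-difference criterion, as the studentized range notation suggests, it could be computed as in the sketch below; the data file and column names are assumed.

    # Minimal sketch: Tukey HSD pairwise comparisons across study groups.
    import pandas as pd
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    df = pd.read_csv("outcomes_by_group.csv")      # hypothetical file
    result = pairwise_tukeyhsd(endog=df["earnings"],
                               groups=df["study_group"], alpha=0.05)
    print(result.summary())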

12 In notation, the basic impacts are calculated from a regression of the form yi = α + β1Ei1 + β2Ei2 + δXi + εi, where yi is the outcome for individual i, Eij equals one for those assigned to alternative j (j can be 1 or 2) and zero otherwise, and Xi is a set of baseline characteristics. The parameter β1 measures the effect on program group 1, β2 measures the effect on program group 2, and the difference β1 – β2 measures the difference in effects of the two alternatives. For subgroup analysis with a continuous subgroup measure, the regression would take the form yi = α + β1Ei1 + β2Ei2 + γ1Zi Ei1 + γ2Zi Ei2 + δXi + εi, where Zi is the baseline characteristic for which subgroup impacts are being estimated; γ1 and γ2 indicate how impacts vary with that characteristic. Conditional subgroup analysis can be represented by the equation yi = α + β1Ei1 + β2Ei2 + γ1Zi Ei1 + γ2Zi Ei2 + δ1Xi Ei1 + δ2Xi Ei2 + εi.
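
A minimal sketch of these regressions, using ordinary least squares with assumed variable names (y, E1, E2, Z, and two baseline covariates), is shown below; it mirrors the notation of this footnote rather than the study's actual programs.

    # Minimal sketch: basic impact regression and subgroup regression from footnote 12.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("rexo_analysis_file.csv")     # hypothetical analysis file

    # Basic impacts: coefficients on E1 and E2 estimate the effects of the two alternatives.
    basic = smf.ols("y ~ E1 + E2 + age + prior_convictions", data=df).fit()

    # Subgroup analysis: Z interacted with each assignment indicator (gamma_1, gamma_2).
    subgroup = smf.ols("y ~ E1 + E2 + Z:E1 + Z:E2 + age + prior_convictions",
                       data=df).fit()
    print(basic.params)
    print(subgroup.params)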

13 Outcomes can be thought of as having four parts: an average outcome level, a treatment effect, individual-level idiosyncratic factors, and site-level idiosyncratic factors (such as the quality and availability of local employment supports). In notation, yij = αj + βEi + εi + uj, where β is the impact of the RExO intervention, εi represents idiosyncratic individual-level characteristics, and uj represents unobserved site-level characteristics. If the variances of the idiosyncratic components are V(εi) = σ² and V(uj) = τ², then including n control group members and n program group members drawn from J sites would yield an estimated impact with a variance of 2σ²/n + τ²/J (see, e.g., Greenberg, Michalopoulos, and Robins 2005). Adding sites increases precision by reducing the influence of any one site on the estimated impact, thus reducing the effect of that site’s special characteristics.
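
As a purely numerical illustration of this variance expression, with assumed (not study-specific) variance components and sample sizes:

    # Minimal sketch: variance of the estimated impact, 2*sigma^2/n + tau^2/J.
    sigma2 = 1.0     # individual-level variance (assumed)
    tau2 = 0.05      # site-level variance in impacts (assumed)
    n = 2000         # control group members (and program group members) overall (assumed)
    J = 24           # number of participating sites
    var_impact = 2 * sigma2 / n + tau2 / J
    print(var_impact)    # 0.001 + ~0.0021 = ~0.0031

Holding the total sample fixed, increasing J shrinks the second term, which is the sense in which adding sites improves precision.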


14 Greenberg, Meyer, Michalopoulos, and Wiseman (2003) show that, under reasonable assumptions, the precision of the estimated relationship between a site-level characteristic and site-level impacts increases as a given sample is spread across more sites. They conclude that for an evaluation like the National Evaluation of Welfare-to-Work Strategies (Hamilton et al. 2001), balancing added site costs against the ability to understand the influence of local context would have resulted in an evaluation that spread the sample of about 40,000 individuals over nearly 1,000 sites. In reality, the study’s 40,000 participants came from only 7 counties and about 20 local welfare offices. Although mandatory welfare-to-work programs are very different from the intervention being studied here, the essential points of their analysis still hold: (1) spreading a given sample across more sites provides greater ability to estimate the relationship between site characteristics and site impacts, and (2) more sites are needed to detect this relationship if we are interested in the effects of more site characteristics.

15 The number of sites will limit such models to only a few site characteristics, whereas the number of individuals will make it possible to control for numerous individual characteristics. Note that, although estimates of program impacts for each site may be fully experimental, analyses of the factors related to variations in impacts across sites must be non-experimental.



