THE RENT REFORM DEMONSTRATION
SUPPORTING STATEMENT – PART B
OMB CLEARANCE PACKET
8-14-2014
Submitted to:
U.S. Department of Housing and Urban Development
Contract No: DU205NC-12-D-02
Task Order: DU205NC-12-D-02-001
B. COLLECTION OF INFORMATION USING STATISTICAL METHODS
B1. Respondent Universe, Sampling Selection, and Expected Response Rates
The Rent Reform Demonstration sample will include four MTW public housing agencies (PHAs) and at least 2,000 Housing Choice Voucher (HCV) households at each of three sites and 1,400 HCV households at the fourth site. Random assignment procedures will be used to divide these households into a program group and a control group. In working with each site, MDRC is determining whether exceeding these sample sizes, which would strengthen the evaluation, is feasible.
The alternative rent policy will apply only to HCV recipients. Eligible sample members will include only voucher holders whose vouchers are administered under the Moving to Work (MTW) demonstration. Non-MTW vouchers (i.e., Veterans Assisted Special Housing, Moderate Rehabilitation, and Shelter Plus Care), Enhanced Vouchers, and Project-Based Vouchers are excluded from the study. Additionally, the study is focused on work-able populations and will not include elderly households, disabled households, households headed by people older than 56 years of age (who will become seniors during the course of the long-term study), or households whose head does not have legal working status in the U.S. Households that are receiving a child care deduction at the time of random assignment will also be excluded from the study. Lastly, households currently participating in Family Self-Sufficiency or homeownership programs will not be included in the study.
At this time, we assume that although the new rent policy will be designed to be applicable to elderly and disabled tenants, and although the housing authorities may choose to apply the policy to them during the period of the demonstration, those tenants will not be included in the research sample. The reason is that we do not hypothesize that the new rent policies will have a substantial effect on work outcomes for this group. Furthermore, if the study is limited to the minimum sample sizes indicated above (i.e., 800 voucher holders in total per site), including elderly and disabled households would reduce the sample sizes and statistical power available for estimating the effects on the working-age, nondisabled population for whom the new rent model is likely to have its largest employment effects.
The research team expects sample build-up to take no more than 12 months; depending on how sites decide to roll out the new rent policy (within a condensed period, for example), it may take less time.
Number of MTW housing authorities
The demonstration will include four MTW PHAs. Currently, MDRC is developing the alternative model with the District of Columbia Housing Authority, the Louisville Metropolitan Housing Authority, the Lexington Housing Authority, and the San Antonio Housing Authority. These MTW PHAs have submitted letters of interest to MDRC, committing to work through the details of how the Rent Reform model and research design would be implemented at their sites, and to ultimately join the study if these details can be worked through to all parties’ satisfaction. In addition, MDRC and the PHAs have signed initial data-sharing agreements in order to begin testing the data that will be used for random assignment and to conduct early rounds of random assignment. Each PHA will also sign a memorandum of understanding (MOU) covering activities and roles for the term of the evaluation.
Criteria for HA Selection
Building on discussions with HUD and MDRC’s own analysis of 34 MTW sites, the team identified 12 HAs from a list of 14 that HUD MTW office staff recommended based on their knowledge of the MTW sites. Most of these HAs have large voucher populations. As agreed with the project GTR, MDRC excluded the four new MTW HAs that HUD announced in late 2012. These HAs serve very small numbers of voucher holders and will be considered only if we need to move beyond our initial list.
For a number of reasons, MDRC is not drawing a probability sample of HAs:
HAs must be participating in Moving to Work (MTW); MTW participation gives them the flexibility, with HUD’s approval, to change traditional rent calculations and other voucher rules, which is needed for the Rent Reform Demonstration.
HAs must volunteer to participate. If MDRC drew a random sample, there is no guarantee that the selected sites would participate. In fact, many would likely refuse given the participation activities and associated burdens.
HAs must not have already made so many changes under their current MTW authority that MDRC considers a comparison between their current rules and the new rent reforms to be an unfair test. HAs that could provide a control group representing the traditional national 30-percent-of-income policy are preferred.
The sites must be of sufficient size to generate an adequate sample size for analysis. Ideally, each participating city will contribute no fewer than 400 voucher holders to the study’s program group and 400 to the control group—and preferably many more. (See below for a further discussion of sample sizes.)
In addition, we sought agencies that together would reflect important dimensions of the diversity of tenants and local conditions found among housing authorities across the country. This is important because one goal is to determine whether rent reform policies can be effective when operated for different types of tenants and in different contexts. Thus, we hope to recruit a pool of sites that reflect some diversity in local housing markets, local labor markets, tenant race and ethnicity, and other local or tenant characteristics that, in theory, present different kinds of challenges in finding work or affordable housing.
Sites must be willing to comply with random assignment and other research demands of the demonstration, and be willing to sustain the alternative rent policy through to the end of the demonstration.
Strategy to engage housing authorities to join the study
The process of recruiting housing authorities to participate in the Rent Reform Demonstration began with joint efforts by HUD and MDRC to introduce the study through informational meetings and conference calls with housing authorities identified as potential candidates for the project. These included special informational sessions at conferences sponsored by the Public Housing Directors Association and the Council of Large Public Housing Authorities.
By the end of 2012, following the information sessions at the PHADA and CLPHA conferences and a special HUD-initiated conference call with selected housing authorities, MDRC completed a series of one-on-one exploratory discussions by telephone with 11 housing authorities about their current rent policy reforms and plans and their potential willingness to be part of the demonstration. Based on these calls, MDRC identified a “short list” of eight HAs for more in-depth planning activities.
The MDRC team subsequently conducted two day-long planning sessions with a group of eight HAs in Chicago, in February and May 2013. These meetings were used to explore a variety of alternative rent policies and to try to identify a common set of approaches that all of the candidate sites might be willing to adopt. Subsequently, MDRC conducted a series of (mostly) conference calls with the eight candidate sites to discuss how the reforms might apply in their housing authorities. As part of the design process, MDRC conducted a variety of statistical analyses, using national data from HUD and data from the candidate housing agencies, to assess the possible implications of alternative approaches for both tenants and the agencies. These analyses were undertaken to explore how certain alternative approaches might affect households’ total tenant payment (TTP) for rent and utilities, households’ net income, and housing agencies’ housing assistance payments (HAP). The analyses incorporated a number of assumptions, informed by other research on financial work incentives, about how the new policy might affect tenants’ labor market outcomes.
Expected Response Rates
This submission does not include details on the administration and collection of data from the follow-up surveys. This information will appear in a later submission. As all other data on the sample will be derived from the Baseline Information Form (BIF) and administrative records, we expect high data coverage for both the program and control groups. Some tenants may refuse to complete a BIF, but we believe that these rates will be low because of the incentive payment offered to them for completing that form.
B2. Procedures for Data Collection and Statistical Analysis
The study will involve randomly assigning the total sample1 to one of two groups:
The Alternative Rent Reform Group. These individuals will have their rent calculated according to the Rent Reform framework adopted by the HA, under which the total tenant payment (TTP) is based on 28 percent of gross income, a minimum rent is instituted, a simplified utilities policy applies, and recertification occurs every three years.
The Current Rent Policy Group (or Control Group). These individuals will be subject to the existing rent rules for the duration of the demonstration, for a minimum of four years, into the second recertification period. (The exact length of the embargo period will be determined in consultation with HUD and the participating sites.)
Eligible households will be informed about the study and opt-out procedures at their regularly scheduled recertification meeting or in a specially scheduled meeting, as determined by the HA. As discussed in Supporting Statement A, households will be asked to complete the Baseline Information Form (BIF) and will be offered a small incentive for completing the BIF.
While the exact enrollment details are being developed with sites, we expect that participants randomly assigned to each group will proceed with their usual recertification meeting (or a specially scheduled meeting) and be informed of their new shelter costs and TTP. Housing subsidy specialists will explain to the Alternative Rent Reform group that the new, longer recertification period allows them to increase their earnings without their TTP increasing, and will explain under what circumstances they will be eligible for an interim recertification and the PHA’s hardship policy. This discussion will also touch on the other features of the new rent policy (see Supporting Statement A for a brief description). The control group will still need to report changes in income as specified in current HA procedures. MDRC staff will work with each HA to determine how best to incorporate random assignment at their offices in order to minimize disruptions to their normal work routines. Options for conducting random assignment include building random assignment into the HAs’ own automated systems, a process MDRC has used successfully in several prior evaluations, or having MDRC conduct batch random assignment in advance for households approaching recertification. (Based on discussions so far, it appears that MDRC will conduct batch random assignment for DCHA and may have to do so for the remaining sites, LMHA, LEX, and SAHA, during the period the software is still under development.)
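To illustrate the batch approach, the following is a minimal sketch of how advance random assignment for a recertification cohort might be carried out; the file layout, column names, and 50/50 allocation ratio are illustrative assumptions rather than the PHAs’ actual specifications.

```python
import numpy as np
import pandas as pd

def batch_random_assignment(input_csv, output_csv, seed=20140814, program_share=0.5):
    """Randomly assign households approaching recertification to a research group.

    Assumes one row per household with a unique 'household_id' column;
    all file and column names here are hypothetical.
    """
    households = pd.read_csv(input_csv)

    # Random assignment occurs at the household level, so duplicate IDs would be an error.
    assert households["household_id"].is_unique, "Duplicate household IDs found"

    rng = np.random.default_rng(seed)  # fixed seed so the batch is reproducible and auditable
    draws = rng.random(len(households))
    households["research_group"] = np.where(
        draws < program_share, "new_rent_policy", "control"
    )

    households.to_csv(output_csv, index=False)
    return households["research_group"].value_counts()

# Example usage (file names are hypothetical):
# print(batch_random_assignment("recert_batch.csv", "recert_assignments.csv"))
```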
Once an HCV household2 has been assigned to the program group, it will be provided with information about the new rent policy. MDRC will work with the PHAs to develop materials that discuss all the elements of the rent reform model, as well as methods of outreach to households between recertifications to communicate the benefits of the longer recertification period in allowing them to keep more of their earnings. Families in the control group will be provided with the usual materials that they would receive at their recertification meeting and will be subject to the existing recertification schedule.
Statistical Impact Analysis
Impact analysis (see Supporting Statement Part A for an overview of study components) will assess the overall and independent effects of the Rent Reform Demonstration by comparing the key outcomes of the treatment group to the outcomes of the control group. The study will track both the program and the control groups for at least three years, using administrative and survey data to measure outcomes.
The impact analysis will examine the program’s effects on a wide range of outcomes. Key clusters of outcomes under consideration for this study are included below.
Housing Subsidy Receipt and Housing Outcomes: Voucher use. Changes in the rent rules could affect tenant turnover in a number of ways. First, they may increase earnings and income and, in turn, increase or hasten exits from the voucher program. Second, and in contrast, the new rent policy could reduce tenant turnover if more voucher holders come to view voucher receipt as more attractive, relative to unsubsidized housing on the private market, than they otherwise would. Using HUD 50058 data and survey information, the study would examine the effects of alternative rent strategies on the duration of voucher receipt and on reasons for exit.
Household Composition and Structure: Rent Reform is a household-level intervention. To explore effects on household composition and structure, MDRC will obtain basic information about all household members through the survey and housing records, including names, ages, employment status (if appropriate), and relationship to the head of household.
Work Behaviors: MDRC will use both Unemployment Insurance wage records and the survey to collect data on employment, earnings, job characteristics, and work search behaviors. One of the primary research questions is whether reducing the “tax” that the housing subsidy formula imposes on earnings would increase voucher holders’ work effort.
Income, assets, finances, and rent burden: If Rent Reform increases participants’ disposable income, it may help them reduce debt and accumulate assets. With survey data, MDRC will assess the effects of the program on household finances and financial behaviors (such as savings, access to credit, and debt reduction). Data on income, combined with housing authority and survey data on tenant rent and utilities payments, would be used to construct measures of rent burden.
Health, material hardship, and family well-being: As in Work Rewards, Jobs-Plus, and other housing studies, the Rent Reform Demonstration will estimate the effects of Rent Reform on residents’ overall health and specific health conditions, and their access to preventive health care. All of these indicators may be affected, indirectly, by changes in residents’ income and by potential changes in their housing and neighborhood contexts. These factors may also affect mental health outcomes, such as depression. Changes in the rent rules may also affect tenants’ rent burden and thus their likelihood of being evicted or having their utilities shut off. For example, families at the lower end of the income distribution may strain to afford a high minimum rent, or those with higher incomes may fall into arrears if their income drops, unless adequate protections are included in the rent policy. Food-related hardships will also be examined.
Neighborhood conditions and safety, and housing quality: Rent reform may affect the types of housing and neighborhoods in which voucher holders live. We would draw on our previous research for items on resident perceptions of social and physical disorder, violent crime, fear of crime, and victimization. We would also include items on perceptions of access to and adequacy of neighborhood services, including schools and amenities. In addition, we would obtain aggregate data on neighborhood conditions, such as from the Urban Institute’s National Neighborhood Indicators Project (NNIP) and the American Community Survey, to examine dimensions of neighborhood quality and context. Through surveys, we would collect information on housing quality and conditions, drawing questions from the American Housing Survey (AHS) and other studies, including Jobs-Plus, HOPE VI, and Moving to Opportunity (MTO).
Other benefits: Depending on the evaluation resources, HUD may want to consider collecting TANF, SNAP, and Medicaid data, since changes in the receipt of these public benefits may flow from any impacts that rent reform has on tenants’ earnings. If so, it would be important to capture these effects as part of the proposed cost-benefit analysis. Having these data would also make it possible to examine the pattern of income supports received by the study sample, and also compare this to the income reported to the housing authorities.
Child outcomes: Rent reform’s effects on family income and on neighborhood and housing conditions may, in turn, affect child outcomes. Through the tenant surveys, we would ask respondents about the children in the household, using items drawn from studies of child well-being, including social behaviors, school engagement, school performance, and health. In addition, using SABINS administrative data, we would link each household to its zoned elementary school to assess whether any location effects caused by the intervention result in children being more or less likely to attend higher-performing schools.
Knowledge and perceptions of rent rules: The survey would be used to examine voucher recipients’ perceptions, understanding, and awareness of the rent rules, and their attitudes toward the HA and frontline staff.
The power of the experimental design for the Rent Reform Demonstration comes from the fact that random assignment ensures that, in expectation, the treatment and control groups are alike in the distribution of observed and unobserved baseline and pre-baseline characteristics. As a result, any post-baseline differences between the two groups can be interpreted as effects of the intervention.
Therefore, the basic estimation strategy is to compare average outcomes for the program and control groups. We will use regression adjustment to increase the power of the statistical tests that are performed: the outcome, such as “employment during Year 1,” is regressed on an indicator for program group status and a range of other background characteristics.
The general form of the regression models which will be used to estimate program impacts is as follows:
Y_i = α + βP_i + δX_i + ε_i
where
Y_i is the outcome measure for sample member i;
P_i equals one for program group members and zero for control group members;
X_i is a set of background characteristics for sample member i; and
ε_i is a random error term for sample member i.
The coefficient β is interpreted as the impact of the program on the outcome. The regression coefficients δ reflect the influence of background characteristics. The functional form and estimation method will depend on the scale of measurement of the outcome for which impacts are estimated; for example, impacts on continuous outcomes will be estimated using ordinary least squares (OLS) regression. We can use a more complex set of methods depending on the nature of the dependent variable and the type of issues being addressed, such as logistic regressions for binary outcomes (e.g., employed or not), Poisson regressions for outcomes that take on only a few values (e.g., months of employment), and quantile regressions to examine the distribution of continuous outcomes.
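As an illustration of this estimation strategy, the sketch below fits the impact model by OLS for a continuous outcome and by logistic regression for a binary outcome using the statsmodels package; the simulated data and variable names are hypothetical and stand in for the administrative and baseline measures described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated illustrative data: P is the program-group indicator; X1 and X2
# stand in for baseline background characteristics.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "P": rng.integers(0, 2, n),
    "X1": rng.normal(size=n),       # e.g., standardized prior-year earnings
    "X2": rng.integers(0, 2, n),    # e.g., employed at baseline
})
df["earnings_y1"] = (7000 + 500 * df["P"] + 2000 * df["X1"]
                     + 1500 * df["X2"] + rng.normal(0, 7000, n))
df["employed_y1"] = (rng.random(n) < 0.44 + 0.05 * df["P"] + 0.10 * df["X2"]).astype(int)

# Continuous outcome: OLS regression of the outcome on P and baseline covariates.
# The coefficient on P is the regression-adjusted impact estimate (beta).
ols_fit = smf.ols("earnings_y1 ~ P + X1 + X2", data=df).fit()
print(ols_fit.params["P"], ols_fit.bse["P"])

# Binary outcome: logistic regression; impacts on the probability scale can be
# summarized with average marginal effects.
logit_fit = smf.logit("employed_y1 ~ P + X1 + X2", data=df).fit(disp=False)
print(logit_fit.get_margeff().summary())
```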
The evaluation will examine outcomes across a number of domains. When multiple outcomes are examined, the probability of finding statistically significant effects increases, even when the intervention has no effect. For example, if 10 outcomes are examined in a study of an ineffective treatment, it is likely that at least one of them will appear statistically significant at the ten percent level by chance. While the statistical community has not reached consensus on the appropriate method of correcting for this problem, we propose to address it by being parsimonious in our selection of outcome variables. In particular, we plan to identify a set of “primary” outcomes and subgroups before beginning the impact analysis. All other outcomes and subgroups will be considered “secondary” and will be used to provide context for the primary impact findings or to generate hypotheses about impacts. Schochet (2008) suggests that this strategy is flexible enough to credibly test the key hypotheses about the program, while at the same time allowing the analyst to examine a range of outcomes in a more exploratory manner in order to uncover policy-relevant information.
Site-specific versus pooled impacts
The core impact analysis of the full evaluation will estimate the model’s impacts for each site separately and for the three sites with a one-year control group recertification period combined.3 As discussed below, expected sample sizes at each HA should provide adequate statistical power for producing policy-relevant site-specific impact estimates. Such estimates will allow us to test the robustness of the rent model; that is, each site will provide an independent replication test. If the model’s impacts are positive and consistent across the HAs, we would have greater confidence that they were caused by rent reform and are not due to chance or site-specific factors. Such consistency would also indicate that rent reform can succeed in a variety of locations and with different types of tenants. Large and statistically significant variations in effects across sites may provide an opportunity to understand what local conditions and/or implementation factors influence the model’s effectiveness.
We would also pool the samples from the three PHAs that have a one-year recertification period for the control group. Pooling would increase the precision of impact estimates, which becomes especially relevant when estimating effects for subgroups of the full sample. The Jobs-Plus evaluation is one example in which impacts were estimated for all sites combined, for the full-implementation sites combined, and for each site separately. This way of looking at impacts helped produce important insights about the robustness of the Jobs-Plus model in the stronger implementation sites (which served very different types of residents in very different housing and labor markets), and also supported the interpretation that rent reform was an important contributor to the program’s impacts.
We may include DC in the pooled analysis if it is determined that the biennial recertification does not differ significantly from the current traditional one-year policy in terms of work incentives. Although the control group in DC will be on a biennial recertification schedule, if their income increases by more than $10,000 per year, they will still need to adjust their TTP in the interim. Earnings from a full-time job at minimum wage would exceed this threshold (for example, at the federal minimum wage of $7.25 per hour, full-time, year-round work yields roughly $15,000 per year); thus, the biennial recertification policy, when compared with the traditional annual policy, may create an increased financial incentive to move from non-work to part-time work at the minimum wage, and from part-time to full-time work, but not from non-work to full-time work (since this latter change would prompt an increase in TTP). Control group households will also continue to face no limit on the number of interim recertifications when their income declines. Whether tenants view the biennial policy as offering an increased work incentive is uncertain and will be a topic for the evaluation.
Subgroups
The impact analysis will also investigate whether the changes to the rent structure worked especially well for particular subgroups of families. Subgroup impacts can be calculated in several ways,4 and prior to the impact analysis, the evaluation team will finalize the method and designate which subgroups are “confirmatory” and which are “exploratory.” The confirmatory subgroups will be specified in advance, in order to avoid the potential for data mining and the problem of multiple comparisons.5 Subgroups can be chosen as confirmatory because prior theory suggests program differences along a subgroup dimension, because differences in impacts along a given dimension have been found in prior evaluations, or because a given subgroup is of great policy interest.
MDRC is currently considering several subgroups of interest. Informed by the findings from Work Rewards, MDRC plans to examine impacts by work status and SNAP receipt status at program entry. MDRC will work with HUD to finalize the subgroups, which could include: (a) employed vs. not employed at study entry (Work Rewards), (b) not employed and SNAP recipient vs. employed and non-SNAP (Work Rewards), (c) TANF recipient vs. non-TANF (Jobs-Plus), (d) poor health status vs. good health status (HOPE VI, HOST), and (e) lower- versus higher-income (theory).
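As one illustration of how a split-sample or conditional subgroup analysis might be implemented (see footnote 4), the sketch below adds an interaction between the program indicator and a baseline subgroup indicator to the impact model; it reuses the hypothetical data frame from the previous sketch, with “employed at study entry” (X2) standing in for the subgroup dimension.

```python
import statsmodels.formula.api as smf

# Continues from the illustrative data frame `df` built in the earlier sketch.
# With P * X2 in the model, the coefficient on P is the estimated impact for the
# X2 == 0 subgroup (not employed at baseline), P:X2 is the difference in impacts
# between the two subgroups, and its p-value tests whether the impacts differ.
interaction_fit = smf.ols("earnings_y1 ~ P * X2 + X1", data=df).fit()
print(interaction_fit.params[["P", "P:X2"]])
print(interaction_fit.pvalues["P:X2"])
```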
Minimum Detectable Effect Size
A sample size of 400 per research group is large enough to detect policy-relevant impacts by site and for the pooled sample, as well as for key subgroups. However, smaller effects could be detected if the sample size were larger. Currently, the sites have pledged samples of 700 to 1,000 HCV households per research group.
Minimum detectable effects (MDEs) indicate the smallest program impacts that could reliably be detected as statistically significant for a given outcome and sample size. Since these are estimates, the actual MDEs may be smaller or larger than what is shown here. The estimates shown are likely to be conservative, since they assume that baseline variables are not used in the impact model to improve precision. Pre-random assignment values of key outcomes, such as employment and earnings, are likely to be highly predictive of post-random assignment values of the same outcomes. In this case, the increased precision brought about by including these variables in the impact model can reduce the MDEs considerably.
The table below presents MDEs for the proposed site-specific and pooled sample sizes, focusing on three outcomes of interest: employment, earnings, and housing hardship. The columns represent two different sample size assumptions: lower-bound and pledged. For the site-specific analysis, the evaluation could detect effects (increases or decreases) as small as 8.35 percentage points on employment rates in a given year with the minimum required sample size (400 per study group), and could detect effects as small as 5.3 percentage points (in San Antonio, Louisville, and DC) if the larger pledged sample sizes are achieved.
MDEs for earnings are shown in the table but are harder to predict, given the difficulty of predicting the variance of earnings. MDEs for housing hardship are also presented in the table. As shown in the table, smaller effects can be detected for the pooled sample. The pooled MDEs assume two sets of sample size assumptions, including and excluding DC. An assessment of DC’s implementation and the control group’s understanding of the biennial recertification policy (other control groups will be subject to annual recertifications) will determine whether the impact analysis will include DC in the pooled results.
In sum, the proposed sample size is adequate for detecting effects on a range of outcomes that are relatively modest but still meaningful from a policy standpoint for both site-specific analysis and pooled analysis.
Sample Sizes and Minimum Detectable Effects (MDEs)

A. MDEs for Employment

| Site | Lower-Bound N | Percentage Points | % Chg | Pledged N | Percentage Points | % Chg |
|---|---|---|---|---|---|---|
| Lexington, KY | 400 | 8.35 | 18.9 | 700 | 6.3 | 14.3 |
| Louisville, KY | 400 | 8.35 | 18.9 | 1,000 | 5.3 | 12.0 |
| San Antonio, TX | 400 | 8.35 | 18.9 | 1,000 | 5.3 | 12.0 |
| Washington, DC | 400 | 8.35 | 18.9 | 1,000 | 5.3 | 12.0 |
| Pooled, with Louisville | 1,600 | 4.15 | 9.4 | 3,700 | 2.75 | 6.3 |
| Pooled, without Louisville¹ | 1,200 | 4.80 | 10.9 | 2,700 | 3.20 | 7.2 |

B. MDEs for Annual Earnings

| Site | Lower-Bound N | Dollars | % Chg | Pledged N | Dollars | % Chg |
|---|---|---|---|---|---|---|
| Lexington, KY | 400 | $1,186 | 16.9 | 700 | $895 | 12.8 |
| Louisville, KY | 400 | $1,186 | 16.9 | 1,000 | $753 | 10.8 |
| San Antonio, TX | 400 | $1,186 | 16.9 | 1,000 | $753 | 10.8 |
| Washington, DC | 400 | $1,186 | 16.9 | 1,000 | $753 | 10.8 |
| Pooled, with Louisville | 1,600 | $589 | 8.4 | 3,700 | $391 | 5.6 |
| Pooled, without Louisville¹ | 1,200 | $682 | 9.7 | 2,700 | $448 | 6.4 |

C. MDEs for Housing Hardship

| Site | Lower-Bound N | Percentage Points | % Chg | Pledged N | Percentage Points | % Chg |
|---|---|---|---|---|---|---|
| Lexington, KY | 400 | 6.68 | 33.4 | 700 | 5.04 | 25.2 |
| Louisville, KY | 400 | 6.68 | 33.4 | 1,000 | 4.24 | 21.2 |
| San Antonio, TX | 400 | 6.68 | 33.4 | 1,000 | 4.24 | 21.2 |
| Washington, DC | 400 | 6.68 | 33.4 | 1,000 | 4.24 | 21.2 |
| Pooled, with Louisville | 1,600 | 3.32 | 16.6 | 3,700 | 2.20 | 11.0 |
| Pooled, without Louisville¹ | 1,200 | 3.84 | 19.2 | 2,700 | 2.56 | 12.8 |

Sample size: N = per control or program group, assuming equal group sizes.
Assumptions: Control group levels are assumed to be 44 percent for employment, 20 percent for housing hardship, $7,000 for mean annual earnings, and $7,100 for the standard deviation of annual earnings. MDE calculations are for a 2-tailed test at 10 percent significance and 80 percent statistical power, assume that the R-squared for each impact equation is .10, and assume an 80 percent response rate to the follow-up surveys (for housing hardship).
NOTE: ¹ Whether Louisville will be included in the pooled analysis will depend on the extent to which members of the new rent policy group opt out of that group and raise the risk of biased impact estimates.
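For reference, the following is a minimal sketch of a standard MDE formula consistent with the assumptions listed above (two-tailed test at 10 percent significance, 80 percent power, R-squared of .10, equal group sizes). The variance assumption used for the binary employment outcome (p = 0.5) is an illustrative guess rather than something documented in this statement, so the results are approximations of, not exact reproductions of, the table's figures.

```python
from scipy.stats import norm

def mde(std_dev, n_per_group, r_squared=0.10, alpha=0.10, power=0.80):
    """Approximate minimum detectable effect for a two-group experiment with
    equal group sizes and regression adjustment captured by R-squared."""
    multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)   # about 2.49 for these settings
    standard_error = std_dev * ((1 - r_squared) * 2 / n_per_group) ** 0.5
    return multiplier * standard_error

# Annual earnings, using the assumed control-group standard deviation of $7,100:
print(round(mde(7100, 400)))          # about $1,184 with 400 per group (table: $1,186)

# Employment rate, treated as a binary outcome with a conservative p = 0.5 variance
# (standard deviation of 0.5):
print(round(100 * mde(0.5, 400), 2))  # about 8.34 percentage points (table: 8.35)
```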
B3. Maximizing Response Rates and Issues of Nonresponse
As noted above, this OMB submission focuses on the Baseline Information Form (BIF), which will be used to capture information about study participants at the time of random assignment. Given the type of information captured by the BIF, we do not expect participants to refuse to complete it. However, participants completing a BIF can choose not to respond to questions they do not want to answer (income, for example). That said, based on past experience, we do not expect to encounter serious item nonresponse. Therefore, this section is not applicable.
As part of this task order, we expect to conduct informal conversations with a small number of study participants to get an early read on their awareness and understanding of the new rent rules. MDRC will work with each study site to identify and invite a few tenants for discussion. The protocol for these informal discussions will be included in a future OMB submission.
B4. Pre-Testing
The information for the Rent Reform Demonstration is being collected by MDRC and its subcontractors on behalf of HUD. With HUD oversight, MDRC and its subcontractors (Urban Institute, Quadel, and Bronner), along with two academic consultants and national experts, Professors Ingrid Gould-Ellen and John Goering, are responsible for developing the study documents included in this submission. The statistical aspects of the study were developed by Dr. Steven Nunez in consultation with MDRC senior economist and impact analyst Dr. Cynthia Miller.
1 See section below for the determinants of the sample size.
2 Housing subsidy and TTP are calculated based on household income; thus, this is a household-level intervention and random assignment will occur at the household level.
3 Three of the sites – Lexington, Louisville and San Antonio – use the traditional annual recertification schedule. Washington, DC, however, currently uses a biennial recertification policy, where working-age/non-disabled households that increase their anticipated income by $10,000 per year or less do not have their TTPs recalculated until their next biennial recertification.
4 In “split-sample” subgroup analyses, the full sample is divided into two or more mutually exclusive and exhaustive groups, such as single-parent families at the point of random assignment versus two-parent families. Impacts are estimated for each group separately. A related type of subgroup analysis uses regression methods to see if the effects of the intervention vary significantly with a continuous baseline measure (or one that takes on many values), such as initial attendance levels or test scores. Finally, “conditional” subgroup analyses take this idea one step further by controlling for the effect of other baseline characteristics when estimating the relationship between a particular subgroup and program effects.
5 Restricting the analysis to a few confirmatory subgroups does not rule out the possibility of a more exploratory analysis of additional subgroups later in the evaluation. Findings from this analysis would necessarily be more speculative and given less weight in the discussion of program impacts.