THE RENT REFORM DEMONSTRATION
SUPPORTING STATEMENT – PART B
OMB CLEARANCE PACKET
June 22, 2015
Submitted to:
U.S. Department of Housing and Urban Development
B. COLLECTION OF INFORMATION USING STATISTICAL METHODS
B1. Respondent Universe, Sampling Selection, and Expected Response Rates
Four Moving to Work (MTW) public housing agencies (PHAs) are participating in the Rent Reform Demonstration, which is enrolling at least 2,000 Housing Choice Voucher (HCV) households at three of the sites and 1,400 HCV households at the fourth.1 Random assignment procedures are being used to create a New Rent Rules group and an Existing Rent Rules group in each site.
Eligible sample members are voucher holders whose vouchers are administered under the MTW demonstration. Non-MTW vouchers (i.e., Veterans Affairs Supportive Housing, Moderate Rehabilitation, and Shelter Plus Care), Enhanced Vouchers, and Project-Based Vouchers are excluded from the study. Additionally, the study is focused on work-able populations and will not include elderly households, disabled households, households that will become elderly during the course of the long-term study, or households in which no member has legal status in the U.S. Lastly, households are also excluded if, at the time of random assignment, they are receiving a child care deduction, participating in the Family Self-Sufficiency program, participating in homeownership programs, or have a housing assistance payment (HAP) of zero.
Although the new rent policy is designed to be applicable to elderly and disabled tenants, and although the housing authorities may choose to apply the policy to them during the period of the demonstration, those tenants will not be included in the research sample. This is because we do not hypothesize that the new rent policies will have a substantial effect on work outcomes for this group. Furthermore, given the limited sample sizes indicated above, including elderly/disabled households would reduce the sample sizes and statistical power available for estimating effects on the working-age/nondisabled population, for whom the new rent model is likely to have its largest employment effects.
Random assignment and enrollment are currently underway in all four sites, with the expectation that the full sample will be recertified – i.e., have new effective dates – by January 2016. (Across the study sites, the recertification process, which renews the household’s housing subsidy, can take from 90 to 180 days following the initial recertification and study enrollment meeting.)
Number of MTW housing authorities
The demonstration includes four MTW PHAs: the District of Columbia Housing Authority, the Lexington Housing Authority, the Louisville Metropolitan Housing Authority, and the San Antonio Housing Authority. These PHAs have signed MOUs with MDRC, which cover activities and roles for the term of the evaluation and identify data that the sites will provide to MDRC for the study.
Criteria for PHA Selection
In consultation with HUD, MDRC set out a number of guidelines for assembling a group of research sites. These guidelines gave higher priority to MTW agencies that had larger voucher programs and, thus, larger samples for a randomized trial, and that had not progressed too far in implementing an alternative rent policy of their own. This would allow them to provide a control group that would represent the traditional national 30-percent-of-income rent policy. In addition, we sought agencies that together would reflect important dimensions of the diversity of voucher holders and local conditions found among housing agencies across the country. This is important because one goal for evaluating the alternative rent policy is to determine whether it can be effective when operated for different types of tenants and in different contexts. Thus, we sought to recruit a pool of sites that would reflect some diversity in local housing markets, local labor markets, tenant race and ethnicity profiles, and other local or household characteristics that could present different kinds of challenges in finding work and, hence, tenants’ responses to the work incentives to be built into the alternative rent policy. It was also critical that a housing agency be willing to comply with random assignment and the other research demands of a rigorous demonstration, and to sustain both the alternative rent policy and its existing rent policy through to the end of the demonstration.
Expected Response Rates
As all data on the sample will be derived from administrative records and in-person interviews, we expect high data coverage for both the program and control groups.
B2. Procedures for Data Collection and Statistical Analysis
The study involves randomly assigning the total sample to one of two groups:
The New Rent Rules Group. These individuals have their rent calculated according to the Rent Reform framework adopted by the PHA, which includes a total tenant payment (TTP) set at 28 percent of gross income, a minimum rent, a simplified utilities policy, and a 3-year recertification period.
The Existing Rent Rules Group (or Control Group). These individuals are subject to the existing rent rules for the duration of the demonstration, for up to six years, into the second recertification period.
Eligible households are informed about the study and the study withdrawal procedures at their regularly scheduled recertification meeting. As discussed in Supporting Statement A, households are asked to complete the Baseline Information Form (BIF) and will be offered a small incentive for completing the BIF.
Participants randomly assigned to each group proceed with their usual recertification meeting (or with a specially scheduled meeting). Housing subsidy specialists explain to the New Rent Rules group that the new, longer recertification period allows them to increase their earnings without their TTP increasing, and explain under what circumstances they will be eligible for an interim recertification and for the PHA’s hardship policy. This discussion also touches on the other features of the new rent policy. The control group will still need to report changes in income as specified in current PHA procedures.
In three sites, random assignment is built into the PHAs’ automated systems and therefore minimizes disruptions to their normal work routines. In the fourth site (DCHA), MDRC is conducting random assignment in two batches in advance for those households approaching recertification. Once an HCV household2 is assigned to a research group, it is provided with information about its respective rent policy. MDRC worked with the PHAs to develop materials that discuss all the elements of the rent reform model, as well as methods to communicate to households the benefits of the longer recertification period in allowing them to keep more of their earnings. Families in the control group are provided with the usual materials that they would receive at their recertification meeting, and are subject to the existing recertification schedule.
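The sites' actual automated systems are not described in detail here. As a minimal illustrative sketch (the function name, household IDs, seed, and 50/50 split are assumptions for the example, not the sites' procedures), batch random assignment at the household level might look like:

```python
import random

def randomly_assign(household_ids, seed=20150622):
    """Randomly assign each household to the New Rent Rules group or the
    Existing Rent Rules (control) group, splitting the batch evenly.

    A fixed seed makes a batch assignment reproducible and auditable.
    Returns a dict mapping household ID -> group label.
    """
    rng = random.Random(seed)
    ids = list(household_ids)
    rng.shuffle(ids)               # random order, then split down the middle
    half = len(ids) // 2
    return {hid: ("new_rules" if i < half else "existing_rules")
            for i, hid in enumerate(ids)}

# Hypothetical batch of four households approaching recertification
assignments = randomly_assign(["H001", "H002", "H003", "H004"])
```

Because assignment occurs at the household level (see footnote 2), the unit shuffled here is the household, not the individual.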
Statistical Impact Analysis
The core impact analysis, which includes the full study sample, will assess the overall and independent effects of the Rent Reform demonstration by comparing the key outcomes of the treatment group to the outcomes of the control group. The study will track both the program and the control groups for the period of study, using administrative data to measure outcomes.
The power of the experimental design for the Rent Reform demonstration comes from the fact that random assignment ensures that the treatment and control groups are alike, in expectation, across the full distribution of observed and unobserved baseline and pre-baseline characteristics. As a result, any systematic post-baseline differences between the two groups can be interpreted as effects of the intervention.
Therefore, the basic estimation strategy is to compare average outcomes for the program and control groups. We will use regression adjustment to increase the power of statistical tests that are performed, in which the outcome, such as “employment during Year 1” is regressed on an indicator for program group status and a range of other background characteristics.
The general form of the regression models which will be used to estimate program impacts is as follows:
Yi = α + βPi + δXi + εi
where
Yi is the outcome measure for sample member i;
Pi equals one for program group members and zero for control group members;
Xi is a set of background characteristics for sample member i; and
εi is a random error term for sample member i.
The coefficient β is interpreted as the impact of the program on the outcome. The regression coefficients, δ, reflect the influence of background characteristics. The functional form and estimation method will depend on the scale of measurement of the outcome for which impacts are estimated; for example, impacts on continuous outcomes will be estimated using ordinary least squares (OLS) regression. We may use a more complex set of methods depending on the nature of the dependent variable and the type of issues being addressed, such as: logistic regressions for binary outcomes (e.g., employed or not); Poisson regressions for outcomes that take on only a few values (e.g., months of employment); and quantile regressions to examine the distribution of continuous outcomes.
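As an illustrative sketch of the OLS case (the simulated data, coefficient values, and variable names below are invented for the example and are not drawn from the study), regressing an outcome on the program indicator P and a background characteristic X recovers β as the regression-adjusted impact estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 800 sample members, half assigned to the program group.
n = 800
P = np.zeros(n)
P[: n // 2] = 1                       # program-group indicator (1 = New Rent Rules)
X = rng.normal(size=n)                # one standardized background characteristic
true_impact = 0.05                    # hypothetical true program effect

# Illustrative continuous outcome built from the model Y = a + b*P + d*X + e
Y = 0.44 + true_impact * P + 0.10 * X + rng.normal(scale=0.3, size=n)

# Design matrix: intercept, program indicator, covariate
D = np.column_stack([np.ones(n), P, X])
coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
alpha, beta, delta = coef             # beta estimates the program impact
```

Including the predictive covariate X in the design matrix is what shrinks the residual variance and tightens the test on β, which is the rationale for regression adjustment given in the text.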
The evaluation will examine outcomes across a number of domains. When multiple outcomes are examined, the probability of finding statistically significant effects increases, even when the intervention has no effect. For example, if 10 outcomes are examined in a study of an ineffective treatment, it is likely that one of them will be statistically significant at the ten percent level by chance. While the statistical community has not reached consensus on the appropriate method of correcting for this problem, we propose to address it by being parsimonious in our selection of outcome variables. In particular, we plan to identify a set of “primary” outcomes and subgroups before beginning the impact analysis. All other outcomes and subgroups will be considered “secondary” and will be used to provide context for the primary impact findings or to generate hypotheses about impacts. Schochet (2008) suggests that this strategy is flexible enough to credibly test the key hypotheses about the program, while at the same time allowing the analyst to examine a range of outcomes in a more exploratory manner in order to uncover policy-relevant information.
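The ten-outcome example above can be made precise under the simplifying assumption that the ten tests are independent:

```python
# If 10 independent outcomes are tested at the 10 percent level and the
# intervention truly has no effect, the chance that at least one test
# comes out "significant" purely by chance is:
p_any_false_positive = 1 - 0.90 ** 10   # roughly 0.65
```

That is, a spurious finding is more likely than not, which motivates the pre-specified primary/secondary outcome strategy described above.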
Site-specific versus pooled impacts
The core impact analysis of the full evaluation will estimate the model’s impacts for each site separately and for all sites combined. As discussed below, expected sample sizes at each HA should provide adequate statistical power for producing policy-relevant site-specific impact estimates. Such estimates will allow us to test the robustness of the rent model – that is, each site will provide an independent replication test. If the model’s impacts are positive and consistent across the HAs, we would have greater confidence that they were caused by rent reform and are not due to chance or site-specific factors. Such consistency would also indicate that rent reform can succeed in a variety of locations and with different types of tenants. Large and statistically significant variations in effects across sites may provide an opportunity to understand what local conditions and/or implementation factors influence the model’s effectiveness.
The impact analysis will also pool the housing agency samples to produce impact estimates for all sites combined. Pooling would increase the precision of impact estimates, which becomes especially relevant when estimating effects for subgroups of the full sample. We may include DC in the pooled analysis if it is determined that its biennial recertification policy does not differ significantly from the traditional one-year policy in terms of work incentives. Although the control group in DC will be on a biennial recertification schedule, if their income increases by more than $10,000 per year, they will still need to adjust their TTP in the interim. Earnings from a full-time job at minimum wage would exceed this threshold (at the federal minimum wage of $7.25 per hour, full-time, year-round work yields about $15,080 per year); thus, the biennial recertification policy, when compared with the traditional annual policy, may create an increased financial incentive to move from non-work to part-time work at the minimum wage, and from part-time to full-time work, but not from non-work to full-time work (since this latter change would prompt an increase in TTP). Control group households will also continue to face no limit on the number of interim recertifications when their income declines. However, a final decision will depend on how many households appear to be affected by that policy’s $10,000 threshold and on the control group’s understanding of the biennial policy.
As discussed earlier, Program group members in Louisville are being given the option to opt out of the rent reform policy and have their rent calculated according to whatever rent rules would normally apply to them if they were not in the study at all. Whether Louisville will be included in the pooled analysis will depend on the extent to which members of the Program group opt out of the new rent rules and raise the risk of biased impact estimates.
Subgroups
The impact analysis will also investigate whether the changes to the rent structure worked especially well for particular subgroups of families. Subgroup impacts can be calculated in several ways,3 and prior to the impact analysis, the evaluation team will finalize the method and prioritize the subgroups that are “confirmatory” and the remainder that are “exploratory.” The confirmatory subgroups will be specified in advance, in order to avoid the potential for data mining and the problem of multiple comparisons.4 Subgroups can be chosen as confirmatory because prior theory suggests program differences by a subgroup dimension, because differences in impacts by a given dimension have been found in prior evaluations, or because a given subgroup is of great policy interest.
At this stage, priority subgroups are likely to include subgroups defined according to work status at the time of random assignment (e.g., not working, working part-time, working full-time); family size and composition, especially in terms of whether or not the family is headed by a single parent, has more than one working-age adult, and the number and ages of non-adult children; adults’ education levels; and household income levels. MDRC will focus on a limited number of confirmatory priority subgroups for analysis. MDRC will also seek to understand whether a family’s participation in a non-HUD, means-tested program prevented or deterred the family from pursuing increased wages.
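As a sketch of the "split-sample" approach described in footnote 3 (the function and the toy data below are hypothetical, for illustration only), subgroup impacts are simply treatment-control differences in mean outcomes computed separately within each mutually exclusive baseline subgroup:

```python
import numpy as np

def subgroup_impacts(outcome, program, subgroup):
    """Split-sample subgroup analysis: within each baseline subgroup,
    estimate the program impact as the treatment-control difference
    in mean outcome. Returns a dict of subgroup -> impact estimate."""
    outcome, program, subgroup = map(np.asarray, (outcome, program, subgroup))
    impacts = {}
    for g in np.unique(subgroup):
        mask = subgroup == g
        treated = outcome[mask & (program == 1)]
        control = outcome[mask & (program == 0)]
        impacts[g] = treated.mean() - control.mean()
    return impacts

# Toy example: employment outcomes (1 = employed) for eight households,
# split by a single-parent vs. two-adult baseline characteristic.
impacts = subgroup_impacts(
    outcome=[1, 1, 0, 1, 1, 0, 0, 0],
    program=[1, 1, 0, 0, 1, 1, 0, 0],
    subgroup=["single_parent"] * 4 + ["two_adult"] * 4,
)
```

In practice the subgroup estimates would be regression-adjusted like the full-sample impacts; the unadjusted difference in means is shown here only to make the split-sample logic concrete.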
Minimum Detectable Effect Size
For the core impact study, a sample size of 400 per research group is large enough to detect policy relevant impacts by site and for the pooled sample as well as for key subgroups. However, smaller effects could be detected if the sample size were larger. Currently, the sites have pledged samples of 700-1,000 households per research group, but it is very likely that one of the PHAs, the Lexington Housing Authority (LHA), will generate smaller research groups (slightly over 500) because of the size of the site’s eligible population.5
MDEs indicate the size of statistically significant program impacts that are likely to be observed or detected for a set of outcomes and a given sample size. The table below shows estimated MDEs. Since these are estimates, the actual MDEs may be smaller or larger than what is shown here. The estimates shown are likely to be conservative, since they assume that baseline variables are not used in the impact model to improve precision. Pre-random assignment values of key outcomes, such as employment and earnings, are likely to be highly predictive of post-random assignment values of the same outcome. In this case, the increased precision brought about by including these variables in the impact model can reduce the MDEs considerably.
The table below presents MDEs for the proposed site-specific and pooled sample sizes, focusing on two outcomes of interest: employment and earnings. The columns represent two different sample size assumptions: a lower bound, and the most recent count of eligible households expected to be randomly assigned to the study. For site-specific analysis, the evaluation could detect effects (increases or decreases) as small as 8.35 percentage points on employment rates in a given year with the minimum required sample size (400 per study group), and could detect effects as small as 5.3 percentage points (in San Antonio, Louisville, and DC) if the larger pledged sample sizes are achieved.
MDEs for earnings are shown in the table but are harder to predict, given the difficulty of predicting the variance of earnings. As shown in the table, smaller effects can be detected for the pooled sample. The pooled MDEs are shown under two sets of sample size assumptions, including and excluding Louisville. An assessment of Louisville’s opt-out rate will determine whether the impact analysis will include Louisville in the pooled results.
In sum, the proposed sample size is adequate for detecting effects on a range of outcomes that are relatively modest but still meaningful from a policy standpoint for both site-specific analysis and pooled analysis.
Sample Sizes and Minimum Detectable Effects (MDEs)

A. MDEs for Employment

| Site | Lower-Bound N | MDE (Percentage Points) | % Chg | Pledged N | MDE (Percentage Points) | % Chg |
|---|---|---|---|---|---|---|
| Lexington, KY | 400 | 8.35 | 18.9 | 700 | 6.3 | 14.3 |
| Louisville, KY | 400 | 8.35 | 18.9 | 1,000 | 5.3 | 12.0 |
| San Antonio, TX | 400 | 8.35 | 18.9 | 1,000 | 5.3 | 12.0 |
| Washington, DC | 400 | 8.35 | 18.9 | 1,000 | 5.3 | 12.0 |
| Pooled, with Louisville | 1,600 | 4.15 | 9.4 | 3,700 | 2.75 | 6.3 |
| Pooled, without Louisville1 | 1,200 | 4.80 | 10.9 | 2,700 | 3.20 | 7.2 |

B. MDEs for Annual Earnings

| Site | Lower-Bound N | MDE (Dollars) | % Chg | Pledged N | MDE (Dollars) | % Chg |
|---|---|---|---|---|---|---|
| Lexington, KY | 400 | $1,186 | 16.9 | 700 | $895 | 12.8 |
| Louisville, KY | 400 | $1,186 | 16.9 | 1,000 | $753 | 10.8 |
| San Antonio, TX | 400 | $1,186 | 16.9 | 1,000 | $753 | 10.8 |
| Washington, DC | 400 | $1,186 | 16.9 | 1,000 | $753 | 10.8 |
| Pooled, with Louisville | 1,600 | $589 | 8.4 | 3,700 | $391 | 5.6 |
| Pooled, without Louisville1 | 1,200 | $682 | 9.7 | 2,700 | $448 | 6.4 |

Sample size: N = sample per control or program group, assuming groups of equal size.
Assumptions: Control group levels are assumed to be 44 percent for employment, $7,000 for mean annual earnings, and $7,100 for the standard deviation of annual earnings. MDE calculations are for a 2-tailed test at 10% significance and 80% statistical power, and assume that the R-squared for each impact equation is .10.
NOTE: 1 Whether Louisville will be included in the pooled analysis will depend on the extent to which members of the new rent policy group opt out of that group and raise the risk of biased impact estimates.
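The tabled MDEs can be reproduced approximately from the stated assumptions with the standard formula for a two-group comparison. (This reconstruction is ours; the study team's exact calculation may differ slightly, which is why the results below are close to but not identical with the tabled values.)

```python
from statistics import NormalDist

def mde(n_per_group, sd, alpha=0.10, power=0.80, r_squared=0.10):
    """Minimum detectable effect for a two-group comparison with equal
    group sizes:

        MDE = (z_{alpha/2} + z_{power}) * sd * sqrt(1 - R^2) * sqrt(2 / n)

    where z are standard-normal quantiles for a two-tailed test."""
    z = NormalDist().inv_cdf
    m = z(1 - alpha / 2) + z(power)       # ~2.49 at 10% two-tailed, 80% power
    return m * sd * (1 - r_squared) ** 0.5 * (2 / n_per_group) ** 0.5

# Employment: control mean 0.44, so sd = sqrt(0.44 * 0.56) ~ 0.496
sd_emp = (0.44 * 0.56) ** 0.5
mde_emp_400 = 100 * mde(400, sd_emp)      # ~8.3 pct. pts. (table shows 8.35)
mde_emp_1000 = 100 * mde(1000, sd_emp)    # ~5.2 pct. pts. (table shows 5.3)

# Annual earnings: sd = $7,100 per the table's assumptions
mde_earn_400 = mde(400, 7100)             # ~$1,184 (table shows $1,186)
```

The R-squared term captures the precision gained from baseline covariates in the impact regression; setting it to zero reproduces the unadjusted (more conservative) MDEs discussed in the text.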
B3. Maximizing Response Rates and Issues of Nonresponse
Administrative data sources, key to the impact study and included in this data collection request, do not rely on participant response.
For the participant interviews proposed under this submission, MDRC will work with each housing agency to select a diverse mix of tenants in the New Rent Rules group to serve as key informants. Using a purposive sampling approach, selection will focus on identifying participants who have experienced various aspects of the policy (for example, grace periods and hardship remedies) and those who reflect different work statuses (employed/unemployed) at the time of study enrollment. Given the small sample size, this approach will allow the researchers to learn about and understand experiences from a diversity of perspectives and circumstances.
During the interview, participants will be encouraged to share their experiences; informed that there are no program consequences (i.e., loss of benefits) for not answering any particular question; and told that their name will not be associated with any information that they provide. Based on MDRC’s experience conducting qualitative research, individuals who participate in these types of interviews are interested in sharing their experiences, making non-response less of an issue.
B4. Pre-Testing
The information for the Rent Reform demonstration is being collected by MDRC and its subcontractors on behalf of HUD. With HUD oversight, MDRC and its subcontractors (Urban Institute, Branch Associates, Quadel, and Bronner) are responsible for developing the study documents included in this submission. The statistical aspects of the study were developed by Dr. Steven Nunez in consultation with Dr. Cynthia Miller, MDRC senior economist and impact analyst, and two academic consultants, Professors Ingrid Gould-Ellen and John Goering, both national experts.
1 This site, the Lexington Housing Authority (LHA), is enrolling all eligible households in the study. The final sample size will be closer to 1,100, lower than the potential sample size projected by the site, but adequate to support the site-specific impact analysis.
2 Housing subsidy and TTP are calculated based on household income; thus, this is a household-level intervention, and random assignment occurs at the household level.
3 In “split-sample” subgroup analyses, the full sample is divided into two or more mutually exclusive and exhaustive groups, such as single-parent families at the point of random assignment versus two-parent families. Impacts are estimated for each group separately. A related type of subgroup analysis uses regression methods to see if the effects of the intervention vary significantly with a continuous baseline measure (or one that takes on many values), such as initial attendance levels or test scores. Finally, “conditional” subgroup analyses take this idea one step further by controlling for the effect of other baseline characteristics when estimating the relationship between a particular subgroup and program effects.
4 Restricting the analysis to a few confirmatory subgroups does not rule out the possibility of a more exploratory analysis of additional subgroups later in the evaluation. Findings from this analysis would necessarily be more speculative and given less weight in the discussion of program impacts.
5 LHA’s expected sample size is lower than the projected sample size (1,400), but expected to be adequate to support the site-specific impact analysis.