OMB: 2528-0306




HUD Rent Reform Demonstration







Data Collection and Analysis Plan

(Demonstration Task 3)





FIRST DRAFT


June 30, 2014













Overview

This document presents a draft Data Collection and Analysis Plan (DCAP) for the Housing and Urban Development Rent Reform demonstration. It builds on the research design paper submitted by MDRC and describes how the demonstration will be implemented, especially the approach to site recruitment, technical assistance, data collection, and data quality control.1 As needed, this document draws on content in the research design paper.

Briefly, as background for the DCAP, the Rent Reform demonstration is structured around a two-group random assignment study. Using this design, at least 7,400 households will be recruited and randomly allocated to the program group or control group, each of which will include at least 3,700 households.


Four housing authorities have agreed to participate in this demonstration project:

(1) Lexington Housing Authority, Kentucky;

(2) Louisville Metro Housing Authority, Kentucky;

(3) San Antonio Housing Authority, Texas; and

(4) District of Columbia Housing Authority, DC.


The DCAP focuses on research activities for the first task order, which has been awarded to MDRC, but it also provides a brief overview of some of the research activities that may be covered by subsequent task orders (for example, a survey). The document is structured in three parts. Part I begins with a description of the site selection process and activities, and then describes sample enrollment, random assignment, and the impact analysis plan. Part II turns to technical assistance and describes the areas in which the MDRC team will provide it: design, operations and implementation, and data collection. Part III focuses on data collection activities, with a particular focus on the quantitative data priorities for Task Order 1 (TO1), and offers a detailed description of MDRC’s data quality and data processing steps. The DCAP closes with a brief review of the formal deliverable slated for this task order.


Part I: Site recruitment, sample, data sources, and analysis

Site recruitment

The process of recruiting housing agencies for the demonstration began with joint efforts by HUD and the MDRC team to introduce the study through informational meetings and conference calls with MTW agencies we had identified as potential candidates for the project. These included special informational sessions at conferences sponsored in 2013 by the Public Housing Authorities Directors Association (PHADA) and the Council of Large Public Housing Authorities (CLPHA).

Criteria for Housing Authority Selection

MDRC’s original proposal set out a number of guidelines for assembling a group of research sites. These guidelines gave higher priority to MTW agencies that had larger voucher programs and, thus, larger samples for a randomized trial, and that had not progressed too far in implementing an alternative rent policy of their own. This would allow them to provide a control group that would represent the traditional national 30-percent-of-income rent policy. In addition, we sought agencies that together would reflect important dimensions of the diversity of voucher holders and local conditions found among housing agencies across the country. This is important because one goal for evaluating the alternative rent policy is to determine whether it can be effective when operated for different types of tenants and in different contexts. Thus, we sought to recruit a pool of sites that would reflect some diversity in local housing markets, local labor markets, tenant race and ethnicity profiles, and other local or household characteristics that could present different kinds of challenges in finding work and, hence, could shape tenants’ responses to the work incentives to be built into the alternative rent policy. It was also critical that a housing agency be willing to comply with random assignment and the other research demands of a rigorous demonstration, and to sustain both the alternative rent policy and its existing rent policy through to the end of the demonstration.

The Process of Consultation

The key steps in the site selection process are described here.


Step 1: Preliminary data collection on MTW programs


Building on discussions with HUD and MDRC’s own analysis of the 34 agencies with MTW status at the time the RFP was issued, the team initially identified 12 housing agencies selected from a list of 14 that HUD MTW office staff had recommended based on their knowledge of the various sites. Most of those agencies have large HCV populations. At the start of the site-selection process, as agreed with the project’s Government Technical Representative (GTR), MDRC excluded the four new MTW housing agencies that HUD announced in late 2012 because these agencies serve very small numbers of voucher holders.

Step 2: Phone reconnaissance with PHAs


By the end of 2012, following the information sessions at PHADA and CLPHA conferences and a special HUD-initiated conference call with selected housing agencies, the MDRC team and HUD completed a series of one-on-one exploratory discussions by telephone with 11 housing agencies that were considered potentially appropriate for the study. These dealt with their current rent policy reforms and plans and their potential willingness to be part of the demonstration. Based on these calls, we identified a “short list” of eight agencies with which we undertook more in-depth planning activities. These agencies served: Baltimore City, Cambridge, Chicago, Louisville, Massachusetts, San Antonio, Santa Clara, and the District of Columbia.

Step 3: Initial planning sessions


The MDRC team subsequently conducted two separate day-long planning sessions in Chicago with representatives of this group of eight agencies—in February and May 2013. The HUD GTR participated in person in both sessions, while other HUD Headquarters staff joined by phone. These meetings were used to explore a variety of alternative rent policies and to try to identify a common set of approaches all of the candidate sites might be willing to adopt.

By the May 2013 Chicago meeting, the Santa Clara housing agency had withdrawn itself from consideration for the demonstration. Because of funding reductions the agency confronted in the face of the federal budget sequestration process, it chose to adopt a different type of rent policy than the one that was gaining support from the other candidate sites. Santa Clara’s new policy would increase households’ share of rent and utilities (to 35 percent of gross income) in an attempt to immediately reduce the agency’s HAP subsidy per household, which it viewed as essential to avoiding a reduction in the number of vouchers it could offer. Moreover, the agency determined that it could not meet its budget reduction goals if it had to maintain the traditional rent policy for a control group.2

Step 4: Housing agency analysis


Over the course of the year, planning efforts included extensive analyses of housing agency and other data of the remaining candidate sites. Several of them subsequently withdrew themselves from consideration. Baltimore (HABC) was contending with vacancies in key leadership positions for the HCV program and its officials believed they did not have the capacity to take on the requirements of the demonstration. The Chicago Housing Authority had advanced its plans to introduce a variety of MTW reforms and believed that adding the new rent policies to the mix would interfere with a smooth implementation of these other reforms. The Massachusetts DHCD eventually declined participation because it was devoting attention to a transformation of its utilities policy—a transition that would demand large amounts of time from the same agency staff who would also have to be responsible for rent reform. And, finally, the Cambridge Housing Authority withdrew after it determined it did not have sufficient staff capacity to take on a rent reform project in the face of the major capital planning and resident relocation challenges it would need to address as a new Rental Assistance Demonstration (RAD) site.

In the face of these withdrawals, MDRC and HUD initiated conversations with housing agencies in Columbus, GA; Lexington, KY; Philadelphia, PA; and Pittsburgh, PA. Preliminary data analyses were conducted for Columbus and Philadelphia, but those agencies did not join the demonstration. An agreement was reached with Lexington to join the planning effort and the demonstration, along with Louisville, San Antonio, and Washington, DC.



Sample and study enrollment

Sample Selection and Sample Size


All eligible households with upcoming recertification dates during the study’s enrollment period will be selected for the sample, up until the target sample size is reached. The original vision for this study, outlined in MDRC’s proposal, relied on a sample composed of current and new voucher holders. However, since few new vouchers are being issued by the participating housing authorities, the study will include new voucher holders who are coming in for their initial income certification as well as the larger number of existing voucher holders coming in for recertification. Sample selection will occur monthly with random assignment.


The sample eligibility criteria are:


  • The head of household must have legal working status in the U.S. 

  • The household is not disabled. 

  • The household is not elderly and the head of household is 56 or younger.3 

  • The household’s voucher is not:

    • A non-MTW voucher (i.e., Veterans Affairs Supportive Housing, Moderate Rehabilitation, and Shelter Plus Care)

    • An Enhanced Voucher

    • A Homeownership Voucher

    • A Project-Based Voucher

  • The household is not participating in the Family Self-Sufficiency Program.


Three sites will each aim to provide 2,000 households, and one site will aim to provide 1,400, for a total of 7,400 households.


Study enrollment


Random assignment procedures will be used to allocate eligible households to either the alternative or the traditional rent policy. Enrollment in the demonstration is mandatory (i.e., households will not be allowed to choose which of these policies will apply to them), but households have the option to opt out of the research study.


The random assignment process will involve the following steps (see Figure 1):


  1. Identify the pool of eligible voucher households who are scheduled for recertification during the study’s enrollment period.

  2. Conduct batch random assignment, allocating tenants to the alternative or traditional rent policy in advance of their recertification interview.

  3. Distribute information about the demonstration, the evaluation, and random assignment status in recertification packets given to households prior to the meeting.

  4. Verify port-out status (verification is likely to happen when tenants come in for recertification, but to the extent that port-outs can be identified in advance of random assignment, such tenants will be excluded from the eligible target pool for the demonstration).

  5. Use MDRC’s web-based system to collect baseline information.

  6. Offer gift cards and the evaluation information sheet (which will include opt-out information) to all sample members in the intervention and control groups.4

  7. Conduct the recertification interview and review rent policy.

  8. Complete income verification and confirm the new rent amount (this final confirmation will be mailed to families at least 30 days in advance of the recertification anniversary).


At the time of this writing, Emphasys, the software vendor for three of the PHAs, will conduct batch random assignment for those three sites, and MDRC will conduct batch random assignment for the fourth. Batch random assignment will be conducted monthly for all eligible households starting the recertification process that month. The following sections describe the process when MDRC conducts batch random assignment and when a housing authority conducts batch random assignment using its own software.


Informing participants about study participation


MDRC applies federal regulations governing the protection of human subjects to all its research projects. The regulations and these principles are enforced by MDRC’s Institutional Review Board (IRB). Before the commencement of any new research project, MDRC's Human Subjects Administrator reviews the proposed research and assesses the extent of IRB review required.


Early design discussions weighed the tradeoffs of using a voluntary or mandatory enrollment process. It was determined that a voluntary process would create a substantial risk that households volunteering for the alternative policy would not adequately represent the larger population of eligible voucher holders to whom this policy is intended to apply. In particular, certain types of households might be reluctant to sign up for the new policy: larger single-parent households might fear the loss of the dependent allowances and child care deductions, and non-working households might worry about the minimum rent requirements. For these reasons, the MDRC team proposed a mandatory enrollment process. This will help ensure that the evaluation includes a representative sample of working-age, non-disabled voucher holders, and that the findings from the evaluation are broadly generalizable to the qualifying populations of the participating housing agencies (i.e., that the evaluation has strong external validity). This will strengthen the assessment of the alternative rent policy as a possible national model.


MDRC’s Institutional Review Board (IRB) and HUD have determined that obtaining tenants’ informed consent to be in the evaluation is not required under certain exceptions to the federal Privacy Act, as long as safeguards are in place to protect research subjects’ privacy. While tenants will not be asked for their informed consent to be in the research, procedures will be established for them to opt out of the research if they wish to do so – although they will not be permitted to opt out of their assignment to one or the other rent policy.


The demonstration has mandatory enrollment; all eligible voucher-holders will be identified and randomly assigned prior to their recertification. PHA staff will inform eligible voucher holders about the study and their study enrollment group, explain to them the risks and benefits of participation in the study, provide them with a study information sheet, which will include MDRC contact information if they decide to opt out, and complete the baseline information form (BIF), described later in this document.



MDRC batch random assignment


The process for MDRC conducting monthly batch random assignment is as follows (an illustrative sketch of the assignment step appears after the list):


  1. The PHA will filter out ineligible households.

  2. The PHA will send MDRC a file with all eligible households that are starting the recertification process that month.

  3. MDRC will review the file for any data quality issues and confirm that the eligibility criteria were applied correctly.

  4. The MDRC random assignment unit will generate random numbers for all households in the file and assign each household to either the alternative policy or the current policy based on the value of the random number.

  5. MDRC will send the file back to the PHA with households’ random assignment statuses included in the file.

  6. The PHA will merge the random assignment status information back into their system and send out the recertification packets corresponding to each household’s random assignment status.
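
To make the assignment step concrete, the following is a minimal sketch in Python of the kind of logic described in step 4. The field names, the fixed seed, and the 50/50 assignment ratio are illustrative assumptions here, not the random assignment unit’s actual tooling or final specification.

```python
import random

def batch_random_assignment(households, seed=None):
    """Assign each eligible household to the alternative ('program') or
    current ('control') rent policy based on a generated random number.

    `households` is a list of dicts from the PHA's monthly file; the field
    names used here (e.g., 'household_id') are hypothetical.
    """
    rng = random.Random(seed)  # a fixed seed lets the draw be reproduced for auditing
    for hh in households:
        hh['random_number'] = rng.random()
        # 50/50 split based on the value of the random number
        hh['ra_status'] = 'program' if hh['random_number'] < 0.5 else 'control'
    return households

# Example: a monthly file of two eligible households
monthly_file = [{'household_id': 'A101'}, {'household_id': 'A102'}]
for hh in batch_random_assignment(monthly_file, seed=2014):
    print(hh['household_id'], hh['ra_status'])
```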


PHA batch random assignment


For PHAs conducting batch random assignment themselves using their own software, the software will automate:

  1. Identifying households starting the recertification process in that month;

  2. Filtering out ineligible households;

  3. Generating random numbers for all eligible households in the file and assigning each household to either the alternative policy or the current policy based on the value of the random number;

  4. Adding the random assignment status information to their MIS while preserving any historical data already in the system on that household; and

  5. Generating recertification letters corresponding to each household’s random assignment status.


Prior to conducting random assignment, the software company will set up a test environment for MDRC so that MDRC may confirm that random assignment is being conducted correctly.

Periodic Sample Buildup Tables and Charts


The RA Manager will routinely create BIF Data Collection Reports. At a minimum, each report will include a table showing the number of sample members randomly assigned to the program and control groups (if the control group is included in BIF data collection) by site, cumulatively and month by month. Within each group the reports will show, for example, the number that:


  • Completed a BIF

  • Declined to complete a BIF

  • Ported out or exited from the Housing Choice Voucher program before the household’s first post-RA reexamination.

  • None of the above (missing).


New monthly cohorts of study participants will be added to the tables following the month in which most cohort members are scheduled for their next reexamination.


Periodically, as needed, the Rent Reform Data Manager will forward to site administrators electronic lists of Rent Reform study participants who did not complete a BIF and are missing information as to the reason why not. The MDRC team will also track the number of participants dropping out of the study.
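
As an illustration, a sample buildup table of the kind described above could be generated from the BIF tracking data along the following lines. This is a sketch using pandas; the column names and outcome categories are hypothetical stand-ins for the actual tracking fields.

```python
import pandas as pd

# Hypothetical tracking extract: one row per randomly assigned household
tracking = pd.DataFrame({
    'site':        ['Louisville', 'Louisville', 'San Antonio', 'San Antonio'],
    'ra_month':    ['2014-09', '2014-10', '2014-09', '2014-09'],
    'ra_status':   ['program', 'control', 'program', 'program'],
    'bif_outcome': ['completed', 'declined', 'completed', 'ported_out'],
})

# Month-by-month counts of BIF outcomes within each site and research group
monthly = pd.crosstab(
    [tracking['site'], tracking['ra_month'], tracking['ra_status']],
    tracking['bif_outcome'],
)

# Cumulative counts by site and research group
cumulative = pd.crosstab(
    [tracking['site'], tracking['ra_status']],
    tracking['bif_outcome'],
)
print(monthly)
print(cumulative)
```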


Sample build-up plan

The duration of the enrollment process is likely to vary across sites depending on the number of vouchers each administers. Larger agencies should be able to reach their sample goals more quickly than smaller agencies because they conduct a larger number of redetermination interviews (the point at which the new policy will begin for a household) each month. Overall, the goal is to complete the enrollment process in no more than one year at each site.




Analysis: Measuring site-specific, pooled, and subgroup impacts


The power of the experimental research design will come from the fact that, with an adequate sample size, random assignment ensures that the intervention and control groups will be similar in terms of the distribution of observed and unobserved baseline and pre-baseline characteristics. Thus, post-baseline differences between the two groups can be interpreted as effects of the intervention. (The study design paper addresses issues of sample size and statistical power.)


The basic estimation strategy used here is analogous to the methodology MDRC and other social science researchers have used in social experiments over the last few decades to generate credible results. The analysis will compare average outcomes for the intervention and control groups, and will use regression adjustments to increase the precision of the statistical estimates. In making these adjustments, an outcome, such as “employed” or “moved,” is regressed on an indicator for intervention group status and a range of other background characteristics. The following basic impact model would be used:


Y_i = α + βP_i + δX_i + ε_i


where: Y_i = the outcome measure for sample member i; P_i = one for program (or intervention) group members and zero for control group members; X_i = a set of background characteristics for sample member i; ε_i = a random error term for sample member i; β = the estimate of the impact of the program on the average value of the outcome; α = the intercept of the regression; and δ = the set of regression coefficients for the background characteristics.


A linear regression framework or a more complex set of methods could be used, depending on the nature of the dependent variable and the type of issues being addressed. For example, logistic regressions could be used for binary outcomes (e.g., employed or not); Poisson or Negative Binomial regressions could be used for outcomes that take on only a few values (e.g., months of employment); and quantile regressions could be used to examine the distribution of outcomes for continuous outcomes.
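
For illustration only, the impact model could be estimated as in the sketch below. The variable names and simulated data are hypothetical, and the production analysis may well be carried out in another statistical package (MDRC’s standard cross-project QC code, for example, is written in SAS); the sketch simply shows the regression-adjusted comparison the text describes, for both a linear and a logistic specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated analysis file: program indicator, baseline covariate, binary outcome
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    'P': rng.integers(0, 2, n),   # 1 = program group, 0 = control group
    'X': rng.normal(size=n),      # a baseline background characteristic
})
df['employed'] = (rng.random(n) < 0.40 + 0.05 * df['P']).astype(int)

# Linear (OLS) impact model: the coefficient on P estimates the average impact
ols = smf.ols('employed ~ P + X', data=df).fit()
print(ols.params['P'])    # regression-adjusted impact estimate

# Logistic alternative for a binary outcome
logit = smf.logit('employed ~ P + X', data=df).fit(disp=False)
print(logit.params['P'])  # impact expressed on the log-odds scale
```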


Multiple measures. As shown in the table below, the full evaluation will examine many outcomes across a number of domains. When multiple outcomes are examined, the probability of finding statistically significant effects increases, even when the intervention has no effect. For example, if 10 outcomes are examined in a study of an ineffective treatment, on average one of them will appear statistically significant at the 10 percent level by chance alone. While the statistical community has not reached consensus on the appropriate method of correcting for this problem, we would address it by identifying a set of primary outcomes versus secondary outcomes and by giving priority to statistically significant findings that are part of a pattern over those that appear to be isolated statistically significant effects.


Site-specific and pooled impacts. The impact analysis will estimate the effects of the alternative rent model for each site separately and for all sites combined. As discussed later, the expected sample size at each housing authority should provide adequate statistical power for producing policy-relevant site-specific impact estimates. Site-specific estimates will allow the analysis to test the “robustness” of the alternative rent model; that is, each site will provide a type of independent replication test. If the results show that the model’s impacts are positive and consistent across these locations, it would provide evidence that the model can succeed in a variety of locations and for different types of tenants. Alternatively, if large and statistically significant variations in sites’ impacts emerge, it will be important to try to understand what local conditions and/or implementation factors may be generating that variation in the model’s effectiveness. Even though it would be impossible to identify those causes definitively, it may be possible to generate empirically grounded hypotheses about the possible causes, and to rule out certain explanations.


The impact analysis will also pool the housing agency samples to produce impact estimates for all sites combined. Pooling would increase the precision of impact estimates, which becomes especially relevant when estimating effects for subgroups of the full sample. The Washington, DC, site may be excluded from the pooled estimates because its biennial recertification policy differs importantly from the annual policy that the control group will face in each of the other sites, and also because it differs from current national policy. However, a final decision will depend on factors such as how many control group households appear to be affected by their understanding of the biennial policy.


Subgroup impact estimates. Both theory and findings from other evaluations of similar programs (e.g., those that tested work incentives for low-income populations and for voucher recipients in particular) suggest that changes to the rent structure may have different effects for different types of families. For example, the alternative rent model may have larger effects on tenants who are not employed at the time of their recertification interview, or who are working part time, since it is often easier for individuals to enter work or increase their work hours than for those already working to advance to higher-wage jobs. The new policy may also have different effects depending on a tenant’s barriers to work or preparation for work.


The evaluation will thus investigate whether changes in the rent structure have more pronounced or different effects for particular subgroups. Subgroup impacts can be calculated in several ways, and prior to the impact analysis, the evaluation team will finalize the method and prioritize the subgroups that are “confirmatory” and the remainder that are “exploratory.”
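
One common way to estimate subgroup impacts, shown here only as a sketch of the general technique rather than the team’s finalized method, is to add a baseline subgroup indicator and its interaction with program status to the basic impact model:

Y_i = α + βP_i + γS_i + λ(P_i × S_i) + δX_i + ε_i

where S_i equals one for members of the subgroup and zero otherwise. In this specification, β estimates the impact for households outside the subgroup, β + λ estimates the impact for the subgroup, and λ tests whether the two impacts differ.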


The confirmatory subgroups will be specified in advance, in order to avoid the potential for data mining and the problem of multiple comparisons. Subgroups can be chosen as confirmatory because prior theory suggests that program effects may differ along a subgroup dimension, because differences in impacts along a given dimension have been found in prior evaluations, or because a given subgroup is of great policy interest. As part of the Task Order 1 design work, we will work with HUD to define the subgroups of interest, using data collected from the BIF and 50058 form and/or administrative records data.



Part II: Technical assistance

As part of Task Order 1, MDRC’s technical assistance effort will include the following areas: design of the alternative rent policy, guidance around software modifications to support implementation of the new rent rules and tracking and reporting requirements, new policy implementation, research data collection, and general monitoring.

General monitoring


MDRC proposes to deliver technical assistance by (1) assigning one or more MDRC team members to each study site to serve as a general liaison and point of contact throughout the evaluation,5 and (2) drawing on the expertise within our partnership, which includes housing authority experts, to address specific implementation and operational issues.


The MDRC site liaison will monitor PHA activities during each phase of the study, both in Task Order 1 and into the future task orders, in the event they are awarded to MDRC. Program monitoring activities would be structured around regular check-ins with management staff to review decisions and discuss the progress of the initiative. More frequent check-ins will be organized in the design and early implementation stages, as needed, with administrators and frontline staff for more detailed design and implementation discussions (as evidenced by the process to date). The need for more frequent follow-up will also signal where more specialized assistance on operational issues from the team might be necessary. Our monitoring plan also includes periodic site visits for direct interaction with and observation of site staff. In general, expertise within the MDRC partnership will be tapped to address issues concerning the design and operation of the alternative rent model, the implementation of random assignment, tenant recruitment, and educating tenants on the new rent rules, as described below.


Whether and how rent reform influences voucher holders’ behavior depends in part on what tenants actually understand about the new rent rules. As part of Task Order 1, the MDRC team will interview approximately 15 participants per site, largely in the form of focus groups, to examine their understanding of the rules. These interviews will be conducted as part of our technical assistance work and will help us provide formative feedback to the HAs on their communication strategies.6


If subsequent task orders support longer-term technical assistance, future rounds of site visits to each of the PHAs would continue to include additional observations of staff efforts to implement reforms and interviews with key program staff to learn about the operational strategies, challenges, and lessons about implementing the rent reforms.



Program design and local approvals


As noted in an earlier section, the MDRC team has worked closely with HUD and the local housing agencies to design an alternative rent policy. As part of the design process, the MDRC team, HUD, and the sites together reviewed a range of possible rent reform ideas, and the design team has defined an alternative rent model that includes several core features that all housing agencies would implement, while leaving some room for housing agency discretion in adapting those features to local conditions.

The MDRC team has also worked with each housing authority to develop appropriate descriptions for their respective MTW activity plans. As necessary, MDRC will assist the sites with other documents that may be required to obtain local and HUD approvals.


System modification / enhancements


The alternative rent policy will require a rent calculation and tracking system that supports the new set of rules around which the policy has been designed. Along with the housing authorities, the MDRC team will work with the software vendors for the study HA systems (Emphasys and Yardi) to identify all the system requirements and the modified or supplemental functionality needed to administer the Rent Reform activities (income reporting, rent calculation, recertification periods, and hardship tracking) in a manner consistent across all study sites.


As part of this effort, MDRC will work with each HA to:


  • Finalize software requirements

  • Review scope, time, and costs

  • Coordinate procurement process with HUD, if necessary

  • Review system support requirements

  • Monitor software development

  • Assess implications of the development effort on the study roll-out schedule

  • Test and validate relevant modules / functionalities

  • Review documentation (end-user documentation, technical manuals, trainings)


Study enrollment process


MDRC and the HA partners have developed a strategy for conducting batch random assignment, details of which are described in a section above. MDRC team members will work closely with housing authority staff to implement the agreed-upon enrollment process.


Communication tools and materials


How the alternative rent model is explained and communicated to voucher holders will be fundamental to tenants’ understanding of the study and the new rent rules – and their understanding of the new rent rules is key to whether they change their work behaviors in response to the incentives established by those rules.


As part of getting sites ready to implement the new rent rules, MDRC and HA staff will review, develop, and update a variety of materials:


  • Recertification packets. These packets, which include voucher renewal information and forms, will need to be revised for households assigned to the new rent policy group (for instance, given the use of retrospective income for calculating rent, households assigned to the program group will need to bring in additional documentation to complete their recertification). MDRC is working with the sites to review the changes and ensure revised recertification packets are available in time for distribution.


  • Messaging. The MDRC team will also assist PHAs with the development of point-of-contact messaging (recertification interviews and briefings), written materials (newsletters, mailings), and other less traditional approaches (events, on-hold telephone messages). Working with HA staff, MDRC has identified various types of communication materials that will be necessary to educate various stakeholders (voucher households, landlords, program staff, and other key stakeholders in the community) about the demonstration, the new rent rules, and the related study. In addition, staff conducting orientation sessions with tenants will use a PowerPoint presentation to go over the key features of the new rent policy. Graphics and other visuals displaying the rent structure and how a family’s employment and income can affect their rent will be an important part of this effort.


  • Ongoing communication. Communication efforts will not end once families are enrolled in the study – ongoing communication will ensure that tenants understand the new rent policies and their implicit incentives, and that tenants are reminded of the opportunities, as well as their responsibilities, associated with those new policies. This effort will need to continue into the later evaluation phases/task orders.


The MDRC team will thus monitor the implementation of local communication strategies and assess whether the messages are getting across to residents. If the messages are not being understood, or if tenants are raising the same sets of concerns, we will provide guidance to the HA to improve communication plans. Technical assistance will also address the need for all communications to be appropriate for the audience and adult learning styles. Data to inform this early assessment will come from information gathered during the technical assistance team’s observations of program operations and interactions with PHA staff. In addition, it will come from preliminary research evidence collected from a small number of in-depth interviews with tenants across the sites.

Program Implementation


Along with the other operations tasks described above, the MDRC team will provide technical assistance to each of the housing agencies to help train their staff on the implementation of the alternative rent policy and on the procedures necessary for the evaluation. The MDRC team will also monitor the sites’ experiences in implementing the new policy to help ensure that the new rules are being correctly applied, and that the differentiation in treatment between the intervention and control groups is maintained.


In most demonstration projects, the TA teams support PHA staff through the final phase of the program to ensure that the delivery of the treatment remains strong. If MDRC is selected to lead subsequent task orders, a future update to the DCAP will more fully describe this role.



Baseline data collection7


The technical assistance and data teams will work closely with the HA staff so that they understand their roles and responsibilities with respect to randomization and the baseline data collection. Leading up to the launch of the demonstration, MDRC will work with the sites to integrate the random assignment process into their program operations (this part of the work is also described under the section on System Modification/Enhancements). MDRC will coordinate and conduct an on-site training on the study, the alternative rent policies, and the baseline data collection process. During these trainings, the evaluation coordinators and agency staff will have the opportunity to practice describing and answering questions regarding the study, random assignment, and the new rent rules. Upon completion of training, the agency staff will understand how to identify each enrollee’s research status and the next steps for each program and control group member.


MDRC will produce a user-friendly manual with step-by-step baseline data collection procedures. After random assignment begins, the TA team will closely work with the PHA to assess progress and provide additional training as needed. MDRC will produce periodic reports on sample characteristics, and confirm that there are no systematic differences between research groups.


Specific details of this process are provided below.


Set up the BIF application for data collection

As described above, the Rent Reform project will randomly assign households in batches, using monthly rosters of eligible households that are in the scheduling process for their next reexamination. Collection of Baseline Information Form (BIF) data will take place during households’ reexamination meeting a few weeks or months following random assignment.

The procedures, training, and preparation for collecting BIF data for Rent Reform will be similar in many ways to MDRC projects that collect BIF data immediately preceding random assignment (RA) to a program or control group. Housing authority staff members will log in to a slightly modified version of MDRC’s online RA application, which will be programmed either to forgo generating random assignment results or to return the same result for each household as it already received from the previous batch random assignment procedure. The MDRC technical and Operations administrators and staff members who normally set up and monitor use of MDRC’s combined RA and BIF application will also perform these tasks for the Rent Reform online BIF application.

The evaluation team is presently finalizing the content of the Rent Reform Baseline Information Form. Once the form is finalized and approved for this evaluation, it will be used to design the online data entry screens and underlying data entry rules, including the sequence of data items, the designation of required and optional fields, and programming of skip patterns.

The system will include a data entry screen for collecting contact information for study participants and one or more relatives or close friends of study participants. Similar screens have been used for several other projects and will be adapted for use by Rent Reform.

Before conducting training of housing authority staff members on BIF data collection procedures, the Rent Reform evaluation team will test the BIF data entry screens and underlying programming code by entering records of fictitious study participants. As usual during testing procedures, team members will enter data correctly for some records. For other records, they will attempt to make as many mistakes as possible in data entry to test system responses. The Rent Reform Data Manager will log issues raised by the testing and work with the RA Manager to update the system to address these problems. Retesting will then take place until all problems are resolved.
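
As an illustration of the kinds of rules being tested, the sketch below encodes one required field, one skip pattern, and one range check, and shows how a deliberately flawed fictitious record would be flagged. The field names and rules are hypothetical; the finalized BIF content will determine the actual data entry rules.

```python
def validate_bif(record):
    """Return a list of data entry problems found in one BIF record.

    The fields and rules here are illustrative, not the finalized BIF.
    """
    problems = []
    # Required field
    if not record.get('date_of_birth'):
        problems.append('date_of_birth is required')
    # Skip pattern: hours worked applies only to employed respondents
    if record.get('employed') == 'no' and record.get('hours_per_week') is not None:
        problems.append('hours_per_week should be skipped when employed = no')
    # Range check on a numeric field
    hours = record.get('hours_per_week')
    if hours is not None and not (0 < hours <= 80):
        problems.append('hours_per_week is outside the plausible range')
    return problems

# Fictitious test record that intentionally violates the rules
bad_record = {'employed': 'no', 'hours_per_week': 90}
print(validate_bif(bad_record))
```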

Work with site administrators and Intake staff members to set up online RA procedures and training

Prior to the start-up of random assignment for Rent Reform, members of MDRC’s Rent Reform Data and Operations teams will work with the RA Manager, Data Transfer Specialist, and Data Security Officer to set up our online application for the project. The team will also work with housing authority administrators and designated technical staff members to conduct the following preparatory steps for using the application:


  • Make sure that the site has a sufficient number of computers with Internet access and equipped with Internet Explorer and adequate virus protection software. Specified computers will be in a no/low-traffic private area to ensure that the participant’s personal information is kept confidential.

  • Select staff members who will log into MDRC’s application. Forward their names and contact information to MDRC’s RA Manager. Only designated staff will conduct random assignment using MDRC’s BIF application on appropriately secured computers.

  • Test the connection to MDRC’s BIF application address (https://secure.mdrc.org).

  • Decide on the days of the week and hours of the day during which the site will collect BIF data.

  • MDRC will provide designated staff with unique usernames and passwords to log in to MDRC’s secure site to access the BIF application.

    • When staff log in to the system the first time, they will have an opportunity to change their password.

    • Username and passwords should not be shared with anyone.


MDRC’s Rent Reform team members will prepare and distribute to all staff members involved in collecting BIF data a BIF Data Collection Procedures Manual. The BIF Data Collection Manual will include the following core components:

  1. Detailed step-by-step instructions for looking up study participants in the system and then collecting BIF data for them.

  2. A copy of the paper or online BIF and explanations of measures and values, if needed.

  3. Detailed scripts for site staff to use in different situations when interacting with study participants, e.g., Q&A’s for explaining why BIF data in general and particular BIF questions are needed by the study.

  4. Screen shots of the BIF application and reports.

  5. A copy of the Rent Reform study explanation document and instructions for households that wish to opt out of participation in the research.

  6. Explanations of data security requirements.

  7. Procedures for recording which households:

    • Declined to complete a BIF.

    • Ported out or exited the Housing Choice Voucher program before their next scheduled reexamination.

  8. Procedures for determining when the head of household has responded to the minimum number of questions needed to designate the BIF as complete and the household as eligible to receive a gift card.

  9. Procedures for issuing a gift card to BIF completers and recording who received the card.

  10. One-page “cheat sheets” for site staff.


MDRC’s Rent Reform Operations team will work with the Rent Reform Project Manager, Data Manager, and Operations team members to organize formal training sessions for housing authority administrators and staff members. In preparation for conducting the training sessions, MDRC’s RA Manager will issue passwords and login instructions to authorized housing authority staff members. Following completion of training sessions, authorized housing authority administrators and staff members will be encouraged to practice logging into and recording data in MDRC’s online BIF application, using made-up data.

During the training sessions, MDRC staff will:

  • Introduce everyone’s role in the BIF data collection process.

  • Review the contents of the BIF Data Collection Manual and its purpose.

  • Walk through each step within the manual.

  • Practice reading each script with the staff.

  • Read through the Study Explanation (“opt out”) form.

  • Practice completing the BIF online or on paper.

  • Practice completing tracking forms on BIF completion or decision to forgo completion.

  • Review data security policies and what should be done to prevent data breaches.

Following completion of the training, staff members will be encouraged to continue testing the BIF Data Collection system by recording data on fictitious households and to report problems in data entry or system response. As during MDRC’s internal testing of the online BIF, the Rent Reform Data Manager will log issues raised by the testing and work with the RA Manager to update the system to address these problems. Retesting will then take place until all problems are resolved.

Set up back-up procedures for collecting BIF data

MDRC’s online RA application is usually up and running throughout the day, although it is occasionally disabled as part of a general system shutdown, typically to facilitate installation of new network hardware. More often, sites lose their connection to the Internet. When connection to MDRC’s website is lost, housing authority staff members and participating heads of household may complete a paper BIF. The housing authority staff member would then input the data from the paper BIF into MDRC’s RA database, when the connection is restored. Some problems in data entry may occur if the head of household is no longer present when the staff member enters the data from the paper BIF (for example, from outlier values or failure to follow skip patterns). MDRC’s online system will automatically flag these problems and require correction before saving the data. At that point, the staff member will need to attempt to discern appropriate answers from the paper BIF or change responses to missing values (“No Answer”). The Rent Reform BIF Data Collection Manual will provide guidance to staff members for dealing with these problems when entering data from paper BIFs. These issues will also be discussed during staff training.

Monitoring BIF data collection

MDRC’s Rent Reform team will work with the RA Manager to monitor random assignment and collection of RA-related forms and data files. The main components to monitoring RA are:


  • Monitoring the sample buildup using tables and charts

  • Checking the tracking data (which households completed the BIF and which chose to forgo completion)

  • Tracking issuance of gift cards to BIF completers

  • Accessing, checking, and processing BIF data

  • Tracking the number of households dropping out of the study

Once random assignment begins, MDRC Rent Reform Operations staff will set up weekly or bi-weekly check-in meetings with designated site staff (e.g., the site liaison or research coordinator). These meetings will be conducted over the phone and serve to monitor the BIF data collection process and troubleshoot any issues that may arise. During these calls, MDRC Rent Reform Operations team members may learn of exits and new hires among site Intake staff. Rent Reform Operations team members will forward this information to the RA Manager, who will, in turn, deactivate the passwords for departing site staff members and issue new passwords for new site staff members.


Part III: Data Sources, Requirements, and Quality Control


Data Sources8


The Rent Reform Demonstration will rely on multiple data sources. Using these data, the evaluation will include a careful assessment of the implementation, impacts, and benefit-cost results of the new policy.


Baseline Information Form data: Under Task Order 1, MDRC will create an online system to collect responses to the Baseline Information Form (BIF). Briefly, the BIF data will include information such as family composition, income, employment status, perceived barriers to employment, and education levels. Households will be given a $25 gift card for completing the BIF.


HUD 50058 data and other housing data: MDRC will collect PIC data recorded from HUD MTW 50058 forms directly from the housing agencies as part of Task Order 1. All voucher households enrolled in the study will complete or update a 50058 form as part of their initial or redetermination interview at the beginning of the study. Where possible, we will use 50058 data from 1-3 years prior to random assignment to supplement data collected at random assignment and to describe voucher household characteristics, their monthly rent to owner, and their estimated total tenant payment (TTP). Data from later extracts will be used to track changes in tenants’ reported income, rent to owner, estimated TTP, and receipt of vouchers over the course of the follow-up period. In addition to the 50058 data, we will also collect data from the PHAs’ internal reporting systems that are not available in the 50058 data, such as total housing assistance payment (HAP), actual TTP, and reason for termination. MDRC is working with the housing authorities to identify changes that will be required to their existing software.


Administrative Records: In future task orders, administrative data on employment and earnings will be collected. Other data sources, such as administrative data on TANF, SNAP and Medicaid receipt, may also be considered. The data collection methods will vary in form, intensity, and duration, based on (1) how each agency organizes and extracts the data; and (2) whether MDRC has had prior experience in collecting these data for previous and ongoing studies.

The data acquisition steps include:


  1. Negotiate data sharing agreements

  2. Work with agency technical staff members to:

    1. Gather technical information about database organization; key fields and data values for record matching and for recording required outcomes; and number of months and years of pertinent data contained in the database and available for extraction

    2. Collect appropriate database documentation: codebooks, data dictionaries, record structures, and technical manuals

    3. Document the agency’s process for matching study participant identifiers to administrative records databases—specifically, whether the match will be

      1. Direct: For example, SSN of participant (collected at random assignment) matches to SSN of UI Wage database to collect UIW records; or

      2. Indirect: If needed, this match would entail matching the SSN of a participant to an agency’s Cross-Reference file to collect agency CaseNumber. The CaseNumber matches to the agency’s public assistance database to collect SNAP/food stamp and TANF records. Indirect matches are more prone to matching errors. Accordingly, MDRC’s data team would need to implement extra QC checks on the source data to infer whether the matching process worked as expected.

    4. Agree upon strategies for data matching and extraction

    5. Agree upon extract file contents and data formats

    6. Agree upon a data delivery schedule

  3. To enable agency data extraction, send agency a request file containing required participant identifiers. Send the agency additional files over time as new participants enter the study.

  4. To facilitate QC checks on the extraction process, ask the agency to create cumulative extract files or files that overlap for several months or quarters with previously created extract files. Verify that most overlapping data remain the same across compared files (see the sketch following this list).

  5. To facilitate QC checks on source data, ask the agency to send with data deliveries accompanying aggregate reports of record counts, means, and/or sums of outcome data.
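
The overlap check in step 4 might look like the following sketch: two successive extract deliveries are merged on the records they share, and any values that changed between deliveries are flagged for follow-up with the agency. The file layout and field names (SSN, quarter, earnings, as in a UI wage file) are illustrative assumptions.

```python
import pandas as pd

def compare_overlap(old_extract, new_extract, keys=('ssn', 'quarter'), value='earnings'):
    """Merge two extract deliveries on person and quarter, and flag records
    whose overlapping values changed between deliveries."""
    merged = old_extract.merge(new_extract, on=list(keys), suffixes=('_old', '_new'))
    return merged[merged[f'{value}_old'] != merged[f'{value}_new']]

old = pd.DataFrame({'ssn': ['111', '222'], 'quarter': ['2014Q1', '2014Q1'],
                    'earnings': [3500.0, 0.0]})
new = pd.DataFrame({'ssn': ['111', '222'], 'quarter': ['2014Q1', '2014Q1'],
                    'earnings': [3500.0, 1200.0]})  # late-posted wages for '222'

print(compare_overlap(old, new))  # one flagged record to investigate with the agency
```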


MDRC has extensive experience processing administrative data and has received data in many different record structures and file formats. Thus, although agency administrative data will need to be readable, MDRC will not impose a specific record structure or data format on agency extract files.


The exhibit below summarizes which data source is used for each evaluation topic, and under which task order it falls. As described in the research design paper, the evaluation will examine results from the perspectives of both the housing agencies and voucher holders.



INCLUDE DATA SOURCES TABLE HERE




Survey


It is likely a future task order will include a tenant survey. The survey subcontractor will attempt to complete interviews with at least 80 percent of the fielded sample.


Achieving high response rates will be critical for this study. If those who respond to the survey differ from those who do not, net-impact estimates derived from survey data may be misleading. Based on Work Rewards and other national experience, MDRC understands the challenges of contacting HCVP participants, some of whom may be reluctant to participate in surveys. Thus, as discussed below, the survey assumes significant field locating efforts, essential to achieving high response rates for both program and control group members. Additional locating assistance will be sought from the friends and relatives listed as alternative contacts by the sample members on the BIF, as well as from the PHAs who may have contact information on the respondents.


Survey respondents will be offered a gift card (typically $25-$30) for completing the survey. Based on extensive survey experience, MDRC expects the first 60 percent of respondents to receive this incentive payment after completing the interim survey interview. Further, MDRC expects that the final 20 percent, typically the harder-to-reach cases, will be offered a higher incentive payment.


To facilitate their work, MDRC will forward to the survey contractor (using a secure, encrypted data transmission application) the following data sources:


  • Participant address and phone numbers recorded at BIF data entry

  • Additional contact information (for other family members and friends) recorded at BIF data entry

  • Unit Address and Mailing Address (if different) from matches to PIC 50058 data – collected around the time of participants’ random assignment and periodically thereafter


Tracking


Because of the mobility of the population to be studied and the need to ensure high and comparable response rates for both the control and program groups, tracking will be a critical component to ensure the success of the subsequent survey effort. The tracking efforts will occur between an individual’s random assignment and their re-contact for the follow-up survey, approximately three years later. During this period, tracking activities will help maintain up-to-date contact information for participants. Changes will be carefully documented in a database that tracks the history of changed fields, preventing reversions to out-of-date information and maximizing the amount of information available for future tracking activities. In the event this work is funded, a later document will describe the tracking effort in more detail.


Survey Scope and Timing


The survey will give priority to collecting information that is unique and that complements, rather than duplicates, the kinds of information obtained from administrative records and program data. In addition, to the extent appropriate, we will draw from the survey instruments that we created and fielded to program and control group members in the Work Rewards and Family Self Sufficiency evaluations. These instruments cover many of the same topics that are critical for the Rent Reform study. Key topics for the Rent Reform survey might include: job characteristics; income; material hardship and family well-being; savings, debt, and financial behaviors; household demographics; housing circumstances; and, for the program group, understanding and perceptions of the new rent rules (also see exhibit in the Data Sources section).


While the exact scope of the survey will be determined in a future task order, coverage of topics could take about 40-50 minutes to complete. As with other MDRC surveys, sample members will be informed that they can choose not to participate in the survey or not to answer certain questions. The survey subcontractor will use a mixed-mode data collection strategy, combining Computer-Assisted Telephone Interviews (CATI) with field-initiated cell phone interviews. The sample will initially be released to the survey firm’s CATI center for approximately one month before the data collection manager releases portions of the sample to the field, except for cases with invalid telephone numbers, which will immediately be released to tracking and the field as necessary. Mixed-mode (CATI/field) administration is projected for four months. While in the field, survey staff will locate the sample and send updated contact information to the CATI center or have the respondents call into the CATI center, using their dedicated cellular phones, to complete the survey. Thus, all completed interviews will be completed in the CATI center. Additionally, while the sample is being worked in the field, the CATI interviewers will continue to track, locate, and call sample members to complete additional interviews.


The research team will work with the survey contractor to develop a codebook and accompanying CATI data entry instructions for the survey. In developing these products, MDRC and the contractor will determine the format of survey responses (character, numeric, date…); the range of valid values; strategies for recording missing values (don’t know, refused…); and, for certain types of measures, create alternative measures for approximating values when the respondent does not know or won’t report actual values (for example, reporting total monthly household income as “between $1,001 and $1,250” instead of reporting an actual amount.)
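
A codebook entry of the kind described might be specified roughly as follows. This is an illustrative sketch only; the question names, value codes, and missing-value conventions will be negotiated with the survey contractor.

```python
# Illustrative codebook entries: field type, valid values, and missing-value codes
CODEBOOK = {
    'q12_employed': {
        'type': 'numeric',
        'valid': {1: 'yes', 0: 'no'},
        'missing': {-8: "don't know", -9: 'refused'},
    },
    'q30_hh_income_bracket': {
        # Bracketed fallback used when a respondent will not report an exact amount
        'type': 'numeric',
        'valid': {1: '$0-$1,000', 2: '$1,001-$1,250', 3: 'more than $1,250'},
        'missing': {-8: "don't know", -9: 'refused'},
    },
}
```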


The survey contractor will store data from survey responses in the contractor’s proprietary database and then extract the data to an agreed-upon file format (most commonly, a SAS or SPSS system file, Excel spreadsheet, or .csv file). As usual, survey data will be standardized across sites, although they may vary slightly between the Spanish-language and English-language versions of the survey.


Time Study


A time study is another potential data source for this study. If implemented as part of a future task order, the time study would be one way to measure employee burden in conducting rent calculations. In Task Order 1, MDRC would collect basic organizational and staffing information that would be needed in order to identify appropriate staff to participate in the study, and to prepare the instrument that would be administered.


A time study would entail asking PHA program staff to maintain a timesheet on which they would record, using a special set of codes, how much time they spent on pre-specified activities, including key components of the rent calculation process, over the course of each day during a short period, preferably about two weeks. Ideally, it would be administered at two or more points in time. The codes would make it possible to determine how much time during a typical day the staff spent on specified activities or functions, such as the extent to which PHA staff have contact with clients, the characteristics of those contacts (for example, client characteristics, who initiated contact, topics covered in the interaction), and the types of activities occupying PHA staff when they are not having contact with clients. This method would allow us to obtain estimates of staff time allocations from a large number of staff over many days, and in a standardized form across sites, allowing for statistical summaries and comparisons. If funded to implement this task, MDRC will develop a more complete plan for the time study.
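
To show how the coded timesheets would support standardized summaries across sites, here is a minimal sketch of the aggregation. The activity codes and the data are invented for illustration; the actual code set would be developed in the full time-study plan.

```python
import pandas as pd

# Hypothetical coded timesheet entries (RC = rent calculation,
# CC = client contact, OT = other activities)
timesheets = pd.DataFrame({
    'staff_id': ['S1', 'S1', 'S2', 'S2'],
    'site':     ['Lexington', 'Lexington', 'Lexington', 'Lexington'],
    'activity': ['RC', 'CC', 'RC', 'OT'],
    'minutes':  [95, 40, 120, 60],
})

# Total minutes by activity within each site, and each activity's share of all
# recorded time, the kind of standardized summary the text describes
by_activity = timesheets.groupby(['site', 'activity'])['minutes'].sum()
shares = by_activity / by_activity.groupby(level='site').transform('sum')
print(shares)
```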


PHA Financial Data


In order to build accurate estimates of the costs incurred by PHAs to administer the new rent system, and to estimate the savings relative to the current system, the full evaluation will need substantial information on PHAs’ staffing structures and time use (as described above), operating procedures, information systems, and expenditures. Over the course of the evaluation, we would propose to develop cost estimates for the administration of the current rent system and the alternative rent system through interviews with PHA staff and analyses of FASS and PIC data, taking into consideration, at a minimum, the following factors: (1) organizational structure and the ratio of staff to vouchers; (2) business processes (eligibility determined through face-to-face interviews or by mail, paper or electronic data management systems, etc.); (3) volume of unscheduled activities (moves, interims, turnover, etc.); (4) quality control and SEMAP ratings; and (5) time-task analysis for key, routine activities.


MDRC expects to begin collecting information on these items under Task Order 1, as PHAs initiate the changes in their rent systems. After becoming more familiar with the PHAs’ current approaches and likely adaptations of the alternative rent model through our TA work, we would prepare a checklist of relevant documents for each PHA to provide to the research team. Although analyses of this material would have to wait for a subsequent task order, MDRC will begin compiling relevant materials as part of the first task order, recognizing the difficulty of obtaining accurate information retrospectively.


Quality-Control Procedures


MDRC has extensive experience in performing quality control (QC) checks on administrative records and survey data. We have developed standard cross-project protocols and SAS code for identifying the most serious and most common data quality issues, including programming errors by MDRC staff; we will utilize these tools for the Rent Reform study.


Before conducting QC checks, MDRC’s Rent Reform data team members will review pertinent data documentation and other information about source data acquired from providers. Based on our knowledge of provider procedures in data collection, database management, and data extraction and our knowledge of the Rent Reform calculation rules for TTP and HAP, MDRC’s Rent Reform data team members will develop expectations about how incoming source data should appear. Expectations concern file sizes, numbers of records, content of files and records, and likely ranges of values for key outcomes. We also incorporate knowledge about the background characteristics of study participants and expectations about their likely outcomes into our expectations about data quality.


For the most part, QC checks involve ways of searching for anomalies in the data, defined as patterns in the data that do not fit our expectations. Once we identify an anomaly in the data, we perform follow-up checks to try to determine the cause of the problem. We summarize our findings about data problems in technical memos. Where needed, we contact technical staff members at the providers, share results of our QC checks (through secure data exchange), work together to discover the source of the problem, and work out a reasonable solution. Typically, solutions to data QC problems involve providers extracting an updated version of one or more source files. Upon receipt of updated files, MDRC data team members repeat the series of QC checks to determine if problems were fixed. In extreme situations, we repeat this process more than once until we discern that the data are acceptable for program monitoring and for research.
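As a concrete illustration, a first-pass anomaly check might compare record counts and dollar totals by calendar month against documented expectations. The following minimal SAS sketch uses hypothetical library, file, and field names.

  /* Count records and sum dollars by benefit month; compare the   */
  /* output with provider reports and our documented expectations. */
  proc sql;
    create table work.qc_month as
    select benefit_month,
           count(*)         as n_records,
           sum(benefit_amt) as total_dollars
    from rrdata.snap_extract
    group by benefit_month
    order by benefit_month;
  quit;

  proc print data=work.qc_month; run;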


QC checks with random assignment data


The Rent Reform project will conduct random assignment using monthly rosters of eligible households who are scheduled for a reexamination. For three sites, MDRC will receive extracts of random assignment data from Emphasys after random assignment occurs, and in the fourth site, MDRC will conduct random assignment using rosters received from the housing authority.

MDRC will conduct the following QC checks on random assignment data:

  • Check fields on random assignment records to confirm that each household has a valid SSN and meets the study’s eligibility criteria.

  • Per agreement with Emphasys and the housing authorities, all three parties will produce and compare counts of the number of households randomly assigned to each research group each month.

  • We will also compare the number of households randomly assigned with housing authority reports of the number of eligible households, in order to determine whether some households should have been randomly assigned but were not.

  • Check for duplicate SSNs and Household ID Numbers (EntityIDs); this check will also detect households that were accidentally randomly assigned twice (see the sketch below).

  • Checks on the randomization process

    • Review random numbers and sort sequence generated by the randomization programs

    • Check for large differences by research group in participant and household characteristics

Following random assignment, we will also check housing authority household data (TTP calculations) to determine whether any households are being treated as members of the opposite research group by mistake.
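A duplicate check of the kind listed above might be sketched in SAS as follows; the file and field names are hypothetical.

  /* Output every random assignment record whose SSN appears more than once. */
  proc sort data=rrdata.ra_records out=work.ra_byssn;
    by ssn;
  run;

  data work.dup_ssn;
    set work.ra_byssn;
    by ssn;
    if not (first.ssn and last.ssn);  /* keep records with repeating SSNs */
  run;

The same pattern applies to Household ID Numbers (EntityIDs).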


QC checks with Housing Authority administrative data


Implementing the new Rent Reform rules for calculating TTP and HAP requires a fair amount of system redesign as well as training of housing authority staff members to use the system correctly. As with any new system, there are several kinds of problems that all parties involved need to watch out for:

  • System errors. Examples:

    • Incorrect calculations of retrospective and current/prospective income, TTP, HAP, etc.

    • Wrong decisions in setting TTP when choosing among the two income calculations and the minimum rent.

    • Failure to identify a hardship situation.

    • Incomplete system design: HCV tenants report ambiguous situations that affect income and TTP calculations and that cause the system to make inconsistent or unpredictable calculations or decisions.

    • Errors in transforming unique Rent Reform fields and values to required PIC fields and values for reporting to HUD.

  • Human error. Examples:

    • Using TTP calculations for the wrong research group.

    • Using income sources for the wrong months when calculating retrospective or current/prospective income.

    • Including benefits when calculating retrospective income when the household currently does not receive them.

    • Including income from a household member who is no longer present in the household when calculating retrospective income.

    • Increasing TTP as a result of an interim reexamination.

    • Conducting two or more interim reexaminations for income changes during the same year.

The effort to minimize Rent Reform calculation errors begins with pre-rollout scenario testing by the software developers, the Research team, and housing authority administrators and staff members. Testing procedures will include multiple examples of common and relatively rare situations for income and TTP calculations. The staff involved will calculate TTP and HAP by hand (using a spreadsheet, a calculator, or pen and paper) and will compare their results with the system’s calculations. If initial testing reveals problems in the system design or in team members’ instructions to developers, or reveals an ambiguity that was not accounted for, the developers will update the applications and we will retest. System rollout will occur only after all issues are resolved.

In addition, the applications will include QC modules, which will save calculations for a sample of households. Periodically, housing authority administrators and staff members and members of the Research team will inspect the data in the QC modules to determine if the system is making the correct calculations. Research team members will also periodically conduct QC site visits, during which they will (for a sample of households) inspect documentation that households provide for reexaminations; simulate the income, TTP, and HAP calculations; and compare the results with the calculations recorded in the online system.

MDRC data team members will perform an additional QC step when receiving extracts of housing authority data on a regular basis. They will recreate income, TTP, and HAP calculations, using fields that store the components of these calculations and compare the results with the total income, TTP, and HAP that the system calculated for the household.
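That recomputation step might resemble the minimal SAS sketch below; the component field names are hypothetical stand-ins for the fields the housing authority systems will actually store.

  data work.qc_income;
    set rrdata.ha_extract;
    /* Recompute total retrospective income from its stored components.   */
    retro_calc = sum(of retro_earnings retro_benefits retro_other);
    /* Flag records where the recomputed total differs from the system's. */
    flag_retro = (abs(retro_calc - retro_income_total) > 0.01);
  run;

  proc freq data=work.qc_income;
    tables flag_retro;
  run;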

Finally, MDRC data team members will likely perform at least one match to HUD PIC data for the Rent Reform study participants and examine whether Rent Reform calculations were correctly translated to standard PIC fields and codes.


Additional QC checks with housing authority data and with other administrative data (UI Wages, TANF, SNAP)


MDRC’s data team will examine the extent to which the following types of problems are present in the administrative records files employed for this evaluation:


  1. Missing Data: One of the most common errors made by data providers is delivering an incomplete data file. The file could be missing sample members, variables, values, or follow-up data. These issues may occur for several reasons, including:

  • programming error by housing authority technical staff members;

  • an error in the extraction of the data, including data filtered or extracted from the wrong data source;

  • incomplete extraction of data – e.g., extracting data from January 2011 onward when MDRC’s data request was for data from January 2010 onward;

  • an error in the creation of the data file for MDRC (for example, outputting data with over 65,000 records to an Excel file in a version prior to 2007);

  • a data provider using an old and superseded MDRC request file (data not extracted for the most recently added study participants);

  • a misunderstanding of the data that were requested.

  2. Data Conversion Issues

Data providers are often required to export data from their systems into a different format, such as Excel or text files, that MDRC can use. Problems include:

  • Transforming key identifiers for matching from character to numeric or from numeric to character format in one or more files.

  • Extracting data into a file format that is inappropriate for the data – e.g., to comma-delimited (.csv) for data with fields that contain embedded punctuation marks. This problem causes the software for reading in the data to misinterpret the values of affected fields and, worse, all fields that follow.

  3. Duplicate Records

Duplicate records, if saved to a file by mistake, can artificially inflate the values of key outcome measures and possibly bias results. In addition to running checks for exact duplicates of records, we run checks for partial duplicates: two or more records with the same data in several, but not all, fields. If we identify partial duplicates, we perform additional checks and, if necessary, follow up with providers to determine whether we should retain both records (as containing unique information) or delete one or both records (as a true duplicate or as a record that was superseded by an updated version).

  4. Unexpected and Out-of-Range Values

Unexpected and out-of-range values may occur for a variety of reasons, including (1) data entry errors (e.g., omitting the decimal point when recording values in dollars and cents); (2) database management errors (e.g., failing to delete duplicate records); (3) data extraction errors (e.g., extracting data from the wrong field); (4) data matching errors (e.g., matching to the wrong person’s data); and (5) problems inherent in the data (e.g., two or more people “share” a Social Security Number).

MDRC’s Rent Reform data team will run initial QC checks to identify records with anomalous values, based on our expectations for ranges of likely values for each type of record. Next, team members would investigate the likely source of unexpected and out-of-range values and work with provider technical staff members as needed to address the problem. We would rerun these QC checks on updated files to verify that the problem was fixed. In some instances, out-of-range values cannot be fixed because the provider does not know the true value of the measure for one or more individuals or households. In these situations, MDRC’s Rent Reform data team would consult with our corporate Quantitative Methods Group experts and then implement an appropriate data imputation strategy or, if recommended, drop the record from some calculations of outcomes.

  5. Data Processed in One File That Are Unexpectedly Changed in a Later File

Some updates to administrative data result from corrections to previously erroneous values in records or specific fields within records. Other times, updates, especially changes to a large number of records, signal errors in the provider’s database management or in data extraction.

  6. Inconsistencies Across Data Sources

MDRC’s Rent Reform data team members will compare values of outcomes for related data from different sources of administrative records, including:

  1. The same type of data from different sources (for example, earnings from UI Wage records and earned income recorded in housing authority or PIC data; public assistance records collected from statewide human services records and public assistance receipt recorded in housing authority or PIC data; and [for study participants who enter the FSS program] service use data collected from housing authorities compared with self-reported service data from survey responses).

  2. Values for different data types that should be related positively or inversely (for example, the values of quarterly earnings and monthly SNAP and TANF benefits).

It should be acknowledged that inconsistencies across data sources may not be fixable (except possibly through imputation) if researchers suspect that the inconsistencies occurred because of underreporting of income by recipients of public assistance or housing vouchers. (Exposing a study participant to extra scrutiny by provider administrators would violate MDRC’s promise to protect participants’ confidentiality.) Accordingly, these types of QC comparisons may only be used to document limitations in the accuracy of the data.

To detect problems of these kinds, MDRC’s Rent Reform data team will perform QC checks such as the following:

  • Run and examine counts of source data by calendar month or quarter (small totals or unusual variations may signal data extraction problems).

  • Compare totals and means of dollar values by calendar month or quarter (same principle; also, unusually high totals or means may signal presence of duplicate records or records with out-of-range values).

  • Compare the proportion of study participants with data extracted to the expected proportion (data matching or data extraction problems).

  • Compare range of months or quarters of data to expected range (data extraction problem).

  • Compare contents (field names and field format) of records to expected content, based on data documentation (data extraction problem).

  • Select a QC subsample and examine individual records for subsample members in source data and through each stage of processing.

  • Examine means and frequency distributions: search for unusual numbers of records with missing values or out-of-range values.

  • Create 1/0 indicator variables for specific types of data problems; run frequency distributions of these problem indicators; use indicators to select and examine records for study participants with specific data problems.

  • Run means and frequencies for selected subgroups and compare values to expected values (for example, mean benefit levels by family size).

  • Compare records for the same study participants and for the same time period extracted over time into two or more files.

  • Run cross tabulations of related variables (for example, UI Wages: ever employed in 2011 x housing authority data: reported earned income in 2011).
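For example, the cross-source crosstab in the final item might be programmed as in the sketch below; the library, file, and field names are hypothetical, and both input files are assumed to be de-identified and sorted by household ID.

  data work.qc_flags;
    merge rrdata.uiwage2011 (in=in_ui)
          rrdata.ha_inc2011 (in=in_ha);
    by mdrc_hhid;
    /* 1/0 indicators for employment or earned income in each source. */
    emp_ui_2011  = (in_ui and uiwage_total > 0);
    earn_ha_2011 = (in_ha and ha_earned_total > 0);
  run;

  proc freq data=work.qc_flags;
    tables emp_ui_2011 * earn_ha_2011 / missing;
  run;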

As in other MDRC projects, Rent Reform data team members will summarize their findings from QC checks of administrative data in a series of detailed technical memos, which we will share with members of the larger Rent Reform team. We will write additional technical memos that discuss the data team’s measure creation strategies, including procedures for imputing or top-coding data or for compensating for data problems in other ways. MDRC’s Quantitative Methods Group has issued guidelines for imputations and for top-coding data, which the Rent Reform data team will follow.


QC checks with self-reported data from Baseline Information Forms


As noted above, Rent Reform program staff members will record Baseline Information Form (BIF) data directly into MDRC’s online BIF database. Completion of the BIF is voluntary. Therefore, MDRC and housing authority staff members will track study participants’ decisions about whether to respond to the BIF. That way, MDRC will be able to distinguish between voluntary refusals to complete a BIF and truly missing BIF records (for example, when the provider loses the connection to MDRC’s website). In addition, MDRC’s Rent Reform data team will work with our corporate Random Assignment team in advance of conducting random assignment to test the data entry safeguards (required field designators, range limits for responses, and automated skip patterns) programmed into the system. We will make sure that these safeguards are working properly before starting random assignment. Finally, our BIF Procedures Manual and training will include guidance to program staff members for recording BIF data items. The training will also focus on efforts to minimize the issues discussed below.


Data problems typically encountered on the BIF in prior MDRC studies include:

  • Low item response rates

  • Inaccurate information due to respondents’

    • Misunderstanding of the question

    • Recall errors

    • Exaggeration

    • Efforts to hide information

    • Especially with attitudinal questions, efforts to give the interviewer the perceived “correct” response

  • Inconsistent answers to related questions (for example, reporting enrollment in employer’s medical plan but also answering “no” to a later question on having medical coverage)

As with administrative records and program services data, MDRC’s Rent Reform data team will perform the following QC checks on BIF data: run SAS procedures (frequency distributions, crosstabs, and means), review online printouts of selected records, and create 1/0 indicators that highlight data problems. We consider measures with non-response rates of 10 percent or higher to be problematic (noted with caveats, if at all, in reports and papers), and measures with non-response rates of 20 percent or higher are unlikely to be used in analysis (for example, in selecting subgroups). We will also run cross tabulations of related measures to determine the rate of inconsistent answers. For some measures, MDRC will create composite versions that combine answers to two or more questions (for example, a 1/0 measure of medical coverage for the study participant that equals 1 [covered] if the participant responded “yes” to either the question on enrollment in an employer’s medical plan or the more general question about having medical coverage).
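A composite measure of this kind could be created along the following lines; this is a sketch with hypothetical BIF item names, where 1 = yes, 0 = no, and . = missing.

  data work.bif_composite;
    set rrdata.bif;
    if emp_plan = 1 or med_cov = 1 then covmed = 1;      /* yes to either item    */
    else if emp_plan = 0 or med_cov = 0 then covmed = 0; /* at least one valid no */
    else covmed = .;                                     /* both items missing    */
  run;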

MDRC’s Rent Reform data team will also run SAS procedures that look for inconsistencies across data sources, including:

  • BIF data and housing authority data: Participant and household characteristics, household composition, income sources, rent levels, and use of housing vouchers at random assignment and during 1 to 3 years prior to random assignment, depending on data availability.

  • BIF data and administrative records data: Employment and benefits receipt at random assignment and during 1 to 3 years prior to random assignment, depending on data availability.

  • BIF and program services MIS data: Where possible, use of pre-employment or other services that participants began prior to random assignment.

  • BIF and Survey data: Employment and educational attainment at random assignment.

As in other MDRC projects, Rent Reform data team members will summarize their findings from QC checks of BIF data in a series of detailed technical memos. Additional technical memos will discuss the data team’s measure creation strategies, including procedures for imputing or top-coding BIF data or for compensating for data problems in other ways.

Following guidelines set by MDRC’s corporate Quantitative Methods Group, Rent Reform data team members will impute missing values for BIF data items included as covariates in regression procedures that estimate program effects. However, when analyzing outcomes and program effects for a specific subgroup, the analysis will be limited to study participants with valid responses to BIF measures used to define the subgroup.


QC checks with Follow-Up Survey responses


A survey contractor will interview study participants, record survey responses into a CATI-system database, convert the data to a format that can be read into SAS, and extract the data to files for MDRC. This sequence of file creation introduces potential data problems that are:

  • Similar to problems with data extracted from administrative records or program services MIS records

    • Missing records

    • Duplicate or partial duplicate records

    • Missing or unreadable fields in extract records

    • Data for the same respondent changed in successive files

  • Similar to problems with data recorded in Baseline Information Forms, but on a much larger scale because of the greater length and complexity of the Follow-Up Survey interview

    • CATI database design errors

      • Designating questions as required incorrectly—wrong questions or wrong respondents (for example, asking control group members questions reserved for program group respondents)

      • Setting the range of valid responses incorrectly

      • Programming skip patterns incorrectly

    • Low item response rates

    • Inaccurate information due to respondents’

        • Misunderstanding of the question

        • Lack of knowledge about the subject (for example, respondents may not know their current balances in savings or checking accounts)

        • Recall errors, especially with start and end dates of service use, employment or other outcomes that occurred a year or more before the survey interview

        • Exaggeration

        • Efforts to hide information

        • Especially with attitudinal questions, efforts to give the interviewer the perceived “correct” response

    • Inconsistent answers to related questions (for example, reporting employment in the Employment module but no earned income in the Income module)

Follow-up surveys may also be subject to item response bias, defined as research group differences in item response rates or in the accuracy of responses to particular questions. For example, despite interviewers’ confidentiality assurances, control group members facing annual reexaminations may be more reluctant to answer questions about earnings, income, and financial assets – or may purposely understate their values – than program group members, who know that increases in income won’t affect their TTP until the next triennial reexamination.

Moreover, survey respondents may demonstrate differences in recall errors, related to differences in the length of time between random assignment and the survey interview date. (Some respondents take longer to find or longer to convince to be interviewed.) This problem may also introduce item response bias if, on average, members of one research group are interviewed sooner after random assignment than members of the other research group.

Finally, more so than with BIF data, many survey outcome measures require processing of two or more component measures, including (1) measures of duration of employment and total earnings over time across several jobs; (2) measures of total income, financial assets, monthly expenses, and debts; (3) composite measures of financial hardship or well-being; and (4) attitudinal scales. These types of composite measures are subject to data problems (such as missing or out-of-range values) that may not appear when component measures are considered individually.

A more fundamental problem with follow-up surveys, survey response bias, can occur because of large differences by research group in the likelihood of completing an interview or when survey respondents in one research group differ considerably in background characteristics from survey respondents in the other research group.


It should be noted that the Rent Reform data team and MDRC’s Survey Unit have extensive experience in working with similar follow-up survey data, for the Work Rewards study and for SaveUSA, a random-assignment study involving provision of matched savings accounts to low- to moderate-income tax filers. For those survey efforts, data team members worked extensively with the survey contractor to review and test the programming of the contractor’s CATI system (including ranges of valid responses for each question and skip logic) before fielding of the survey began. MDRC will repeat this process for Rent Reform. Moreover, the survey contractor will be required to send MDRC weekly survey tracking reports by site and research group; reports on item response problems; and codebooks and additional data documentation. Finally, the survey contractor will be required to send MDRC at least one interim file or data dump within a few weeks of starting to field the survey, so that MDRC’s Rent Reform data team can run a series of QC data checks. All of these pre-fielding or early-fielding activities should greatly reduce the odds of MDRC encountering problems in CATI database design and management or in data extraction for MDRC.


The Rent Reform data team will program and perform similar types of QC checks with the survey data as with administrative data and BIFs – including comparisons of related responses (1) within and across survey modules; and (2) between data sources (for example, comparisons of employment recorded from survey responses and from UI Wage data). In addition, we will run:


  • Crosstabs and means by research group to look for signs of item response bias in response rates or in incidence of unexpected and out-of-range values.

  • Crosstabs and means by indicators of length of time between random assignment date and survey interview date, to look for signs of differential recall based on length of time to interview.

  • Crosstabs of actual and expected sample sizes among measures that are affected by the same skip pattern, to check that the correct respondents were asked follow-up questions based on their responses to previous questions (see the sketch after this list).

  • Frequency distributions and means of component measures to look for missing or out-of-range values in individual measures.

  • Frequency distributions and means of composite measures to look for missing or out-of-range values that only appear when measures are combined.

  • 1/0 and other indicators of data problems for composite measures (for example, counts of missing values among the items of an attitudinal scale).
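The skip-pattern check in the list above might be programmed as in the following sketch, which assumes a hypothetical screener item (everwork) that routes respondents to a follow-up item (hrs_lastjob).

  data work.skipchk;
    set rrdata.survey;
    answered_hrs = (hrs_lastjob ne .);  /* 1 if the follow-up item has a response */
  run;

  /* Expectation: answered_hrs = 1 only where everwork = 1. */
  proc freq data=work.skipchk;
    tables everwork * answered_hrs / missing;
  run;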

As in other MDRC projects, Rent Reform data team members will summarize their findings from QC checks of survey data in a series of detailed technical memos, which will be shared with members of the larger Rent Reform team and with the survey contractor. Typically, follow-up with the contractor occurs within 2-3 weeks of receiving a file, which usually gives the survey contractor time to take corrective actions, if any are needed. Often, these memos include lists of respondents with specific data problems that may have resulted from data entry errors (for example, recording $800 in the pay amount field and “(1) per hour” in the pay period field). The contractor is asked to review audio recordings of the interviews for these respondents and then document required updates to the survey data to fix data entry problems. Depending on the timing of the QC checks and contractor follow-up, MDRC may receive an updated file with corrected data, or MDRC Rent Reform data team members will hard-code the corrections in SAS.


Plans for corrective actions, including imputations or top-coding for missing or out-of-range data, will be documented. A member of MDRC’s Quantitative Methods Group will review the proposed plan prior to implementation. The technical memo will be updated to reflect the proposed data correction plans.


As with BIF data, particular data items will be considered problematic (noted with caveats, if at all, in reports and papers) if 10 percent or more of respondents do not answer the question when answers were expected. It is unlikely that MDRC will report a finding based on a measure with missing responses from at least 20 percent of respondents expected to answer the question. Similarly, MDRC’s data team members would consider as problematic any data item with a research group difference of at least 5 percentage points in rates of item non-response or in the incidence of unexpected, out-of-range, or other data problems. We would be unlikely to report any finding with a research group difference of at least 10 percentage points in the incidence of non-response or data problems.


As in other MDRC projects, a response-bias analysis will be conducted and will focus on the following:


  1. Comparisons of survey response rates by research group;

  2. Formal (regression-based) and informal (crosstabs and means) analyses of differences in background characteristics among program- and control-group respondents (see the sketch after this list);

  3. A sensitivity analysis of program effects that combines reported outcomes from survey respondents with imputed outcomes for non-respondents from each research group; and

  4. Comparisons of estimates of program effects on outcomes calculated with administrative data (for example, total earnings after random assignment, calculated with UI Wage data) for the following samples:

    a. The survey respondent sample;

    b. The survey fielded sample (respondents and nonrespondents combined); and, if applicable,

    c. All study participants who were initially eligible for interview (those selected for the fielded sample plus those eligible but not selected).
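The formal piece of item 2 could resemble the sketch below, which tests whether research group and baseline characteristics predict completion of an interview; all data set and variable names are hypothetical.

  proc logistic data=rrdata.fielded descending;
    /* responded = 1 if the sample member completed an interview */
    class research_group (param=ref ref='Control');
    model responded = research_group age_head hh_size prior_earn;
  run;

Large or significant coefficients on research group or baseline measures would be one signal of potential response bias.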


Most likely, these analyses will be included in a technical appendix to the report with the survey findings, including recommendations for interpreting the results. Researchers and policymakers may have greater confidence in outcomes and program effects estimated from survey data when levels of response bias are low or moderate, but they should treat findings with caution when levels of survey response bias are high. On the rare occasions when levels of survey response bias were extremely high, MDRC researchers have abandoned an impact analysis of survey data and limited reporting of survey results to descriptive analyses of outcomes for program group members.


Data Processing Plan


The evaluation team will follow standard MDRC data management protocols for creating analysis files for research using data collected from the various data sources discussed above: Baseline Information Forms (BIFs), administrative records, and the survey. For each data source, the MDRC data team will read into SAS source data extracted in a variety of file formats: ASCII; comma-delimited (.csv); Excel spreadsheet; Access or SQL Server database table; or SPSS or SAS system file. The data team will use several of MDRC’s standard SAS programs and cross-project SAS macros that aggregate data, create key outcome measures, and transform data files to facilitate analysis of outcomes and program effects.

MDRC’s standard analysis file creation protocols involve running source data through several data processing steps, which include data QC checks (discussed above) and checks that the SAS code is working as planned. The data processing steps lead first to the creation of a rectangular SAS system file, derived from a single data source, with one record per household, containing series of measures of pre-random assignment history (where available) and post-random assignment outcomes. Where necessary for research, each record of these single-source analysis files will contain series of measures aggregated to the household level and separate series of measures aggregated to the person level. The Rent Reform data team will repeat the steps of analysis file creation as new data arrive.


The end products of the data processing steps are a series of “Master Files,” described later in this section, which are rectangular SAS system files with one record per household that merge data from the single-source SAS analysis files. For some Master Files, MDRC’s data team will merge a subset of measures from single-source analysis files—dropping “test versions” of outcome measures or variables that were created to facilitate or check the creation of outcome measures but are no longer needed once the final versions are created. The data team members will run most SAS procedures for calculating outcomes and estimating program effects with data from the Master Files.

The data team will de-identify the single source SAS analysis files and SAS Master Files to protect the confidentiality of study participants. To facilitate this process, the data team will create a project “Cross Reference File,” described later in this section, which will store study participants’ personally identifiable information (PII) together with MDRC’s randomly generated person- and household ID numbers that will stand in for PII when merging files or selecting records.
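De-identification via the Cross-Reference File might proceed along the lines of this sketch, with hypothetical library and variable names; both input files are assumed to be sorted by SSN.

  proc sort data=rrsecure.update_file;
    by ssn;
  run;

  proc sort data=rrsecure.xref (keep=ssn sampleid householdid site radate)
            out=work.xref;
    by ssn;
  run;

  data rrdata.deidentified (drop=ssn first_name last_name dob);
    merge rrsecure.update_file (in=in_data) work.xref (in=in_xref);
    by ssn;
    if in_data and in_xref;  /* keep matched study participants only */
  run;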


The following outline provides more details on steps involved in creating single-source Analysis Files and Master Files.


File creation steps for administrative records and PHA data


  1. Raw Data File creation step

  • Read in source data and convert source data to SAS system file data

  • If necessary, transform source records that contain multiple months or quarters of data to records of a single payment or quarter of earnings per job for one study participant

  • Run the source data through several QC checks

  • Eliminate duplicates and other unneeded records

  • Save a SAS system file that contains personally identifiable information (PII) and multiple records per participant

  • Records maintain source data’s field names and field formats

  • At this processing step, households or study participants without source data have no records on the file


  2. Update Data File step

  • Read in the current and previous Raw Data Step Files

  • For study participants with data in the current and previous files:

    • Append records with additional pre-random assignment history or post-random assignment follow-up

    • Where necessary, replace superseded records with corrected or updated data

  • Eliminate duplicates and other unneeded records

  • Append records for study participants who entered the research since delivery of the previous file

  • Save a SAS system file that contains personally identifiable information (PII) and multiple records per participant

  • Records maintain source data’s field names and field formats

  • At this processing step, households or study participants without source data have no records on the file


  3. Summary Data File creation step

  • Read in Update Data File

  • Merge data with selected records from the Rent Reform Cross-Reference File.

  • De-identify data by replacing all personally identifiable information (PII) with the randomly generated MDRC SampleID Number and randomly generated MDRC HouseholdID Number.

  • Copy additional fields from the Cross-Reference File (site, random assignment date [in UI Wage example: RADate], and household person number [in UI Wage example: suffix of SampleID]) that are needed for later processing

  • To facilitate later processing: For study participants with a record in the Cross-Reference File but no administrative data, create a single summary file record with month or quarter of random assignment in the date field and $0 in the dollar amount field.

  • Create summary measures for the selected level of aggregation by calendar month or quarter. Each record includes

  • Fields that indicate the level of aggregation [in UI Wage example: Outcome]

  • Date field (week, month, or quarter) [In UI Wage example: ErnDate]

  • Total amount per selected unit of time [In UI Wage example: TotalErn]9

  • Save a SAS system file without PII, with the MDRC HouseholdID Number as the primary key for sorting and merging data files and the MDRC SampleID Number as the secondary key. The file contains multiple records per participant.

  • Transform state- or site-specific fields to standard cross-site fields and field formats

  • At this processing step, each study participant has at least one record in the file.


  4. Analysis File creation step

  • Read in and append Summary Step Files from all states or sites

  • Run standard MDRC SAS macros that transform the data from multiple records per study participant into “rectangular” files with one record per household and individual series of measures per study participant [in UI Wage example: Ern1…/Emp1… and Ern2…/Emp2… series].

    • Using the date field and each study participant’s date of random assignment, construct series of outcome measures organized by month or quarter relative to the study participant’s date of random assignment (Month 1=month of random assignment; Quarter 1=quarter of random assignment).

  • Create additional outcome measures, as needed. [in UI Wage example: Emp… series from Ern… series]

  • Create series of summary measures at the household level [in UI Wage example: HHErn…/HHEmp… series]

  • Save a SAS system file (de-identified)


  5. Master File creation step

  • Depending on the analysis required, merge needed data from single-source Analysis Files by MDRC Household ID Number into two or more “Master Files.”

  • Create additional outcome measures, as needed—for example, Total earnings per follow-up year; total measured income per follow-up quarter and per follow-up year; earnings as a percentage of income; measured income as percentage of poverty level.

  • Save a SAS system file (de-identified)


See below for additional information on the likely content of Master Files for the Rent Reform Demonstration.
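To make the Analysis File step concrete, the sketch below converts UI Wage records (assumed, per the Summary step, to hold one record per participant per calendar quarter) into quarters relative to random assignment and then transposes them into one record per participant. It uses the example field names RADate, ErnDate, and TotalErn, plus a hypothetical SampleID sort key.

  data work.ui_rel;
    set rrdata.ui_summary;
    /* Quarter 1 = quarter of random assignment. */
    relqtr = intck('qtr', radate, erndate) + 1;
    if 1 <= relqtr <= 12;  /* keep the first 12 follow-up quarters */
  run;

  proc transpose data=work.ui_rel out=work.ui_wide (drop=_name_) prefix=Ern;
    by sampleid;
    id relqtr;
    var totalern;
  run;

Quarters with no wage record are left missing in the transposed file; a later step would reset them to $0 for participants known to be in the study.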


File creation steps for BIF and survey data


Fewer processing steps will be needed to create Analysis Files with BIF and survey data because the source data files read into SAS will initially have a rectangular structure and contain only one record per household. BIF records will have fields with information at the household level and fields with information for each study participant. The survey will contain records for a single respondent per household. In addition, these source data files contain records for study participants from all sites in a single cross-site file. Finally, the files are typically preprocessed (BIF files by MDRC’s Random Assignment Unit; and survey files by the survey subcontractor) into cumulative files that supersede previously created files.


Accordingly, MDRC will run these data through the following steps:


  1. Raw Data File Creation Step

  • Read source data into SAS

  • Perform QC checks

  • Save a SAS system file


  2. Analysis File Creation Step

  • Read in Raw Data File

  • Address data problems, as needed, following recommendations of MDRC’s corporate Quantitative Methods Group

    • Recode (top-code) out-of-range values

    • Impute values for missing items

  • Code values that were implied by skip patterns—for example, reset “employed at interview” from missing to 0 (no) if the question was skipped because the respondent previously reported never working for pay

  • Create additional measures

  • Merge data with selected records from the Rent Reform Cross-Reference File.

    • De-identify data by replacing all personally identifiable information (PII) with the randomly generated MDRC SampleID Number and randomly generated MDRC HouseholdID Number.

  • Save a SAS system file


  3. Master File Creation Step

  • Depending on the analysis required, merge needed data from single-source analysis files by MDRC Household ID Number into two or more “Master Files.”10

  • Create additional outcome measures, as needed

  • Save a SAS system file


Data Management and Documentation


  1. File naming conventions


MDRC projects use standard file naming conventions to facilitate tracking and use of current and no-longer current source data files and MDRC-created files. A likely naming system for Rent Reform data files would include a sequence of codes for

  1. Project

  2. State or site

  3. Data type

  4. Data processing step

  5. File creation date (YYMMDD)


Under this naming system, a sequence of UI Wage files created on consecutive days for the Rent Reform Demonstration might be named:


Source files: RRDCUIRAW141201.txt and RRDCUIRAW150501.txt (if in ASCII format)

Raw Step files: RRDCUIRS141202.sas7bdat and RRDCUIRS150502.sas7bdat

Update Step file: RRDCUIUS150503.sas7bdat

Summary Step file: RRDCUISS150504.sas7bdat

Analysis File: RRXSUI1S150505.sas7bdat [XS=cross site, 1S=Analysis File]11

Administrative Records Master File: RRXSARMA150506.sas7bdat [AR=administrative records; MA=Master File]


  2. Program naming conventions


Standard SAS program naming protocols will also be followed. For example, the program RRDCUIRS141202.sas creates the file RRDCUIRS141202.sas7bdat. Other programs will have names that include letter codes denoting the site, data source, sample included, and the purpose of the program.


For example, RRXSARFSDS150601.sas would be a program that reads the cross-site administrative records Master File [XSAR] and runs descriptive statistics [DS] for the full sample [FS]. RRXSARFSIM150601.sas would read the same file and run regression procedures to estimate program effects or impacts [IM].


  3. Variable naming conventions


The Rent Reform data team will also follow standard MDRC practices for naming variables, which include consistent use of prefixes, stems, and suffixes to provide information about each measure. In the UI Wage example, the stems Ern and Emp denote the type of measure; the prefix [HH] denotes measures that summarize earnings or employment at the household level; and the suffixes [PQ4…Q4] denote the time period relative to each household’s date of random assignment.


  4. File logging and tracking


MDRC’s corporate Data Librarian receives and records all incoming data deliveries in a corporate Data File Log. The Librarian works with the project Data Manager to track the location of each file within project network directories. Per agreement with funders and data providers, at the end of the project (or earlier, as needed), the project Data Manager deletes most source data and interim data files and forwards a list of names of deleted files to the Data Librarian. The Data Librarian then updates the File Logging database with the status of deleted files.


  5. File organization


Following standard MDRC practice, MDRC’s Rent Reform Data Manager will work with MDRC’s network administrators to create a detailed network directory structure for storing Rent Reform data files, programs, and output. The structure will use the following organizing principles:


  1. Store data and output with PII in …\RRSecure… directories and data and output without PII in …\RRData directories.

    • Apply strict, “need-to-know” access controls on all …\RRSecure directories and slightly more inclusive access controls on all …\RRData directories

    • All source data and MDRC’s Cross-Reference file will be stored in …\RRSecure directories

  2. Within …\RRSecure… and …\RRData… folders, nest additional folders by

…\[Site]

…\[Site]\[Data source]

…\[Site]\[Data source]\[File type: SAS system file; program/output; tables; memos]


  6. Data and program documentation


In keeping with standard MDRC practice, for administrative data the MDRC Rent Reform data team will collect available technical documentation on incoming source data, including agency data entry guides, codebooks, record structures, and data dictionaries. We will strongly encourage agency data providers to accompany data deliveries with simple aggregate data reports—for example, counts of UI Wage records and total dollars per calendar quarter. For survey data, we will require the survey contractor to accompany data deliveries with a codebook, frequency distribution and means output, and a technical memo on data collection issues for the survey. MDRC’s Rent Reform data team will use the documentation to check our processing of the data through the sequence of file creation steps.


MDRC’s SAS programs include corporate SAS macros that store information about each SAS run in a cumulative project data documentation file.


After file processing, MDRC Rent Reform data team members will write technical memos that describe in detail

  1. The file processing steps involved in creating single source Analysis Files

  2. Data validation steps and data issues (see Quality Control)

  3. Technical decisions concerning measure creation

  4. Descriptions of key variables created for analysis


  7. Data validation


As noted in the QC overview, MDRC’s standard macros, which we embed within our file creation programs, include extensive checks on data quality and on the accuracy of file creation code.


Part IV: Reports


The MDRC team will produce one formal deliverable as part of Task Order 1. The report would describe the design of the alternative rent policy and its adaptations in the four sites, the experience of enrolling voucher holders in the demonstration, and early implementation experiences. It would present and explain tables on the characteristics of the sample based on the 50058 and BIF data. If HUD agrees, we would also include preliminary observations of the PHAs’ experiences in administering the new rent model and communicating it to tenants, as well as tenants’ early reactions to the rent reforms.


As specified in the RFP, our team would produce two drafts for HUD review and a final report incorporating HUD’s comments. This process is in keeping with MDRC’s standard publication review process. In addition to reviews by HUD, several internal reviews with the most senior research, operations, and publications staff at MDRC and its partner organizations (including an early storyline review, a first draft review, and a final report review) help ensure that the report is written clearly, addresses the perspectives of policymakers and practitioners who may make operational and other decisions based on the design recommendations, and is methodologically accurate. Reviews with external committees of academic and other policy researchers also help ensure that the recommendations in a given report reflect current research methods and results. The major steps include: (1) conducting an initial consultation with HUD and submitting an outline and table shells that lay out the structure and content of chapters and proposed measures; (2) creating a storyline (similar to an outline) that lays out emerging findings, which we have proposed presenting as a briefing under Task 4; (3) writing a draft report and executive summary; (4) revising the draft based on HUD feedback; and (5) producing an edited report in accordance with PD&R’s guidelines. Once final, the report will be available via MDRC’s website, and report findings will be presented at conferences as well as at meetings with key stakeholders.



Rent Reform Demonstration

Evaluation Topics and Data Sources by Task Order


Housing Agency’s perspective

  • Changes in types and levels of staff burden in calculating rents and administering the simplified utility policy
    • Task Order 1: Ongoing TA observations and monitoring; staff interviews
    • Future Task Order: Implementation research; time study; HA records

  • Changes in the number of and time required to process interim recertifications, lease changes, and household composition changes
    • Future Task Order: Implementation research; HA records; time study

  • Changes in the number of hardship cases and staff time and effort to administer the new hardship policy
    • Future Task Order: Implementation research; time study

  • Changes in error rates, disputes over rents, IG investigations, etc.
    • Future Task Order: Implementation research; HA and HUD records

  • PHA administrative costs/savings due to alternative policies
    • Future Task Order: Cost-analysis data

  • Changes in tenant lease-up rates and port-outs
    • Future Task Order: HA records and administrative data

  • Changes in tenant turnover rates and reasons for exiting the voucher system
    • Future Task Order: HUD 50058; HA administrative data; tenant survey

  • Changes in HAP expenditures
    • Future Task Order: HA administrative data

  • Staff efforts to explain and market the work incentive offered by the new policy
    • Task Order 1: Ongoing TA observations and monitoring; staff interviews
    • Future Task Order: Implementation research; HA records

  • Staff perspectives on the new policy and views of its pros and cons; perceived changes in relationships with tenants
    • Task Order 1: Ongoing TA observations and monitoring; staff interviews
    • Future Task Order: Implementation research


Households’ perspective

  • Baseline characteristics of participating households
    • Task Order 1: BIF

  • Understanding, knowledge, and awareness of rent reform; perceptions of and relationship with the PHA
    • Task Order 1: In-depth interviews / focus groups with tenants
    • Future Task Order: Ongoing in-depth interviews / focus groups with tenants; tenant survey

  • Changes in household composition and structure
    • Future Task Order: HUD 50058; tenant survey

  • Changes in employment and earnings
    • Future Task Order: UI data

  • Changes in job characteristics
    • Future Task Order: Tenant survey

  • Changes in household income and use of income supports
    • Future Task Order: Tenant survey; HUD 50058; TANF, SNAP, and Medicaid data

  • Changes in assets and financial behaviors
    • Future Task Order: Tenant survey; HUD 50058; credit scores

  • Changes in rent burden, rent arrears, evictions, and housing stability
    • Future Task Order: Tenant survey; HUD 50058

  • Changes in residential mobility patterns, neighborhood conditions and safety, and housing quality
    • Future Task Order: Tenant survey; HUD 50058; neighborhood data

  • Changes in health outcomes
    • Future Task Order: Tenant survey; Medicaid data

  • Changes in material hardship and homelessness
    • Future Task Order: Tenant survey

  • Changes in child outcomes
    • Future Task Order: Tenant survey; SABINS administrative data

  • Counterfactual service context
    • Task Order 1: Field research; site selection data
    • Future Task Order: Interviews with PHA staff; PHA data on self-sufficiency initiative participation (where appropriate); tenant survey


1 The Research Design summarizes the alternative rent model and overall evaluation plan. The final vision for the Rent Reform demonstration varies somewhat from the design described in MDRC’s proposal for this work. As described later in this document, the final model was developed in collaboration with HUD and the study partners. Where necessary, this document highlights some of the key differences from the proposal.

2 MDRC has recommended to HUD that it consider funding a separate evaluation of Santa Clara’s rent reform policy using a comparative interrupted time-series design, given the policy importance of that agency’s very different approach to rent reform and the uncertainty of the effects of that reform on tenants’ earnings and, consequently, on housing authorities’ HAP expenditures.

3 Combined, the study excludes elderly households and those households where the head will become elderly over the course of the study.

4 The study budget does not include the cost of gift cards, but MDRC has flagged the importance of offering a small token of appreciation to study participants for their time. This gift card will be provided to study participants at the time they complete the baseline information form. This additional cost will be reflected in MDRC’s modified budget.

5 MDRC’s proposal included on-site evaluation support, especially to help housing agency staff during the enrollment phase. Now that the demonstration has mandatory enrollment, all eligible voucher-holders will be identified and randomly assigned prior to their recertification. This change reduces the need for MDRC on-site presence during study enrollment.

6 Looking beyond the first task order, more systematic implementation research on this topic would be a valuable component of the longer-term evaluation. Tenants’ understanding of the new model and its implicit incentives will inform how they make labor market and housing choices. Using qualitative research methods, we would explore whether tenants understand the new rules, and the “frames” they use in interpreting them, such as whether they believe that “extra work is penalized.”


7 MDRC has raised with HUD the possibility of using alternative data (HUD 50058) to construct baseline measures. While this issue is under consideration, this section describes how BIF data collection will be implemented.

8 The description includes data or data-related activities that will be initiated under Task Order 1. Those that are fully assumed under a later Task Order are not discussed here – for example, the use of aggregate neighborhood outcomes data to describe the neighborhood conditions voucher holders experience.

9 We can also use these programs to summarize other types of data—for example, count number of days of attendance per month in education or training programs.

10 For instance, the administrative records master file will include data from the RA-BIF (selected measures for regression covariates and subgroup analyses), UI Wage data, and PIC data. The survey master file will include BIF and administrative records data.

11 1S is an MDRC convention for naming Analysis Files. With some types of data (e.g., follow-up surveys), most variable creation occurs on files in Analysis File format with one record per participant or household. These later files would be called 2S, 3S… With other types of data, additional processing occurs during creation of the Master File.


