
Data Collection of the Disaster Housing Assistance Program Incremental Rent Transition Study

OMB: 2528-0256










Cambridge, MA

Bethesda, MD

Durham, NC

Atlanta, GA



Disaster Housing Assistance Program (DHAP) Incremental Rent Transition (IRT) Study





Contract #

C-CHI-01032, CHI-T0001



OMB Package



Part B: Statistical Methods



October 4, 2011





Prepared for

Marina Myhre

U.S. Department of Housing and Urban Development

Office of Policy Development & Research

451 Seventh Street, Room 8120

Washington, DC 20410


Prepared by

Abt Associates Inc.

55 Wheeler Street

Cambridge, MA 02138-1168


Contents


Part B: Statistical Methods
B1 Potential Respondent Universe
B2 Statistical Methods
B2.1 Sampling Plan
B2.2 Analysis Plan
B2.3 Justification of Level of Accuracy
B3 Maximizing Response Rates
B4 Tests of Procedures or Methods
B5 Statistical Consultation and Information Collection Agents





Part B: Statistical Methods

The U.S. Department of Housing and Urban Development (HUD) is conducting an outcome evaluation of the Disaster Housing Assistance Program Incremental Rent Transition (DHAP IRT). The follow-up survey covered by this reinstatement request will collect information on the housing and self-sufficiency outcomes of DHAP recipients and their perceptions of the case management services received as part of the program. The sample for this survey consists of respondents to the baseline survey data collection (conducted under OMB control number 2528-0256), which was completed in October 2009.


B1 Potential Respondent Universe

The potential respondent universe for the follow-up survey is the set of DHAP recipients who responded to the baseline survey. It is therefore defined by the respondent universe, sampling frame, and actual response patterns of the earlier baseline survey. DHAP assistance was administered through public housing agencies (PHAs), and participation information was tracked in the Disaster Information System (DIS). The baseline survey data collection relied on the enrollment of study participants and the administration of a survey by participating DHAP PHAs.


The December 1, 2008 update of the FY 2008 Disaster Information System (DIS) data included 275 DHAP PHAs that reported at least one DHAP client in FY 2008. These 275 agencies served 31,870 clients in 2008. To limit the burden of the study on small PHAs, recruitment and enrollment for the DHAP IRT study were limited to PHAs that served at least 600 clients annually. While this cutoff was determined subjectively, it was set to ensure that the sampled agencies would have a sufficient volume of clients to enroll in the study without placing an overwhelming burden on the resources of smaller PHAs. Of the 275 agencies, only 11 DHAP PHAs served 600 or more DHAP clients through December 1, 2008. These 11 agencies account for 23,986 DHAP clients, or 75 percent of all DHAP clients as of December 1, 2008. The number of DHAP clients at these 11 agencies ranges from 615 to 6,661, with an average of approximately 2,181 DHAP clients per PHA.


The clients served by these 11 PHAs were divided into two groups: Phase I clients and Phase II/III clients. In DHAP, Phase I clients were charged a stepped-up rent starting at $50 and increasing by $50 each month unless a hardship exemption was approved, whereas Phase II/III clients were charged no rent ($0) until the program ended. Because the participants in these two groups were subject to different rent structures, and comparing the outcomes of the two groups is an important part of the analysis, we decided to split the sample evenly between Phase I and Phase II/III clients. A sample of 3,000 clients—1,500 from Phase I of the program and 1,500 from Phase II/III—was drawn from the set of clients served by these 11 PHAs. Within each of the two Phase groups, we selected a simple random sample of participants. When weighted to account for the differential probability of selection across the two Phase groups, this sample is representative of all DHAP participants in December 2008 at the 11 largest administering PHAs.
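To illustrate this design (but not to reproduce the actual selection program used for the baseline sample), the sketch below draws a simple random sample within each Phase stratum and attaches a base weight equal to the inverse of that stratum's selection probability. The frame structure, field names, and function are hypothetical.

```python
import random

def select_phase_stratified_sample(frame, sample_size_per_phase=1500, seed=2008):
    """Draw a simple random sample within each Phase stratum and attach
    base weights equal to the inverse of each stratum's selection probability.

    `frame` is a list of dicts with (hypothetical) keys 'client_id' and 'phase',
    where 'phase' is either 'I' or 'II/III'.
    """
    rng = random.Random(seed)
    sample = []
    for phase in ('I', 'II/III'):
        stratum = [c for c in frame if c['phase'] == phase]
        selected = rng.sample(stratum, sample_size_per_phase)
        base_weight = len(stratum) / sample_size_per_phase  # inverse selection probability
        for client in selected:
            sample.append({**client, 'base_weight': base_weight})
    return sample
```

Weighting by the inverse of the selection probability is what allows the Phase I and Phase II/III subsamples, which were drawn at different rates, to be combined into estimates for the full December 2008 caseload at these PHAs.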


Beginning in January 2009, participating PHA staff and case managers distributed the consent form and baseline questionnaire to sampled clients. Beginning March 1, 2009, clients were transitioned from DHAP to the Disaster Housing Assistance Transitional Closeout Program (DHAP TCP). Under DHAP TCP, case management was no longer required, and households' rent contributions increased by $100 each month. A consent form and a baseline questionnaire were mailed to the remaining sampled clients who had not completed a questionnaire during January or February 2009. HUD took over administration of the telephone follow-up to the mail survey after the transition to DHAP TCP.


The potential respondent universe for the follow-up survey is the set of clients who responded to the baseline survey. Exhibit B-1 shows the final response rates by PHA and Phase. Of the 3,000 DHAP participants sampled in December 2008, 1,438 completed the baseline survey, a 48 percent response rate.1


Exhibit B-1. Final Response Rates for the DHAP IRT Baseline Survey, by PHA and Phase

PHA    Agency                                      Phase I Sample               Phase II/III Sample          Total (All Phases)
                                                   Size    Received  Rate       Size    Received  Rate       Size    Received  Rate
LA001  HANO                                          527     256     48.6%        307     161     52.4%        834     417     50.0%
LA003  HA of East Baton Rouge                         55      35     63.6%         78      54     69.2%        133      89     66.9%
LA013  HA of Jefferson Parish                        172      78     45.3%         87      37     42.5%        259     115     44.4%
LA187  St. Bernard Parish Govt.                       14       7     50.0%          0       0      0.0%         14       7     50.0%
LA889  Pilgrim Rest CDA                               60      26     43.3%        355     168     47.3%        415     194     46.7%
LA996  New Orleans (Phase II/III only)                 0       0      0.0%        350     184     52.6%        350     184     52.6%
LA997  Slidell (Phase II/III only)                     0       0      0.0%         86      41     47.7%         86      41     47.7%
LA999  Jefferson Parish (Phase II/III only)            0       0      0.0%        152      75     49.3%        152      75     49.3%
TX005  Houston Housing Authority                     132      55     41.7%         64      23     35.9%        196      78     39.8%
TX009  Housing Authority of the City of Dallas        50      22     44.0%         17       9     52.9%         67      31     46.3%
TX441  Harris County Housing Authority               490     205     41.8%          4       2     50.0%        494     207     41.9%
TOTAL                                               1,500     684     45.6%      1,500     754     50.3%      3,000   1,438     47.9%

Note: Size = total sample size; Received = total forms received; Rate = percent of sample responding.


B2 Statistical Methods

B2.1 Sampling Plan

The sample for the follow-up survey is the 1,438 respondents to the baseline survey.

As described in Section B1, the baseline survey sample was randomly selected from households that were participating in DHAP on December 1, 2008 and receiving services from one of the 11 agencies that served at least 600 DHAP clients in FY 2008. However, because the response rate to the baseline survey was 48 percent, we conducted a non-response analysis to determine whether there are observable differences between respondents and non-respondents.


In our non-response analysis, we used administrative data—program and case management data from DIS and the Tracking-at-a-Glance (TAAG) system—to compare respondents and non-respondents based on demographic characteristics, sources of income, DHAP unit characteristics and program-use patterns.


The analysis indicated that there are several statistically significant differences between respondents and non-respondents to the DHAP-IRT Baseline Survey (See Exhibit B-2). The overall finding is that respondents are older and more disadvantaged than non-respondents. The key findings are:


  • 33.4 percent of respondents are in the highest need tier, compared to 29.9 percent of non-respondents;

  • 88.3 percent of respondents continued to receive assistance in March 2009 (i.e., were in the Transitional Closeout Program) compared to 81.3 percent of non-respondents;

  • 9.8 percent of respondents are elderly compared to 7.1 percent of non-respondents;

  • respondents are more likely to be female, to be disabled, to receive SSI, and to receive Food Stamps; and

  • respondents are less likely than non-respondents to be from the administering PHAs in Texas (Houston and Harris County).

There are no statistically significant differences in race, employment, household size, educational attainment, or unit characteristics (i.e., number of bedrooms in the unit, rent to owner, or self-reported high crime in the neighborhood).


Exhibit B-2: Comparison of Respondents and Non-Respondents to the DHAP-IRT Baseline Survey

Characteristic                                        Respondents   Non-Respondents   Difference (Respondents
                                                      (n=1,438)     (n=1,562)         minus Non-Respondents)

Household Head Characteristics
  Average Age                                         44.3          41.4              +2.9 years*
  Age 62 or Older                                     9.8%          7.1%              +2.7 percentage points (pp)*
  Female                                              67.4%         62.7%             +4.7 pp*
  Race: Black or African-American                     88.7%         86.7%             +2.0 pp
  Disabled                                            6.0%          4.4%              +1.6 pp*
  High School Degree or GED                           74.1%         75.8%             -1.7 pp
  College Degree                                      10.9%         13.1%             -2.2 pp

Sources of Income (Household Head)
  Employment                                          56.2%         59.9%             -3.7 pp
  Food Stamps                                         20.2%         16.7%             +3.5 pp*
  Supplemental Security Income (SSI)                  11.2%         6.7%              +4.5 pp*

Program Characteristics and Use
  Administering PHA                                   See response rates by PHA in Exhibit B-1        *
  Months on DHAP/TCP                                  14.3          14.1              +0.2 months
  Receives DHAP-TCP Assistance
    (i.e., on program in March 2009 or later)         88.3%         81.3%             +7.0 pp*

Household Characteristics
  Household Size                                      2.39          2.35              +0.04 members
  Highest Need Tier (Tier 4)                          33.4%         29.9%             +3.5 pp*
  Ever Owned a Home                                   22.8%         23.3%             -0.4 pp

Unit Characteristics
  Number of Bedrooms in Unit                          2.03          2.02              +0.01 bedrooms
  Self-Reported High-Crime Neighborhood
    at Initial Assessment                             21.0%         22.0%             -1.0 pp
  Rent to Owner in DHAP                               $893          $908              -$15

Notes: * indicates a statistically significant difference at the 5 percent level. A t-test was used to test differences in continuous variables; a chi-square test was used for proportions and other categorical variables. Characteristics with a missing rate above 10 percent in the administrative data are: need tier (11.7 percent), ever owned a home (11.7 percent), number of household members (12 percent), high school degree (12.2 percent), college degree (13.3 percent), and perception of crime in neighborhood (13.5 percent).
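The significance tests described in these notes could be carried out along the following lines. This is a minimal sketch with made-up values and hypothetical variable names, not the study's actual analysis code.

```python
import numpy as np
from scipy import stats

def compare_groups(respondent_values, nonrespondent_values, categorical=False):
    """Test respondent vs. non-respondent differences on one characteristic.

    Continuous characteristics (e.g., age) use a two-sample t-test; categorical
    characteristics (e.g., female yes/no) use a chi-square test on the 2xK
    contingency table of group by category.
    """
    if categorical:
        categories = sorted(set(respondent_values) | set(nonrespondent_values))
        table = np.array([
            [list(respondent_values).count(c) for c in categories],
            [list(nonrespondent_values).count(c) for c in categories],
        ])
        stat, p_value, _, _ = stats.chi2_contingency(table)
    else:
        stat, p_value = stats.ttest_ind(respondent_values, nonrespondent_values)
    return stat, p_value

# Example with made-up ages: flag a difference significant at the 5 percent level.
resp_age = [44, 52, 39, 61, 45]
nonresp_age = [40, 38, 45, 36, 42]
t_stat, p = compare_groups(resp_age, nonresp_age)
print(f"age difference significant at 5%: {p < 0.05}")
```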


We also compared respondent and non-respondent characteristics by Phase. These comparisons show fewer significant differences between respondents and non-respondents, but they also show that some of the overall differences are driven by one Phase or the other. Exhibit B-3 shows, by Phase, the variables with statistically significant differences in the entire sample or within either Phase.



Exhibit B-3: Statistically Significant Differences between Respondents and Non-Respondents, by Phase

                                            Difference Between Respondents and Non-Respondents
Variable                                    Entire Sample                  Phase I Sample        Phase II/III Sample
                                            (n=3,000)                      (n=1,500)             (n=1,500)
Average Age (of HH head)                    +2.9 years*                    +3.2 years*           +2.1 years*
Elderly (%)                                 +2.7 percentage points (pp)*   --                    --
Female (%)                                  +4.7 pp*                       --                    +7.5 pp*
Disabled (%)                                +1.6 pp*                       --                    --
Food Stamp Receipt (%)                      +3.5 pp*                       --                    +4.7 pp*
SSI Receipt (%)                             +4.5 pp*                       +5.7 pp*              +3.7 pp*
Administering PHA (%)                       *A                             --                    *A
Time on DHAP/TCP (months)                   --                             +0.6 months*          --
In TCP (%)                                  +7.0 pp*                       +11.2 pp*             --
Highest Need Tier (%)                       +3.5 pp*                       +5.0 pp*              --
Ever Owned a Home (%)                       --                             --                    +5.2 pp
Number of Bedrooms in Unit                  --                             +0.09 bedrooms*       --
Monthly Rent to Owner in DHAP ($)           --                             --                    -$57*

Notes: The entire sample includes 1,438 respondents and 1,562 non-respondents; the Phase I sample has 684 respondents and 816 non-respondents; the Phase II/III sample has 754 respondents and 746 non-respondents. A cell marked -- indicates no statistically significant difference at the 5 percent level.

* indicates a statistically significant difference at the 5 percent level.

A See Exhibit B-1 for response rates by administering PHA.


Our overall assessment is that the observable differences between respondents and non-respondents are significant but can be mitigated by developing non-response weights. The objective of the non-response weighting is to make the weighted respondents look more like the overall population the survey is intended to represent. After completion of the follow-up survey, we will conduct a non-response analysis similar to the one described here. Assuming the non-response patterns are similar, our plan is to use a cell-based non-response adjustment. The cells will be based on age categories and need tiers, and the adjustment will be made separately for Phase I and Phase II/III. These adjustments are expected to make the respondents look more like the entire targeted sample not only on these characteristics but also on characteristics associated with age and need, such as disability, receipt of means-tested assistance, and participation in the DHAP-TCP program. After the non-response adjustment, we will compare the weighted respondent sample to the initial sample to determine whether further adjustments are needed, checking both personal and household characteristics and the distribution of administering PHAs. It is important that the distribution of administering PHAs in the weighted sample be similar to the distribution in the full sample, because the administering PHA could affect outcomes through how or what services the PHA provided, differences in the local housing and employment markets, and distance from the pre-storm home and community networks. If the cell-based adjustment does not produce a weighted sample that resembles the initial sample, we will try a more complex weighting procedure such as raking ratio estimation, an iterative technique that matches the weighted respondents to the initial sample on multiple variables at once.2
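To make the planned cell-based adjustment concrete, the sketch below inflates each respondent's base weight by the ratio of the cell's total base-weighted sample to its base-weighted respondents, with cells defined by Phase, age category, and need tier. It is an illustrative outline under assumed field names, not the production weighting code.

```python
from collections import defaultdict

def cell_nonresponse_adjustment(sample):
    """Adjust base weights within non-response cells.

    `sample` is a list of dicts with (assumed) keys 'base_weight', 'responded'
    (True/False), 'phase', 'age_category', and 'need_tier'. Within each cell,
    respondents' weights are multiplied by the ratio of the cell's total
    base-weighted count to its respondent base-weighted count, so that the
    respondents represent the full targeted sample in that cell.
    """
    cell_total = defaultdict(float)
    cell_resp = defaultdict(float)
    for person in sample:
        cell = (person['phase'], person['age_category'], person['need_tier'])
        cell_total[cell] += person['base_weight']
        if person['responded']:
            cell_resp[cell] += person['base_weight']

    adjusted = []
    for person in sample:
        if not person['responded']:
            continue
        cell = (person['phase'], person['age_category'], person['need_tier'])
        factor = cell_total[cell] / cell_resp[cell]  # non-response adjustment factor
        adjusted.append({**person, 'final_weight': person['base_weight'] * factor})
    return adjusted
```

If the cells prove too sparse, or the weighted respondents still diverge from the full sample on other margins, raking would replace this single-pass adjustment with an iterative proportional fit to several marginal totals at once.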



B2.2 Analysis Plan

The survey data will be used to document DHAP participant outcomes, analyze the factors that contribute to those outcomes, and assess how outcomes changed after DHAP housing assistance ended. The survey data will be merged with administrative data from DIS, TAAG, and HUD's PIC system, which records housing assistance received through other subsidy programs such as the voucher program and public housing. Here we briefly summarize the analysis approach.
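A merge of the survey file with the administrative extracts might look like the following sketch. The linkage key ('client_id'), the column names, and the tiny stand-in data are hypothetical; the actual identifiers and file layouts come from the study's data systems.

```python
import pandas as pd

# Hypothetical stand-in extracts keyed on a placeholder identifier.
survey = pd.DataFrame({"client_id": [101, 102, 103], "difficulty_paying_rent": ["yes", "no", "no"]})
dis = pd.DataFrame({"client_id": [101, 102, 103], "months_on_dhap": [14, 12, 16]})
taag = pd.DataFrame({"client_id": [101, 102, 103], "case_mgmt_contacts": [5, 2, 7]})
pic = pd.DataFrame({"client_id": [101, 103], "voucher_after_dhap": [True, False]})

# Left joins keep every survey respondent even when an administrative record is missing.
analysis_file = (
    survey
    .merge(dis, on="client_id", how="left", validate="one_to_one")
    .merge(taag, on="client_id", how="left", validate="one_to_one")
    .merge(pic, on="client_id", how="left", validate="one_to_one")
)
print(analysis_file)
```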


Updating the Non-response Analysis


As described earlier, we conducted a non-response analysis of the baseline (interim) survey conducted under a previous phase of the DHAP IRT Study. After completing the follow-up survey, we will update the non-response analysis and make adjustments to ensure the respondents look more like the entire targeted sample. Once the non-response analysis has been updated, we will begin our assessment of outcomes.


Outcomes Analysis


Analysis of the follow-up survey data will consist of tabulations of information on clients, services, and outcomes. The goal of this analysis will be to shed light on the circumstances in which incremental rent transitions and case management services are more likely to result in favorable participant outcomes. Our analysis approaches for the key outcome domains are described briefly below:


Housing situation: In the follow-up survey in this phase of the study, we will collect additional information on the number of places survey respondents have lived since the storms. We will tabulate survey responses about the affordability and quality of current housing and whether respondents have had difficulty paying their rent or mortgage. We will also report whether surveyed households have experienced homelessness recently (in the previous 12 months). This will help us assess the extent to which households were successful in transitioning to stable housing they could afford.


In addition to the rental assistance participants received, DHAP case managers may also have played a role in helping participants transition to stable and affordable post-DHAP housing. Case managers may have helped participants access benefits to repair pre-storm dwellings, apply for subsidized housing through a PHA or other housing provider, or secure employment to help the household afford housing after DHAP ended. We will report the extent to which respondents received assistance or referrals for these services, whether they accessed the services, and whether they found the services helpful.
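As an illustration of how the housing-situation tabulations described above could be produced once non-response-adjusted weights are attached, the sketch below computes a weighted percentage distribution for a single item. The column names and demonstration data are hypothetical.

```python
import pandas as pd

def weighted_percent(df, item, weight_col="final_weight"):
    """Weighted percentage distribution of one survey item."""
    totals = df.groupby(item)[weight_col].sum()
    return (100 * totals / totals.sum()).round(1)

# Tiny illustrative file; in practice this would be the merged analysis file
# with one row per follow-up respondent and the (hypothetical) columns below.
demo = pd.DataFrame({
    "difficulty_paying_rent": ["yes", "no", "no", "yes", "no"],
    "final_weight": [2.1, 1.8, 2.4, 1.9, 2.2],
})
print(weighted_percent(demo, "difficulty_paying_rent"))
```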


Housing location and neighborhood characteristics: We will assess how long survey respondents have been living in their current neighborhoods and whether they have moved to a different neighborhood from the one where their DHAP unit was located. We will also tabulate responses on respondents’ perceptions of the safety of their current neighborhoods. If feasible, we may link geocoded respondent addresses to Census data to assess the demographic characteristics of respondents’ post-DHAP neighborhoods, such as racial composition, income levels, and poverty rates.

Time and type of assistance received: We will use administrative data to tabulate the number of months respondents received DHAP and DHAP TCP assistance, and the amount of the DHAP and DHAP TCP assistance payment. To supplement the administrative data on amounts and duration of assistance, we will tabulate survey responses on how helpful respondents found the DHAP rental assistance and case management services to be in helping them get back on their feet after the hurricanes.

Self-sufficiency and financial situation: We will tabulate survey respondents’ reported income levels and employment status as well as their perceptions of the stability of their financial situations (whether they are able to pay for essential expenses and how their current financial situation compares to their situation when they started receiving DHAP assistance).

Factors that contribute to outcomes: The study will provide an overall description of post-DHAP outcomes for DHAP participants and compare their current situation to their situation at the time of the interim survey, when their DHAP participation was coming to an end. The study will also compare the outcomes of Phase I and Phase II/III participants to understand whether the different rent structures affected the length of participation in the program and subsequent outcomes. However, a simple comparison of means or frequencies would mask the effects of the differing observable characteristics of Phase I and Phase II/III participants. We will therefore also conduct multivariate analysis to determine the role of demographic characteristics and case management services in explaining outcomes, both to explore the role of each factor by itself and to control for these factors as we assess the effect of rent structure on outcomes.
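A minimal version of the multivariate analysis might look like the sketch below, which regresses a continuous outcome on a Phase I indicator while controlling for several observable characteristics and applying the adjusted survey weights. The outcome, covariates, and synthetic data are illustrative assumptions, not the study's final model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the merged analysis file; the real file would have one
# row per follow-up respondent with these (hypothetical) columns.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "monthly_income": rng.normal(1500, 400, n),
    "phase_I": rng.integers(0, 2, n),
    "age": rng.integers(20, 80, n),
    "female": rng.integers(0, 2, n),
    "disabled": rng.integers(0, 2, n),
    "need_tier": rng.integers(1, 5, n),
    "final_weight": rng.uniform(1.5, 3.0, n),
})

# Weighted least squares: the phase_I coefficient is the adjusted difference in
# the outcome between Phase I and Phase II/III participants, holding the other
# observable characteristics constant.
model = smf.wls(
    "monthly_income ~ phase_I + age + female + disabled + C(need_tier)",
    data=df,
    weights=df["final_weight"],
)
result = model.fit()
print(result.params["phase_I"])
```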


Understanding the differences in program use and outcomes under different rent structures and different uses of case management services is important to the Department for responding to future disasters. However, any findings of differences will need to be interpreted with caveats: because participants were not randomly assigned to rent structures or case management services, there could be unobservable differences between the groups that affect their program use and outcomes.





B2.3 Justification of Level of Accuracy

Exhibit B-4 shows the 95 percent confidence interval for a population proportion when the estimated proportion is 0.5, or 50 percent. In other words, if the estimated value of the proportion from a simple random sample of the size indicated is 50 percent, we are 95 percent confident that the true population proportion is contained in the interval of 50 percent plus or minus the half-width shown. For example, if 410 Phase I respondents complete the follow-up survey, the 95 percent confidence interval will be from 45.2 percent to 54.8 percent. Because we will need to adjust the sampling weights for non-response, the actual confidence intervals may be somewhat larger. A brief computational check of these half-widths follows Exhibit B-4.


Exhibit B-4. Expected Sample Sizes and 95 Percent Confidence Intervals for the Follow-Up Survey

Group           Baseline       Respondents to Baseline Survey     Expected Respondents     Expected Confidence
                Sample Size    (Sample for Follow-Up Survey)      to Follow-Up Survey      Interval
Phase I         1,500          684                                410                      ±4.8 percentage points
Phase II/III    1,500          754                                452                      ±4.6 percentage points

Notes: The expected number of respondents to the follow-up survey is based on a 60 percent response rate among the sample for the follow-up survey. The 95 percent confidence interval shown is for a prevalence estimate of 50 percent and assumes that respondents in each Phase have the same weight.
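The half-widths in Exhibit B-4 follow from the standard large-sample formula for a proportion, z × sqrt(p(1 − p)/n), with p = 0.5 and z = 1.96. The short sketch below simply checks that arithmetic; it is not part of the study's estimation code.

```python
import math

def ci_half_width(n, p=0.5, z=1.96):
    """95 percent confidence-interval half-width for a proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

for group, n in [("Phase I", 410), ("Phase II/III", 452)]:
    print(f"{group}: +/- {100 * ci_half_width(n):.1f} percentage points")
# Prints roughly 4.8 for Phase I and 4.6 for Phase II/III, matching Exhibit B-4.
```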


B3 Maximizing Response Rates

The follow-up survey will be administered by telephone and is expected to begin by November 1, 2011. It is important to achieve a high response rate to the follow-up survey because the sample for this survey (respondents to the baseline survey) is a subset of the original baseline sample. The follow-up survey has a target response rate of 60 percent. Even if we achieve our target, the cumulative response rate from the original sample of 3,000 households will be 28.7 percent (0.6 × 0.479 = 0.287). Because of this, we plan to follow the non-response analysis and non-response weighting adjustment plan described in Section B2.1 even if the follow-up survey response rate is above 60 percent.


We believe the 60 percent response rate is a realistic target even though the baseline survey obtained a 48 percent response rate. First, HUD has hired a contractor with extensive telephone survey data collection and tracking expertise to conduct the survey. The contractor has the facilities and technology to make multiple telephone calls to respondents, automatically varying the time and day of the calls. The PHAs that initially conducted the baseline survey did not have such capabilities, as survey efforts were not part of their regular operations. In fact, the baseline survey was designed as a self-administered mail survey; telephone data collection was added only at the end of the field period as a means to increase the response rate, with no CATI system and no resources for weekend calling. Second, this survey targets only respondents to the baseline survey. Their previous participation indicates that they are not inherently opposed to participating in surveys, that they understand the survey is legitimate, and that the promised incentive payment will be provided. Moreover, after respondents complete the follow-up survey, they will receive a $20 incentive payment in appreciation for their time.3 Participation in the follow-up survey is voluntary.


It is important to note, however, that the population for this study is likely to be extremely mobile, which may make sample members more difficult to locate. The tracking efforts for this study have been designed to ensure good contact information at the start of the survey period and to obtain updated contact information for people we do not find with the initial information. A brief contact data collection effort, conducted by our contractor in early 2010 (OMB control number 2528-0256), reached over half of the respondents (54 percent) to verify or obtain updated contact information. The field period for this effort was short—only five weeks—so the contractor was not able to pursue all of the leads for contacting sample members that it could have pursued in a longer field period.4 The field period for the follow-up survey will be more than twice as long (12 weeks), allowing the survey team time to follow up on all leads. Furthermore, the follow-up survey will have a much more extensive tracking effort than the brief contact data collection effort.


After OMB clearance but before the start of the field period, the survey team will conduct passive tracking activities (such as change-of-address and credit card database searches). These activities will include locating respondents using Accurint and the NCOA (National Change of Address) database to ensure that advance letters are sent to the most up-to-date addresses. The advance letters will provide immediate feedback on which contact information may no longer be accurate. The longer field period will also allow time for emailing non-respondents with working email addresses and following up with secondary contacts from the baseline survey. If a respondent still cannot be located, advanced locating efforts involving a more comprehensive search will be conducted to obtain more detailed information, which may include email addresses, additional secondary contacts, likely neighbors, and cell phone numbers. Finally, in-person tracking will be used for people we are unable to contact based on information from other sources; interviewers will be trained to conduct in-person locating and interviewing efforts at each of the three sites in Texas, Louisiana, and Mississippi. In sum, tracking efforts will be thorough and will include passive tracking as well as more advanced tracking efforts.


The success in reaching over half of the sample for the follow-up survey in early 2010 with a short field period and limited tracking activities is the primary reason we believe a 60 percent response rate is realistic for the follow-up survey with a longer field period and more extensive tracking efforts, which will include passive tracking, dialing secondary contacts, advanced locating efforts, and in-person tracking.


B4 Tests of Procedures or Methods

The follow-up survey developed for this data collection relies on questions from the baseline survey as well as questions from other HUD-sponsored studies of the Alternative Housing Pilot Program and the Welfare to Work programs. We are confident in the survey questions because very few respondents answered “don’t know” or refused to answer the questions on the baseline survey, indicating that respondents did not have difficulty answering them. Similarly, our experience with responses to similar questions posed as part of the other HUD-sponsored studies of the Alternative Housing Pilot Program and the Welfare to Work programs indicates that respondents did not have difficulty answering those questions.


The draft of the follow-up survey was developed by our contractor, Abt Associates, and reviewed by methodological experts in their survey research group. HUD staff involved in administering and studying DHAP have also reviewed the instrument to ensure that it is clear, flows well, and is as concise as possible.

B5 Statistical Consultation and Information Collection Agents

Marina L. Myhre, Ph.D., a Social Science Analyst in HUD’s Office of Policy Development and Research, Program Evaluation Division, led the initial part of the study and continues to provide guidance as the Government Technical Representative (GTR). Her supervisor is Ms. Carol S. Star. Dr. Myhre and Ms. Star are jointly responsible for the statistical aspects of the initial survey design and can be contacted at (202) 402-5705 and (202) 402-6139, respectively. Staff at HUD’s contractor, Abt Associates Inc., conducted the non-response analysis and led the survey design for the follow-up survey. HUD’s contractor will also lead the data collection effort and the analysis of the survey data.

1 Although the baseline survey field operations ended in October 2009, a small number of completed surveys continued to trickle in after that date by mail. All completed surveys received by December 15, 2009 were included in the baseline completes and thus the sample for the follow-up survey.

2 For more on raking ratio estimation for developing weights, see Battaglia et al. (2009). “Practical Considerations in Raking Survey Data.” Survey Practice.

3 Those who complete the interview with a cell phone will receive an additional $10 to help defray the cost of additional minutes used.

4 Data collection was suspended at the end of February because of the decennial Census “blackout” period, which began March 1.

