
Homelessness Prevention Study



Task Order Number C-CHI-01086/CHI-T0001

GSA Contract Number GS-23F-8198H



OMB Paperwork Reduction Act Submission for Web Survey of Homelessness Prevention and Rapid Re-housing Program Grantees and Subgrantees



Part B: Statistical Methods



February 16, 2011


Prepared for

Anne Fletcher, HUD/GTR

Office of Policy Development and Research, Program Evaluation Division

U.S. Department of Housing and Urban Development

451 7th Street, SW – Room 8120

Washington, DC 20410


Prepared by

Mary Cunningham

Martha Burt

Molly Scott

Kassie Dumlao

Urban Institute


Larry Buron

Gretchen Locke

Abt Associates





PART B. STATISTICAL METHODS

The U.S. Department of Housing and Urban Development has contracted with the Urban Institute and its subcontractors—Abt Associates Inc., Cloudburst Group, Inc., and Vanderbilt University—to conduct an online survey of a representative sample of Homelessness Prevention and Rapid Re-Housing Program (HPRP) grantees and subgrantees. The goal of the survey is to learn how communities are using their HPRP funding to design and implement programs to prevent homelessness. To address these questions, we plan to survey a nationally representative sample of 100 HPRP grantees and 400 subgrantees. Our methodology for selecting the sample is described below.

B1. Respondent Universe, Sample Selection and Expected Response Rates

B1.1. Respondent Universe

The respondent universe consists of all grantees associated with HPRP-funded prevention projects that were funded in 2009 under the American Recovery and Reinvestment Act and all subgrantees of those HPRP grantees that are sampled to participate in the web survey.

B1.2. Sample Selection

The sample selection for the web survey will take place in two distinct stages. In the first stage, we are selecting a sample of 100 grantees from the universe of 540 grantees. In the second stage, after obtaining an updated list of subgrantees from each of the grantee respondents, we are sampling 400 subgrantees from these grantees. The following sub-sections lay out the specific procedures that will be used in both stages of the sample selection process.

B1.2.a. Selection of Grantees

Our first step for selecting a sample of 100 grantees is to divide the population of grantees into 12 strata based on the level of government entity (three types) and Census region (four regions). Dividing the population into strata is a standard sampling technique for obtaining more precise national estimates when there are likely to be large differences in the variables of interest across the grantees in different strata. It also guards against the “outlier samples” that are possible with simple random sampling, such as a sample in which all 100 grantees are small cities in the West region.

The strata have been created by the cross-classification of three levels of government receiving the grant (state, county, and city) and four Census regions (East, Midwest, South, and West). The first stratification variable is pivotal because of differences across government entities in the amount of the grants; the size, diversity, and population density of their jurisdictions; and the likely differences in direct provision of homeless services in their jurisdictions. The regional stratification is necessary to reflect regional differences in job opportunities, the housing market, and possible differences in approaches to addressing homelessness.

Some grantees have received very large HPRP grants relative to other government entities of the same type. Therefore, these grantees are selected with certainty to ensure that a large portion of the HPRP grant funding is represented by the selected grantees in the sample. The certainty selections include the four states with grants over $25 million and the seven counties and cities with grants over $10 million. The twelfth certainty grantee is the Commonwealth of Puerto Rico ($20.8 million grant), selected because of its uniqueness, because it would not be well represented by the states if it were not selected, and because its grant is too large to exclude from the study. However, we have excluded the territories of American Samoa, Guam, the Northern Marianas, and the Virgin Islands because they would also not be well represented by states and would need to be selected with certainty to be represented in the sample. Given their small grant amounts relative to states, we have determined that it is better to exclude them from the study and use the sample slots for additional non-certainty grantees. This reduces the total number of grantees covered by this study from the 540 that were awarded HPRP grants in 2009 to 536.

A grantee selected with certainty will only represent itself in national estimates. Since we selected 12 grantees with certainty, that leaves 88 sampled grantees to represent the remaining group of 524 (536 grantees in the universe minus 12 certainty grantees equals 524). The certainty grantees within each stratum and the selection of non-certainty grantees within each stratum are discussed below.
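To make the stratification and certainty rules above concrete, the following minimal Python sketch assigns each grantee to a stratum and splits the universe into certainty and non-certainty selections. The field names (entity_type, region, grant_amount, name) are illustrative assumptions, not the actual HUD data layout.

CERTAINTY_THRESHOLDS = {"state": 25_000_000, "county": 10_000_000, "city": 10_000_000}

def assign_stratum(grantee):
    # Stratum = level of government crossed with Census region (12 strata in all).
    return (grantee["entity_type"], grantee["region"])

def is_certainty(grantee):
    # Puerto Rico's commonwealth grant is a certainty selection, per the text above.
    if grantee["name"] == "Commonwealth of Puerto Rico":
        return True
    threshold = CERTAINTY_THRESHOLDS.get(grantee["entity_type"], float("inf"))
    return grantee["grant_amount"] > threshold

def split_certainty(grantees):
    # Partition the universe into certainty selections and the non-certainty pool.
    certainty = [g for g in grantees if is_certainty(g)]
    non_certainty = [g for g in grantees if not is_certainty(g)]
    return certainty, non_certainty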

  • States (12 of 51): Of the 51 “state” grantees (50 states plus the District of Columbia), we selected 12 for the sample. Four states were selected with certainty because their grants were over $25 million: California ($44.5 million), Texas ($41.5 million), Ohio ($26.2 million), and New York ($25.5 million). The eight non-certainty states were selected by region and grant amount to represent the other 39 states. For the selection of non-certainty states, the population of states within each region was divided into two strata: the first stratum contained all state grantees receiving more than $10 million, and the second stratum all state grantees receiving less than $10 million. One state grantee was selected with equal probability within each stratum in each region.

  • Puerto Rico (3 of 25): Puerto Rico has one commonwealth grantee and 24 municipal grantees. As discussed earlier, we selected the commonwealth with certainty. We also selected two municipalities to represent the other 24 grantees. We randomly selected one grantee with a grant over $1 million and one with a grant less than $1 million.

  • Counties (27 of 151): Of the 151 county grantees, a sample of 27 counties was selected. Only Los Angeles County ($12.2 million) had a grant over $10 million, so it was the only county selected with certainty. The county strata contain one-third of the non-certainty county and city grantees, so we allocated one-third of the remaining non-certainty sample slots, or 26 counties, to the county strata. We allocated the non-certainty sample to each region in proportion to the number of counties in the population in that region. Within each region, a systematic sample of counties was selected. In systematic sampling, we first determine the sampling interval by dividing the number of counties on the list in the stratum by the number of counties to be selected. A starting point between 1 and the sampling interval is randomly generated; if the desired sample size is n, the remaining (n-1) selection numbers are generated by adding the sampling interval successively to that starting point. The list of counties was sorted by grant amount and numbered, and all counties whose list numbers match the generated numbers are included in the sample. Systematic sampling after sorting ensures a representative distribution of counties by grant amount (a code sketch of this procedure follows this list).


  • Cities (58 of 309): Of the 309 city grantees, we selected six certainty cities and 52 non-certainty cities. The six certainty cities with grants of more than $10 million are New York City ($73.9 million), Chicago ($34.4 million), Los Angeles ($29.4 million), Philadelphia ($21.5 million), Detroit ($15.2 million), and Houston ($12.4 million). The city strata contain two-thirds of the non-certainty county and city grantees, so we allocated two-thirds of the non-certainty sample slots, or 52 cities, to the city strata. We allocated the non-certainty sample within each city-region stratum in proportion to the number of city grantees in that region. Within each region, we systematically sampled the cities after ordering by grant amount, using a method similar to that described for the selection of counties.
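The systematic selection described for the county and city strata can be sketched in Python as follows. This is a minimal illustration using a fractional sampling interval and an assumed grant_amount field; it is not the project's production code.

import random

def systematic_sample(units, n):
    # Sort by grant amount so the sample spans small and large grants, then take
    # every k-th unit from a random starting point, where k (the sampling interval)
    # is the list length divided by the number of units to select.
    ordered = sorted(units, key=lambda u: u["grant_amount"])
    interval = len(ordered) / n
    start = random.uniform(0, interval)
    return [ordered[int(start + i * interval)] for i in range(n)]

# Example: select 9 of the 55 southern county grantees (see Exhibit 1).
# sample = systematic_sample(south_counties, 9)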



Exhibit 1 shows the universe and sample sizes of grantees for each of the strata.

EXHIBIT 1. Grantee Universe and Sample by Strata

                                              -------- Sample Size --------
Region                 Grantees in Universe   Certainty  Non-Certainty  Total

State Grantees
  East                            9               1            2          3
  Midwest                        12               1            2          3
  South                          17               1            2          3
  West                           13               1            2          3
  Sub-Total                      51               4            8         12

County Grantees
  East                           39               0            7          7
  Midwest                        26               0            5          5
  South                          55               0            9          9
  West                           31               1            5          6
  Sub-Total                     151               1           26         27

City Grantees
  East                           76               2           13         15
  Midwest                        68               2           11         13
  South                          80               1           14         15
  West                           85               1           14         15
  Sub-Total                     309               6           52         58

Puerto Rico Grantees
  Commonwealth                    1               1            0          1
  Municipalities                 24               0            2          2
  Sub-Total                      25               1            2          3

Total                           536              12           88        100

Note: Universe does not include the four grantees from American Samoa, Guam, the Northern Marianas, and the Virgin Islands.

B1.2.b. Selection of Subgrantees

We will also select 400 subgrantees for the survey. This sample will be selected from among the subgrantees that are providing homelessness prevention services for the grantees included in the first stage of the sample. From each sampled grantee, we will obtain a list of subgrantees providing prevention services, the amount of each subgrant, and contact information for each subgrantee.1 The sample selection method for the 400 subgrantees will be the same as for the grantees. We will first review the subgrant amounts to determine if there are any particularly large programs (in terms of dollar amounts) that we should select with certainty. We will then allocate sample slots to each grantee based on its share of all subgrantees across the 100 sampled grantees (a sketch of this allocation step follows). Then, within each grantee, we will select a random sample using the systematic sampling technique described earlier.
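As an illustration of the proportional allocation step, the sketch below distributes the 400 subgrantee slots across grantees in proportion to each grantee's share of listed subgrantees. The largest-remainder rounding is our assumption for handling fractional allocations, since the text does not specify a rounding rule, and edge cases (such as a grantee with fewer subgrantees than its allocation) are ignored here.

def allocate_slots(subgrantee_counts, total_slots=400):
    # subgrantee_counts maps each sampled grantee to its number of listed subgrantees.
    total = sum(subgrantee_counts.values())
    raw = {g: total_slots * c / total for g, c in subgrantee_counts.items()}
    slots = {g: int(r) for g, r in raw.items()}
    leftover = total_slots - sum(slots.values())
    # Hand the remaining slots to the grantees with the largest fractional remainders.
    for g in sorted(raw, key=lambda g: raw[g] - slots[g], reverse=True)[:leftover]:
        slots[g] += 1
    return slots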

Weighting the Estimates to be Nationally Representative

Weighting the sample is straightforward, because each selected grantee has a known probability of selection. The sampling weight is simply the inverse of that probability. For example, if a grantee has a 0.05 (or 5 percent) probability of being selected, the sampling weight would be 20 (1 divided by 0.05). Certainty grantees have a probability of 1.0 (or 100 percent) of being selected, so their sampling weight is 1, which indicates that a certainty grantee represents only itself in national estimates. The probability of a non-certainty grantee being selected is simply the number of non-certainty grantees selected in its stratum divided by the total number of non-certainty grantees in that stratum. For example, two of the eight non-certainty states in the East were selected for the sample, so the probability of such a grantee being selected is 0.25 (two divided by eight). The weight is the inverse of that selection probability, which is equal to 4.
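A minimal sketch of the grantee weight calculation just described; the function name and arguments are illustrative, not part of the study's actual code.

def grantee_weight(is_certainty, n_selected_in_stratum, n_in_stratum):
    # The sampling weight is the inverse of the selection probability;
    # certainty grantees are selected with probability 1 and get weight 1.
    if is_certainty:
        return 1.0
    return n_in_stratum / n_selected_in_stratum

# Worked example from the text: 2 of the 8 non-certainty East states were selected,
# so grantee_weight(False, 2, 8) returns 4.0.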

The weights for the subgrantees have to take into account both the probability of the grantee being selected and the probability of the subgrantee being selected conditional on the grantee being selected. If one out of 10 subgrantees of a state in the East is selected (i.e., a probability of 0.1 conditional on the state being selected), then the total probability of that subgrantee being selected is 0.25 (the probability of the state being selected) multiplied by 0.1, which equals 0.025. The inverse of 0.025 is equal to 40, so the sampling weight for that subgrantee would be 40. If a subgrantee receives funding from multiple grantees in the grantee sample, the probability of selection for that subgrantee has to take into account that the subgrantee had multiple opportunities to be selected for the sample. Overall, the probability of selection for such a subgrantee is the probability of selection under the first grantee plus the probability of selection under the second grantee minus the probability of selection under both grantees. As for the other subgrantees, the weight for that subgrantee is the inverse of the probability of selection.
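The subgrantee weight calculation, including the adjustment for a subgrantee listed under two sampled grantees, can be sketched as follows. Treating the two grantee selections as independent (so the joint probability is the product of the two probabilities) is our simplifying assumption; the text does not specify how the joint probability is computed.

def subgrantee_weight(p_grantee, p_within, p_grantee_2=None, p_within_2=None):
    # Overall selection probability: the grantee's probability times the conditional
    # subgrantee probability, with an inclusion-exclusion adjustment when the
    # subgrantee could also have been selected through a second sampled grantee.
    p = p_grantee * p_within
    if p_grantee_2 is not None:
        p2 = p_grantee_2 * p_within_2
        p = p + p2 - p * p2  # assumes the two selection events are independent
    return 1.0 / p

# Worked example from the text: 0.25 * 0.1 = 0.025, so subgrantee_weight(0.25, 0.1)
# returns 40.0.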

The sampling weights will have to be adjusted to account for non-response to the survey to create analysis weights. Our plan is to adjust the weights within each stratum. For subgrantees, this means adjusting the weights of the respondent subgrantees of each grantee to account for the non-respondent subgrantees of that grantee. For example, if two of the ten sampled subgrantees for a grantee do not respond to the survey, the weights of the eight respondent subgrantees would be multiplied by 1.25 (10 divided by 8) to create the analysis weights. As a result, the sum of the analysis weights of the subgrantees equals the sum of the sampling weights, which ensures that all subgrantees are accounted for in the weighting. We will also compare non-respondents to respondents on observable characteristics to understand whether there is any pattern, and thus potential bias, from not having a 100 percent response rate.
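A sketch of the within-stratum non-response adjustment. It generalizes the ten-subgrantee example above by inflating respondents' weights so that the analysis weights sum to the full sample's sampling weights; the dictionary-based data structures are illustrative assumptions.

def adjust_for_nonresponse(sampling_weights, responded):
    # sampling_weights: unit -> sampling weight for every sampled unit in the stratum
    # responded: unit -> True if the unit completed the survey
    total = sum(sampling_weights.values())
    respondent_total = sum(w for u, w in sampling_weights.items() if responded[u])
    factor = total / respondent_total
    return {u: w * factor for u, w in sampling_weights.items() if responded[u]}

# With 10 equally weighted subgrantees and 2 non-respondents, factor = 10 / 8 = 1.25,
# matching the example in the text.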

B1.3. Expected Response Rates

A high response rate of 75 percent is expected for this data collection effort. This expectation is based in part on the fact that prospective respondents have an interest in advancing knowledge of HPRP. Many grantees and homelessness service providers also routinely gather and report detailed information about their programs to HUD and consider data collection an integral part of their own internal processes. HUD will also send an advance letter to the selected grantees/subgrantees explaining the importance of the study and encouraging their participation, and will communicate general encouragement to participate to HPRP grantees and others through various media (e.g., listservs), which is likely to have a positive effect on response rates. Furthermore, the data collection procedures (described in Section B2) will entail e-mail, telephone, and mail reminders to complete the survey. Non-respondents to the web survey will also be given an opportunity to complete the survey by phone or on a self-administered paper form.

Reliability of estimates

The confidence intervals for the possible analysis samples are shown in Exhibit 2. The 95 percent confidence intervals are shown for a range of possible prevalence rates (i.e., percentage rates), assuming a simple random sample was selected and that 75 percent of the selected sample complete the survey. For a sample of 350 respondents—the expected sample size for subgrantees plus grantees that provide direct services—the second column of Exhibit 2 shows that a prevalence estimate of 10 percent or 90 percent will have a 95 percent confidence interval of plus or minus 3.1 percentage points. This means that for a prevalence estimate of 90 percent from a sample of 350 respondents, we are 95 percent confident that the true population prevalence rate is between 86.9 percent and 93.1 percent. Our actual confidence intervals may be wider than the estimates in the exhibit because of the complex two-stage sampling procedure for subgrantees described earlier (first selecting grantees, then subgrantees of the selected grantees). However, our stratification prior to sampling will partially offset this increase.





EXHIBIT 2. 95 Percent Confidence Intervals, by Prevalence Estimate

Sample Size
(completed    Analysis Description                              10% or 90%   20% or 80%   30% or 70%   40% or 60%   50%
surveys)

  75          Grantee-only analysis                             +/- 6.3 pp   +/- 8.5 pp   +/- 9.7 pp   +/- 10.4 pp  +/- 10.6 pp
 300          Subgrantee only                                   +/- 3.4 pp   +/- 4.5 pp   +/- 5.2 pp   +/- 5.5 pp   +/- 5.7 pp
 350          Subgrantee and direct service provider grantees   +/- 3.1 pp   +/- 4.2 pp   +/- 4.8 pp   +/- 5.1 pp   +/- 5.2 pp

Notes: pp = percentage points. Sample sizes assume a 75 percent response rate for the grantee and subgrantee samples and that 50 of the 75 grantees are also direct service providers. Confidence intervals assume simple random sampling; the confidence intervals for grantees reflect a finite population correction.
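The half-widths in Exhibit 2 can be approximately reproduced with the standard normal-approximation formula. The sketch below applies a finite population correction for the grantee-only row using the 536-grantee universe, which is our reading of the exhibit notes; the exact correction used for the exhibit may differ slightly.

import math

def ci_half_width(p, n, population=None, z=1.96):
    # 95 percent confidence half-width for a prevalence estimate p from n completed
    # surveys, with an optional finite population correction.
    half = z * math.sqrt(p * (1 - p) / n)
    if population is not None:
        half *= math.sqrt((population - n) / (population - 1))
    return half

# ci_half_width(0.10, 350) is about 0.031, i.e., +/- 3.1 percentage points.
# ci_half_width(0.50, 300) is about 0.057, i.e., +/- 5.7 percentage points.
# ci_half_width(0.10, 75, population=536) is about 0.063, i.e., +/- 6.3 percentage points.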

In addition to sampling error, research results may be subject to other types of error, including issues of non-response or measurement relating to question wording or respondents’ ability to recall factual information or articulate answers. Note that there is no standard or statistical means to measure the consequences of these types of effects.

B2. Procedures for the Collection of Information

Abt Associates’ survey group, Abt SRBI, will be responsible for survey programming, administration, and management. We expect most respondents will access and complete the web-based survey on the Internet. However, to maximize response rates and provide flexibility, respondents will also be able to complete the survey by mail or by telephone. Upon receiving OMB approval for the survey, Abt SRBI will program the instruments for web administration, with mail and telephone versions prepared for those respondents who choose not to complete the survey via the Internet. Abt SRBI maintains a staff of executive interviewers in its telephone center who have extensive experience conducting surveys of the types of professional populations that our grantee and subgrantee respondents represent. The interviewers are skilled in working with gatekeepers, are sensitive to demands on respondent time, and recognize that respondents may be asked to participate in many surveys each month. They will be trained on the fine points of this specific web survey.

Our goal is to achieve a response rate of 75 percent for the survey. We have established a number of procedures to help us reach this goal. Exhibit 3 describes our step-by-step approach to ensure a high response rate. In particular, we want to make sure we reach the most knowledgeable respondents, that is, those who are most familiar with the HPRP-funded activities we are asking about, as efficiently as possible. To do this, Abt SRBI staff will first contact grantees by telephone based on contact information in HUD performance reports. During these calls, we will confirm the name and telephone and email contact information for the most appropriate respondent for the grantee survey. We will explain the purpose of the survey, provide details of the timing, and explain that the survey is voluntary. This information will also be repeated in an advance letter from HUD’s Office of Policy Development and Research that will be mailed to the identified survey respondents. During the contact verification calls, we will also confirm the subgrantees associated with the sampled HPRP grant.

Once the appropriate survey respondents have been identified, Abt SRBI will send advance letters to survey respondents with specific information about the web survey including the URL address, a unique password for accessing the survey, and a toll-free telephone number to call for additional information, assistance linking to the survey, or an option to complete the survey by telephone. If respondents have not completed the web survey within five days of receiving the notification letter, staff from Abt SRBI will send a series of email invitations to respondents. Those who do not complete the survey online will be contacted by telephone to complete the survey by phone. Those who still do not complete either online or by telephone will be mailed a survey packet with the survey, a postage-paid return envelope, and a letter encouraging their participation.

The survey field period will last roughly 8 to 10 weeks. Abt SRBI will prepare regular updates on survey completions during the field period. We plan to prepare preliminary tabulations on the data once we reach approximately a 50 percent completion rate. This will allow us to look at early patterns in the survey data and to begin to identify potential sites for the second round of site visits, which will take place after the survey data collection.

B3. Methods to Maximize Response Rates and to Deal with Issues of Non-response

The following methods will be used to ensure attainment of target response rates.

EXHIBIT 3: Steps to Maximize Survey Responses While Minimizing Cost

1. Contact verification: Begin with IPR and IDIS information on grantee and subgrantee contacts. Verify their mail and email contact information and ask screening questions to reach the most appropriate respondent.

2. Advance/pre-notification letter: An advance letter with the signature of the PD&R Assistant Secretary and HUD’s logo will be mailed to all verified respondents. It will explain the survey’s purpose and uses of the data, the time needed to complete the survey, and that the survey is voluntary and responses will remain confidential.

3. Web survey: The advance letter invites the respondents (Rs) to visit the website and complete the survey, giving a URL address, a password unique to the R, and a toll-free telephone number to call for additional information, assistance with linking to the web survey, or an option to complete the survey over the telephone. Rs who have not completed the survey via the web after 5 days will receive the first of three email invitations to visit the website and complete the survey. Rs who have not completed the web survey following the 3 follow-up emails will be contacted to complete the survey over the phone.

4. Telephone follow-up: 5 days after the first email invitation has been sent, telephone interviewers will contact Rs who have not completed the survey to confirm receipt of the email or resend it if needed, after verifying contact information. Sample updates and re-contacts will be made daily, up to a total of four, and one message will be left providing a toll-free number for respondents to call to get more information or complete the survey as an interview.

5. Mail survey: Rs who, after follow-up emails and attempts to reach them by phone, still have not completed a survey will be mailed a survey packet with a letter urging participation. The packet includes a paper survey booklet and a business reply envelope for returning the completed survey.

6. Final mail attempt: A final mailing will be made to all respondents who have still not participated.



B4. Pre-testing of Procedures and Methods

The survey instrument will be pre-tested with no more than nine respondents to ensure the questions are clear and to confirm survey length. We will conduct the pretest with a paper and pencil version of the survey so that revisions can be made before programming is undertaken. Pre-testers will be selected from among grantees and subgrantees that are not selected for the survey sample. We may include one or two HUD desk officers as pre-testers because they are very familiar with grantee and subgrantee activities. Once the survey is programmed for online administration, the instrument will be thoroughly tested to make sure instructions are clear and the survey functions properly.

B5. Individuals or Contractors Responsible for Statistical Aspects of the Design

  • The agency responsible for receiving and approving contract deliverables is:

Office of Policy Development and Research, Program Evaluation Division

U.S. Department of Housing and Urban Development

451 7th St, SW – Room 8120

Washington, DC 20410


Person Responsible:


Anne Fletcher, HUD/GTR, (202) 402-4347, [email protected]


  • The organization responsible for administering the online survey of grantees and subgrantees is:

Abt Associates Inc.

55 Wheeler Street

Cambridge, MA 02138


Persons Responsible:


Ms. Gretchen Locke, Abt Project Lead, (617) 349-2373, [email protected]

Ms. Julie Pacer, Abt SRBI Project Manager, (312) 529-9708, [email protected]


  • The organization responsible for statistical design of data that will be collected is:

Abt Associates Inc.

55 Wheeler Street

Cambridge, MA 02138


Persons Responsible:


Ms. Gretchen Locke, Abt Project Lead, (617) 349-2373, [email protected]

Dr. Larry Buron, Senior Associate, (301) 634-1735, [email protected]

Dr. K.P. Srinath, Sampling Statistician, (301) 634-1836


The Urban Institute

2100 M Street, NW

Washington, DC 20037


Persons Responsible:


Ms. Mary Cunningham, co-Principal Investigator, (202) 261-5764, [email protected]

Ms. Martha Burt, co-Principal Investigator, (202) 261-5551, [email protected]


  • The organization responsible for analyzing all data to be collected is:

The Urban Institute

2100 M Street, NW

Washington, DC 20037


Persons Responsible:


Ms. Mary Cunningham, co-Principal Investigator, (202) 261-5764, [email protected]

Ms. Martha Burt, co-Principal Investigator, (202) 261-5551, [email protected]


Abt Associates Inc.

55 Wheeler Street

Cambridge, MA 02138


Persons Responsible:


Ms. Gretchen Locke, Abt Project Lead, (617) 349-2373, [email protected]

Dr. Larry Buron, Abt task leader for analysis, (301) 634-1735, [email protected]




1 This information is available in the HUD performance reports. If the reliability of the data is in question, we will need to contact each of the grantees to obtain the information needed.
