
OMB Clearance Number: 2528-0337

Expires: XX/XX/XXXX

Supporting Statement for Paperwork Reduction Act Submissions

Community Choice Demonstration

(OMB # 2528-0337)



B. Collections of Information Employing Statistical Methods


The agency should be prepared to justify its decision not to use statistical methods in any case where such methods might reduce burden or improve accuracy of results. When statistical methods are involved, the following documentation should be included with Supporting Statement A to the extent that it applies to the methods proposed:


  1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.


The Community Choice Demonstration (OMB control # 2528-0337, Expiration: June 30, 2025), referred to herein as the Demonstration, is a large, eight-site randomized experiment that aims to enroll approximately 15,250 Housing Choice Voucher (HCV) program families over a five-year enrollment period—of which approximately 14,350 will be existing voucher families and approximately 900 will be waitlist families. Enrollment will occur in two phases. During the first two years of enrollment (Phase 1), the Demonstration will randomly assign approximately 5,100 families to either (a) a single treatment group or (b) a control group that will not receive any mobility-related services. The treatment group will be offered a comprehensive set of mobility-related services (CMRS). During the following three years of enrollment (Phase 2), beginning in fall 2024, the Demonstration will add a second treatment group, offered selected mobility-related services (SMRS). During Phase 2, families will be randomly assigned to one of three groups: (a) a group offered CMRS, (b) a group offered SMRS, or (c) the control group that will not receive any mobility-related services. Phase 1 of the study is evaluating whether the offer of CMRS helps families with children access and remain in opportunity areas and is exploring which services appear to be most effective and cost-effective. Phase 2 will evaluate the effectiveness of SMRS and compare the outcomes of CMRS and SMRS to ascertain the most cost-effective methods of helping HCV families move to and stay in opportunity areas. Services are offered only to families randomly assigned to a treatment group. If a family decides they do not want to receive particular services, they may abstain from any or all services, but the full set of services is offered to everyone in the treatment group regardless of race, ethnicity, disability, or any other protected class.

This Information Collection Request (ICR) seeks approval for data collection that begins during Phase 1 and extends into Phase 2. The assessments will be limited to families in the group offered CMRS and the control group. As noted in Supporting Statement A, there are three types of data collection planned for this Demonstration during this phase of the evaluation and covered by this ICR—the Home Assessment, the Child Assessment, and the Obesity and Type II Diabetes Risk Assessment—which will enable us to examine the impact of moves to opportunity areas on child health. Although the main Community Choice Demonstration (CCD) study was not originally designed to examine improvements in child health, these additional assessments provide an opportunity to do so.

Sampling Plan

The following is a description of the Sampling Plan for the three assessments: 1) the Home Assessment (administered to families at 2 of the 8 PHAs participating in the Demonstration); 2) the Child Assessment; and 3) the Obesity and Type II Diabetes Risk Assessment (the latter two administered together, as part of a single coordinated visit, to families at 3 PHAs). The evaluation contractor has ensured that the 2 PHAs selected for the Home Assessment differ from the 3 PHAs selected for the Child Assessment and the Obesity and Type II Diabetes Risk Assessment, which will minimize the burden on participating families. Eligible participants include all families enrolled in the CCD study at one of the 5 sites where the assessments are being conducted; they will receive targeted outreach to participate in these assessments. Each assessment will involve a convenience sample of CCD households who opt to participate.

Because these assessments involve a very targeted sample – households eligible for vouchers that are participating in the CCD – there is likely to be less variation between participants. Nonetheless, in the event that there is substantive variation between the characteristics of participants who are randomly assigned to receive CMRS and those who are not, the analysis strategy will involve adding weights so that treatment and control groups are equivalent at baseline.

Home Assessment

For the Home Assessment, the evaluation contractor will select a convenience sample consisting of the heads of household of an estimated 570 families. Selection of the Home Assessment sample will begin after families enroll in the main Demonstration. The evaluation contractor will conduct outreach for the Home Assessment to all families who enroll in the main Demonstration and are randomized to either the CMRS Group or the Control Group in the period beginning 45 days before launch of the Home Assessment through 15 months following the launch date. The evaluation contractor will send families a letter introducing the Home Assessment and will then follow up with a phone call to explain the study and ask to set up an appointment to visit the family at home. During the home visit, the evaluation contractor will explain the nature of the Home Assessment and the risks and benefits of participating to the head of household. The evaluation contractor will then ask the head of household to provide written informed consent to participate. Recruitment will stop when a sample of 570 consented families has been obtained or when the end of the recruiting period noted above is reached.1 The evaluation contractor estimates it may need to send initial letters to up to 814 families in order to recruit 570 families to participate in the Assessment.

The evaluation contractor will conduct a Home Assessment at baseline and again approximately 12 months later, each consisting of three components:

  • Direct measurements of pest allergens and indoor air quality in the home.

  • Observations by an evaluation contractor interviewer.

  • Brief survey administered to the heads of household.

Child Assessment

For the Child Assessment, the evaluation contractor will administer validated child and parent/guardian assessments at baseline (shortly after random assignment) and at a two-year follow-up to a convenience sample of families enrolled in the Demonstration at the Cuyahoga County, Nashville, and Pittsburgh Metro Demonstration sites. Families are eligible for participation if they have at least one child between the ages of 2 and 15 years old at baseline, as indicated on the family’s Demonstration baseline information form. All families with a child in that age range who enroll in the Demonstration at these sites over the course of a two-year enrollment period and are randomized to either the CMRS Group or the Control Group will be contacted shortly after random assignment in the Demonstration and invited to participate in the Child Assessment and the Obesity and Type II Diabetes Risk Assessment. (Note, however, that we will exclude families where the parent or guardian of the focal child is pregnant at baseline.)2 The evaluation contractor will send families a letter introducing the Child Assessment and the Obesity and Type II Diabetes Risk Assessment. Because data collection for these two assessments will occur during the same home visit, these two assessments will be branded as one study—the Mobility Opportunity Vouchers for Eliminating Disparities (MOVED) study—in materials provided to prospective families. The evaluation contractor will then follow up with a phone call to set up an appointment to visit the family at home. During the home visit, the evaluation contractor will explain the nature of the assessments, and the risks and benefits of participating, to the parent of a pre-selected focal child, and then ask the parent to provide written informed consent to their own participation and that of the focal child. Children ages 10 and older will also be asked to assent to their participation. Recruitment for the Child Assessment will stop when 837 adults complete the Child Assessment, 900 adults complete the Obesity and Type II Diabetes Risk Assessment, or the end of the two-year recruiting period is reached, whichever happens first.3 The evaluation contractor estimates it may need to send initial letters to up to 1,285 families in order to recruit 900 families to participate in the Obesity and Type II Diabetes Risk Assessment and 837 to participate in the Child Assessment.

Baseline data will be collected from one randomly selected focal child and the child’s parent or guardian in an estimated 837 families. These families will be evenly split between families in the group offered CMRS and the control group. These families will also be participating in the Obesity and Type II Diabetes Risk Assessment as part of the MOVED study, described in more detail below. The evaluation contractor will randomly select a focal child from each participating family to be the subject of data collection in both the Child Assessment and the Obesity and Type II Diabetes Risk Assessment. Data collection for the Child Assessment will include two components:

  • Survey about child (questions asked of parent or guardian).

  • Direct child assessment and survey.

The Obesity and Type II Diabetes Risk Assessment

For the Obesity and Type II Diabetes Risk Assessment, the evaluation contractor will administer data collection at baseline and a two-year follow-up at the Cuyahoga County, Nashville, and Pittsburgh Metro Demonstration sites (the same sites as the Child Assessment). The sampling procedures for the Obesity and Type II Diabetes Risk Assessment will be the same as for the Child Assessment, described above, with one exception. The Obesity and Type II Diabetes Risk Assessment is not limited to families with a child aged 2-15, so all families who enroll in the main Demonstration during the two-year recruitment window in either the CMRS or control group, regardless of the age of their children, will be invited to participate in the Obesity and Type II Diabetes Risk Assessment and asked to complete informed consent. Recruitment for the Obesity and Type II Diabetes Risk Assessment will stop when 900 adults have consented to participate or when the end of the two-year recruiting period is reached, whichever happens first.4

Baseline data will be collected for an estimated 900 families (approximately half of whom will be in the group offered CMRS and the other half in the control group) from one parent or guardian and one child recruited from families that enroll in the main Demonstration over the two-year recruitment period. (An estimated 837 of these families will also be participating in the Child Assessment, as described above.) For questions or measurements related to children, the focal child for the Obesity and Type II Diabetes Risk Assessment will be the same child the evaluation contractor selected for the Child Assessment. Data collection at baseline and follow-up will include the components noted below. Participants will complete some or all of these data collection activities, as some activities apply only to a sub-sample of participants, as noted:

  • Adult survey.

  • Anthropometric assessment (adult).

  • Anthropometric assessment (child).

  • Blood spot sample (adult).

  • Home Observations/Housing Assessment.

  • Accelerometers (adult). A subset of 400 adults (about half from the group offered CMRS and half from the control group).

  • Accelerometers (child). A subset of 400 children (about half from the group offered CMRS and half from the control group).

  • Blood pressure readings (adult).

  • Semi-structured interviews (adult). Conducted only at follow-up with a subset of 75 adults, 25 from each site (oversampling the treatment group relative to the control group 2:1).

  • Tracking Contacts:

    • Tracking Emails/Texts (adult).

    • Tracking Calls (adult).


Exhibit B-1 shows the maximum sample size and projected response rates at baseline and follow-up for each data collection activity under the Home Assessment, Child Assessment, and the Obesity and Type II Diabetes Risk Assessment. The response rate for the baseline assessments is 100% since it reflects only those families who consent to participate in the study. The evaluation contractor is aiming to achieve an 80 percent response rate for the follow-up assessments. This can be challenging to achieve, but it is the target because a high response rate helps reduce non-response bias. The evaluation contractor has achieved response rates of 80% or higher for follow-up surveys in other studies. For example, the HUD study, Impact of Housing and Services Interventions for Homeless Families, achieved an 81% response rate for the 18-month follow-up survey and a 79% response rate for the 36-month survey, and the Social Security Administration’s Benefit Offset National Demonstration (BOND) achieved an 85.6% response rate for the 24-month follow-up survey and an 81% response rate for the 36-month follow-up survey.

Exhibit B-1 Sample Sizes and Projected Response Rates by Specific Component

Information Collection Component | Number of Respondents | Response Rate at Baseline | Response Rate at Follow-up
Enrollment in main Demonstration | 15,250 | N/A | N/A
The Home Assessment
  Consent for Assessment | 570 | 100 percent (570) | 80 percent (456)
  Direct Measurements | 570 | 100 percent (570) | 80 percent (456)
  Interviewer Observations | 570 | 100 percent (570) | 80 percent (456)
  Survey | 570 | 100 percent (570) | 80 percent (456)
The Child Assessment
  Consent for Assessment | 837 | 100 percent (837) | 80 percent (670)
  Survey about child (asked of parent/guardian) and parent/guardian's presence during direct Child Assessment | 837 | 100 percent (837) | 80 percent (670)
  Direct Child Assessment | 837 | 100 percent (837) | 80 percent (670)
The Obesity and Type II Diabetes Risk Assessment
  Consent for Assessment | 900 | 100 percent (900) | 80 percent (720)
  Adult Survey | 900 | 100 percent (900) | 80 percent (720)
  Anthropometric assessments (adult) | 900 | 100 percent (900) | 80 percent (720)
  Anthropometric assessments (child) | 900 | 100 percent (900) | 80 percent (720)
  Anthropometric assessments (child, but accounting for parent's time) | 900 | 100 percent (900) | 80 percent (720)
  Blood spot samples (adult) | 900 | 100 percent (900) | 80 percent (720)
  Home Observations/Housing Assessment | 900 | 100 percent (900) | 80 percent (720)
  Accelerometers (adult) | 400 | 100 percent (400) | 80 percent (320)
  Accelerometers (child) | 400 | 100 percent (400) | 80 percent (320)
  Blood Pressure Reading (adult) | 900 | 100 percent (900) | 80 percent (720)
  Consent for Semi-Structured Interviews* | 75 | N/A | 100 percent (75)
  Semi-Structured Interviews* | 75 | N/A | 100 percent (75)
  Tracking Emails/Texts (9 and 15 months)** | 900 | N/A | 20 percent (180)
  Tracking Calls (6, 12, and 18 months)** | 900 | N/A | 20 percent (180)

* Semi-structured interviews are only performed at follow-up.

** These response rates are expected to be 20 percent because the contractor only expects a response if the participant’s contact information has changed and because, generally, only a small share of families respond to tracking emails or calls.

  2. Describe the procedures for the collection of information including:


  • Statistical methodology for stratification and sample selection,

  • Estimation procedure,

  • Degree of accuracy needed for the purpose described in the justification,

  • Unusual problems requiring specialized sampling procedures, and

  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


This section describes the procedures for the collection of information, followed by the analysis and estimation procedures. Procedures are provided first for the Home Assessment data collection. Procedures are then provided for the Child Assessment data collection and Obesity and Type II Diabetes Risk Assessment data collection, which will happen at the same time. The section concludes with statements regarding unusual problems requiring specialized sampling and the use of periodic data collection cycles to reduce burden.

Procedures

Home Assessment

Beginning in January 2024 (pending OMB approval), PHA staff will begin recruitment efforts by providing an informational flyer (Attachment A.1) describing the Home Assessment to families at the end of their enrollment meeting for the main Demonstration. Recruitment will occur in each of the two sites selected for participation in the Home Assessment (Minneapolis and Rochester). The recruitment flyer will describe the Home Assessment and its data collection activities. The recruitment flyer will also alert families that the evaluation contractor will reach out to tell them more about the study. Families participating in the main Demonstration are not required to participate in the Home Assessment. A decision to participate or not participate in the Home Assessment will not impact their participation in the main Demonstration in any way.

Local interviewers in each site will then use a multi-mode approach to recruit families who have enrolled in the Demonstration into the Home Assessment, including phone calls, e-mails, and letters to connect with each family. The evaluation contractor will send families an advance letter (Attachment A.2) and then a follow-up email (Attachment B.1) notifying them about the study. Once the interviewers contact a family by phone (Attachment B.2), they will explain the purpose of their call and attempt to schedule an in-person visit. Interviewers will record all scheduled home visit appointments or phone call-backs in their electronic record of calls (ROC). The ROC is embedded in the computer-assisted personal interviewing (CAPI) system.

Once the home visit is scheduled, interviewers will visit the home of each family that expresses an initial interest in participating. During the visit, interviewers will obtain informed consent of the head of household to participate in the Home Assessment. The interviewer will review the consent form with the head of household and address any questions that they have. The head of household will then be asked to sign the form if they consent to participate. The Consent form developed by the evaluation contractor, in consultation with HUD and the contractor’s Institutional Review Board, is included as Attachment C.

Once the interviewers obtain informed consent from the head of household, they will proceed with the data collection components. First, the interviewer will conduct direct measurements of pest allergens and indoor air quality in the home and just outside the home, assessing for 1) temperature and relative humidity, 2) carbon dioxide, 3) carbon monoxide, 4) mouse and cockroach allergens, 5) particulate matter, and 6) volatile organic compounds (VOCs—chemicals that enter the air from paints, cleaners, etc.). Next, the interviewer will observe the home, focusing on risk factors for asthma and respiratory conditions, including sources of indoor air pollutants and allergens, and housing and neighborhood quality. Interviewers will record all data electronically on their tablets using a data collection platform that supports offline data collection; the data are synced when interviewers later connect to Wi-Fi. Finally, interviewers will provide families with a tablet for the head of household to self-administer a brief survey to obtain information on risk factors for asthma and other respiratory conditions and child health conditions. Interviewers will follow these same procedures to conduct the follow-up data collection activities approximately 12 months later. See Attachments D, E, F.1 and F.2 for copies of the data collection instruments to be used in the Home Assessment.

Data collection will be performed by trained interviewers overseen by a field director for each site. The local interviewers will conduct the enrollment and baseline data collection for the Home Assessment data collection, respond to inbound contacts from participants, and administer all follow-up data collection. Interviewers will complete comprehensive training prior to conducting data collection. The training will provide a study overview, guidelines for obtaining cooperation, purpose of each data collection instrument, and ample time to conduct mock interview sessions. Interviewers will receive a set of Frequently Asked Questions (FAQs) and supplemental materials to aid them in their data collection efforts. Interviewers and field managers will be in close contact throughout the study period to proactively monitor activities and strategize on solutions for contacting hard-to-reach participants and getting them to respond.

Child Assessment and Obesity and Type II Diabetes Risk Assessments

To minimize the burden on prospective families, the evaluation contractor will conduct the data collection for the Child Assessment and Obesity and Type II Diabetes Risk Assessments during the same visits to families recruited from those who enrolled in the main Demonstration at the three participating sites (Cuyahoga County, Nashville, and Pittsburgh Metro). To prospective families, these two assessments will be branded as the MOVED study.

The evaluation contractor has designed a flyer to describe the Child Assessment and the Obesity and Type II Diabetes Risk Assessment (Attachment J.1), and the activities associated with these assessments. Beginning in January 2024 (pending OMB approval), PHA staff at the three participating sites will provide this flyer to families who enroll in the main Demonstration and let the families know that an interviewer will be in touch to tell them more about the study.

Local interviewers in each site will use a multi-modal approach to recruitment, including phone calls, e-mails, and letters to connect with each family. As with the Home Assessment, the evaluation contractor will send families an advance letter (Attachment J.2) and then a follow-up email (Attachment K.1) notifying them about the study. Once the interviewers contact a family by phone (Attachment K.2), they will explain the purpose of their call and attempt to schedule an in-person visit. Interviewers will record all scheduled home visit appointments or phone call-backs in their electronic record of calls. The call records are embedded in the computer-assisted personal interviewing (CAPI) system. Families participating in the main Demonstration are not required to participate in the Child Assessment or the Obesity and Type II Diabetes Risk Assessment.

Interviewers will visit the homes of families that express an initial interest in participating. At the start of the enrollment visit, interviewers will obtain informed consent from adults and child assent. Both forms will include an overview of the study and the risks and benefits of participating, the voluntary nature of participation, how the data will be used and who to contact with questions. The consent form to participate in the Obesity and Type II Diabetes Risk Assessment and Child Assessment (Attachment L) will explain each of the data collection components. Once the interviewer reviews the consent form with the family, the adult will be asked to sign the form to consent to their own participation and give permission for the participation of their child. Children 10 and up will also be asked to assent (Attachment G).

Once informed consent has been obtained, interviewers will proceed with the data collection components. Interviewers will have the flexibility to begin with either the child data collection components or the adult data collection components, depending on the family's preferences. Assuming the family wants to complete the child components first, so that the children can leave the room afterward, the data collection process will work as follows:5 Using their tablets, interviewers will first conduct the Child Assessment (Attachments I.1, I.2, and I.3) followed by the child anthropometric portions of the Obesity and Type II Diabetes Risk Assessment (Attachment O). Interviewers will next conduct the adult data collection pieces, including the Adult Survey (Attachments M.1 and M.2) and the Survey on Child Outcomes (Attachments H.1 and H.2, from the Child Assessment, which is directed at the parent/guardian). These two modules will be administered together so that they feel like one interview to the respondent.6 The interviewer will also complete the adult Anthropometric assessment (Attachment N), Blood spot sample (Attachment T, adult), Accelerometers with a sub-set of adults (Attachment R), Accelerometers with a subset of children (Attachment S), and Blood pressure readings (Attachment P, adult) with those who consented to participate in each of those components of the Obesity and Type II Diabetes Risk Assessment. At the end of the visit, once the other data collection components are complete, the interviewer will complete the Home Observations (Attachment Q) using their tablet.

Participants in the evaluation will be informed that arrangements are available to assist individuals who require assistance with auditory or other disabilities or disorders. The evaluation contractor, PHAs, and HUD will ensure communication with individuals with disabilities is as effective as communication with individuals without disabilities throughout this study. The evaluation contractor will make the survey available in English and Spanish and will work with HUD to accommodate interested families whose primary language is neither English nor Spanish to ensure meaningful access to individuals with Limited English Proficiency.

Interviewers will follow these same procedures to conduct the follow-up data collection activities two years later.

Data collection will be performed by trained interviewers and overseen by a field director for each site. The local interviewers will conduct the enrollment and baseline data collection for the Child Assessment and the Obesity and Type II Diabetes Risk Assessment data collection, conduct tracking outreach, respond to inbound contacts from participants, and administer all follow-up data collection. Interviewers will complete comprehensive training prior to conducting each round of data collection (enrollment and baseline data collection, tracking, and follow-up data collection activities). Training will provide a study overview, guidelines for obtaining cooperation, purpose of each data collection instrument, and ample time to conduct mock interview sessions. Interviewers will receive a set of Frequently Asked Questions (FAQs) and supplemental materials to aid them in their data collection efforts. Interviewers and field managers will be in close contact throughout the study period to proactively monitor activities and strategize on solutions for contacting hard-to-reach participants and getting them to respond.

For the Semi-Structured Interviews for the Obesity and Type II Diabetes Risk Assessment (Attachment V), the contractor will recruit 25 adults from each site (75 total) to participate in semi-structured in-depth interviews at the 2-year follow-up, after completion of their follow-up survey. Interviewers will ask participants if they would be willing to participate in an in-depth interview about their moving experience. Interviewers will obtain consent from participants prior to conducting these interviews (Attachment U). Interviewers will securely provide the Johns Hopkins University research team with the contact information of interested participants. The Johns Hopkins University research team will contact participants (Attachments W and X) and conduct the interviews. The contractor will oversample individuals from the treatment group relative to the control group at a 2:1 ratio, given the desire to understand the role of neighborhood change. The evaluation contractor will then identify potential participants based on the following criteria: (a) success in moving to opportunity areas (treatment group only), (b) sociodemographic characteristics, and (c) obesity (at baseline and reductions at follow-up). The evaluation contractor will establish a systematic approach to recruitment, generating lists of participants who meet the criteria (strata) and agreed to be contacted, and randomly selecting participants for interviews from each stratum. Participants will be offered the option to complete their interviews in person, via Zoom, or by phone, and interviews will follow the semi-structured interview guide.

Statistical Methodology for Stratification and Sample Selection

Home Assessment

Beginning in January 2024 (pending OMB approval), families who have enrolled in the Demonstration and been randomly assigned to the CMRS or control groups between 45 days prior to January 2024 and 15 months following will be eligible for recruitment to the Home Assessment Study. The Home Assessment Study will be conducted in two Demonstration sites (Minneapolis and Rochester). Families will be recruited into the Home Assessment Study on an ongoing basis as they are enrolled in the Demonstration until the Home Assessment Study reaches a baseline sample size of 570 or the recruitment window closes.

Child Assessment and Obesity and Type II Diabetes Risk Assessments

Beginning in January 2024 (pending OMB approval), all families who enroll in the main Demonstration at the three participating sites during the two-year recruitment window and are randomly assigned to the CMRS or the control groups are eligible for recruitment to the Child Assessment and the Obesity and Type II Diabetes Risk Assessment. The first 900 families enrolled will be part of the Obesity and Type II Diabetes Risk Assessment.

Among these 900 families, those with at least one child between the ages of 2 and 15 years old are eligible for inclusion in the Child Assessment. The study team expects that approximately 837 of these 900 families will have at least one eligible child, an estimate based on the age distribution of children in families who have already enrolled in CCD to date. Should more than 837 families have an eligible child, the first 837 families enrolled will be part of the Child Assessment.

Families who are eligible for the Child Assessment will have one focal child between the ages of 2 and 15 years at baseline randomly selected using simple random sampling (e.g., if a family has three potentially eligible focal children between the ages of 2 and 15 years, one of the three would be randomly selected with each child having a 1 in 3 chance of selection). For the Obesity and Type II Diabetes Risk Assessments, the same focal child selected for the Child Assessment will be assessed.
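To illustrate the focal-child selection step, a minimal R sketch is shown below; the roster data frame (child_roster) and its column names are hypothetical placeholders, not the contractor's actual data structures.

```r
# Illustrative sketch only: select one focal child per family by simple random
# sampling from a hypothetical baseline child roster.
set.seed(20240101)  # fixed seed so the selection can be reproduced

eligible <- subset(child_roster, child_age >= 2 & child_age <= 15)

focal_children <- do.call(rbind, lapply(split(eligible, eligible$family_id), function(fam) {
  picked <- fam[sample(nrow(fam), 1), ]   # each eligible child has a 1/n chance of selection
  picked$selection_prob <- 1 / nrow(fam)  # retained for the analysis weights described later
  picked
}))
```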

In the Obesity and Type II Diabetes Risk Assessment, accelerometry data will be collected for up to one adult and up to one focal child in a sub-sample of the enrolled families that have an eligible focal child between the ages of 5 and 15 selected as part of the Child Assessment, with a target enrollment of 400 families. The evaluation contractor will invite eligible families to participate, with approximately the first 133 families enrolling at each site taking part in the accelerometry data collection.

Estimation Procedure

Home Assessment

To generate causal estimates of the impact of offering CMRS services on indoor air quality, allergen exposure, and self-reported health, the evaluation will use an intent-to-treat framework to generate regression-adjusted impact estimates. In general, the impact estimation approach closely follows the one used for the main Demonstration evaluation, with two differences: 1) a baseline measure of the outcome will be included as a covariate, in addition to a parsimonious set of family and origin-neighborhood characteristics; and 2) because participant recruitment for the Home Assessment study will occur after random assignment into the main Demonstration evaluation, and because of potential sample loss at follow-up, the evaluation contractor will investigate whether the CMRS and Control groups have equivalent baseline outcome exposures. For the analytic sample (those with collected follow-up data), the evaluation contractor will develop a set of analytic weights to ensure that the weighted assignment groups are similar to each other on pre-randomization characteristics (which are collected during enrollment into the main Demonstration study). These weights will address any differential enrollment into the Home Assessment sample across assignment groups and any differential non-response to follow-up Home Assessment data collection.

The Home Assessment study's estimation model will take the form of:

$Y_{i} = \alpha + \beta T_{i} + \gamma' X_{i} + \delta S_{i} + \varepsilon_{i}$   (1)

where $Y_{i}$ is an outcome for family $i$, $T_{i}$ is an indicator equal to one if family $i$ was assigned to the group offered CMRS, $\beta$ is the estimated impact of being offered CMRS rather than the control condition, $X_{i}$ is a vector of family-level characteristics measured at baseline, including characteristics of the origin neighborhood and a baseline measure of $Y$, $S_{i}$ is a site-level dummy (fixed effect, with the other Home Assessment site serving as the reference group), $\varepsilon_{i}$ is a family-level residual, and $\alpha$, $\gamma$, and $\delta$ are other parameters.

The evaluation contractor will estimate this model using ordinary least squares (OLS) and use Eicker-White robust standard errors for hypothesis testing. The application of robust standard errors will help address potential heteroscedasticity caused by non-normality in , such as when is binary and (1) takes the form of a linear probability model. Judkins and Porter7 show that least squares estimation performs well with binary outcomes in sample sizes smaller than found in this study. Therefore, the contractor will use least squares for both continuous and binary outcomes. Consistent with the approach detailed in the main Demonstration impact analysis, the level of statistical significance for hypothesis testing will be 0.10.
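As an illustration of this estimation approach, the sketch below fits a model of the form of equation (1) by OLS and tests hypotheses with Eicker-White robust standard errors via the sandwich and lmtest packages; the data frame and variable names (home_assessment, outcome, cmrs, and the covariates) are placeholders rather than the contractor's actual specification.

```r
# Minimal sketch, not the contractor's production code: intent-to-treat impact
# model for a Home Assessment outcome with heteroscedasticity-robust inference.
library(sandwich)
library(lmtest)

fit <- lm(outcome ~ cmrs + outcome_baseline + hh_size + origin_poverty_rate + site,
          data = home_assessment, weights = analysis_weight)

# Eicker-White (HC2) robust standard errors; tests evaluated at the 0.10 level
coeftest(fit, vcov = vcovHC(fit, type = "HC2"))
```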

The research questions for the Home Assessment are not part of the confirmatory research questions for the main Demonstration study. The CMRS intervention was not specifically designed to impact family exposures to allergens or indoor air pollutants. Thus, no adjustments for multiple comparisons are planned, and results will be indicated as exploratory in the reporting.

Child Assessment

For the Child Assessment, the evaluation will analyze the data using an intent-to-treat framework, with a focus on understanding the average impact of the pairwise comparison of being assigned to CMRS versus the control group. In general, the impact analysis model is based on the estimation model procedures used in the main Demonstration study with three differences: (1) analysis weights will be used for child outcomes to account for the probability of selection as a focal child, so that analyses will equally represent all eligible children ages 2–15 in the study sample; (2) because participant recruitment for the Child Assessment study will occur after random assignment into the main Demonstration evaluation and potential sample loss at follow-up, the evaluation contractor will investigate whether the CMRS and Control groups have equivalent baseline outcome values; and (3) baseline measurement of the outcome will be included as a covariate, along with a select set of child and family demographic characteristics and study site. For the analytic sample (those with collected follow-up data), the evaluation contractor will develop a set of analytic weights to ensure that the weighted assignment groups are similar to each other on pre-randomization characteristics (which are collected during enrollment into the main Demonstration study). These weights will address any differential enrollment into the Child Assessment sample across assignment groups and any differential non-response to follow-up Child Assessment data collection.

Following standard practice, the evaluation will use regression adjustment to increase the precision of the impact estimates8. The evaluation contractor will estimate the impact on an outcome Y (e.g., child anxiety symptoms) of being offered CMRS rather than the control condition using estimating equation (2). The estimation equation is:

$Y_{i} = \alpha + \beta T_{i} + \lambda Y_{i}^{0} + \gamma' X_{i} + \sum_{k} \delta_{k} S_{ik} + \varepsilon_{i}$   (2)

Where:

$Y_{i}$ = outcome Y for focal child i at the two-year follow-up,

$T_{i}$ = indicator variable that equals one if family i was assigned to CMRS,

$\beta$ = estimated impact of being assigned to CMRS rather than the control condition,

$Y_{i}^{0}$ = the baseline value of the outcome Y for family i,

$X_{i}$ = a vector of background characteristics of family i,

$S_{ik}$ = indicator variable for site k for family i,

$\varepsilon_{i}$ = residual for child i (assumed mean-zero and i.i.d.),

$\alpha$ = a constant term, and

$\lambda$, $\gamma$, $\delta_{k}$ = other regression coefficients.

In this model, the evaluation assumes that the true impact of CMRS relative to the control condition is homogeneous across sites. The impact parameter will be a weighted average of the point estimates of site-level impacts, with each site-level impact weighted by the analysis-weighted number of families in the site.

The evaluation will plan to estimate the model above using least squares with survey non-response weights (described below). This estimation assumes that the outcome data have a normal distribution (i.e., form a bell-shaped curve) with a common variance (i.e., are homoscedastic). There is no reason a priori to expect homoscedasticity, however, since some types of families could have higher variability in their outcomes than other families and the different interventions could themselves influence this variability. Furthermore, some of the outcomes are binary; applying least squares to such binary outcomes (i.e., using the “linear probability model”) induces heteroscedasticity.9

To address the potential of heteroscedasticity, the evaluation will compute robust standard errors (i.e., Eicker-White robust standard errors). Judkins and Porter show that least squares estimation performs well with binary outcomes in sample sizes smaller than found in this study.10 Therefore, the evaluation will use least squares for both continuous and binary outcomes.

For each report, the evaluation contractor will perform balance tests of the pooled sample and key subgroups (existing voucher families, waitlist families, and site) to confirm that randomization and the post-randomization enrollment into the Child Assessment sample resulted in assignment groups with similar household and neighborhood characteristics at baseline. To address survey non-response, the evaluation contractor will prepare a set of non-response weights based on family characteristics measured in the baseline survey that attempt to adjust for nonresponse.11 These weights will be used in estimating impacts for all family and adult outcomes. The evaluation will develop separate analysis weights for the parent report on child outcomes and the direct child assessment. The parent report on child outcomes will incorporate a family-level non-response weight and the inverse probability of being selected as a focal child; the direct child assessment outcomes will additionally incorporate a child non-response weight. The aim of this analysis is to equally represent all eligible children in all families that enroll in the Child Assessment for study outcomes. Therefore, focal children in families with more children receive more weight in the analyses than focal children from families with fewer children.
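As one illustration of how such weights might be constructed, the sketch below models follow-up response with a logistic regression on baseline characteristics and combines the resulting non-response weight with the inverse probability of focal-child selection; the specification and variable names are assumptions for illustration only.

```r
# Illustrative sketch of non-response and child analysis weights
# (hypothetical variable names; not the contractor's weighting procedure).
resp_model <- glm(responded_followup ~ cmrs + hh_size + n_children + site,
                  family = binomial(link = "logit"), data = child_sample)

child_sample$p_respond <- predict(resp_model, type = "response")
child_sample$nr_weight <- ifelse(child_sample$responded_followup == 1,
                                 1 / child_sample$p_respond, 0)

# Child-level weight: family non-response weight times the inverse probability
# that this child was selected as the focal child.
child_sample$child_weight <- child_sample$nr_weight * (1 / child_sample$selection_prob)
```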

The research questions for the Child Assessment are not part of the confirmatory research questions for this study. A key reason is that the intervention is not specifically designed to affect child outcomes. Thus, no adjustments for multiple comparisons are planned, and results will be indicated as exploratory in the reporting.

Obesity and Type II Diabetes Risk Assessment

The primary objective is to investigate whether adults who receive comprehensive mobility services (“treatment group”) exhibit changes in obesity and type II diabetes risk after two years compared to a control group that receives no special mobility services, and whether children experience changes in obesity. The corresponding endpoints for this objective are change in BMI and change in HbA1c from baseline to the follow-up visit for adults and change in BMI z-score for children. To minimize selection bias, the evaluation will primarily use an intent-to-treat (ITT) design, where individuals are analyzed according to the study group to which they were randomized, regardless of whether they moved to a lower-poverty neighborhood. For the analytic sample (those with collected follow-up data), the evaluation will develop a set of analytic weights to ensure that the weighted assignment groups are similar to each other on pre-randomization characteristics (which are collected during enrollment into the main Demonstration study). These weights will address any differential enrollment into the Obesity and Type II Diabetes Risk Assessment sample across assignment groups and any differential non-response to follow-up Obesity and Type II Diabetes Risk Assessment data collection.

The evaluation will use a linear regression model to estimate the differences in the change from baseline to 2 years between study groups at the follow-up visit, while including the baseline value (BMI, HbA1c, or BMI z-score), baseline sociodemographic characteristics, and study site as covariates in the model. The adjusted mean difference in change in BMI (BMI z-score) and change in HbA1c between study groups will be estimated with a 95 percent confidence interval. The evaluation will allocate a type I error rate of 5 percent each to the analyses for adults and children. For adults, a 2-sided p-value < 0.05/2 = 0.025 will be considered statistically significant for each analysis (BMI and HbA1c).12 In secondary analyses, the evaluation will explore differences in waist circumference between groups and will examine clinical cut-points for BMI and HbA1c using multinomial logistic regression models.
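A minimal R sketch of this adjusted change-score model for the adult BMI endpoint follows; the data frame and variable names are placeholders, and the covariate list is illustrative.

```r
# Sketch of the adult BMI analysis (hypothetical variable names): change from
# baseline regressed on study group, baseline BMI, sociodemographics, and site.
fit_bmi <- lm(I(bmi_followup - bmi_baseline) ~ cmrs + bmi_baseline + age + sex + site,
              data = adult_sample, weights = analysis_weight)

confint(fit_bmi, "cmrs", level = 0.95)  # adjusted mean difference with 95 percent CI
# With the 5 percent type I error split across BMI and HbA1c, the group
# coefficient is judged significant only if its two-sided p-value is below 0.025.
summary(fit_bmi)
```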

The evaluation will then examine whether the associations between study group and outcomes vary across pre-specified subgroups by including interactions in the models described above. Subgroups include adult age, baseline BMI category, and study site. Most adult participants are anticipated to be female and of Black race; gender and race will therefore not be included as subgroups. For children, subgroups include age (<13 years old, 13+) and gender given prior MTO findings on economic outcomes and mental health by these factors, respectively. P-values for the interaction analyses will be adjusted for multiplicity using the Benjamini-Hochberg approach with a false discovery rate corresponding to the number of comparisons performed for each adult and child.
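For the multiplicity adjustment of the interaction tests, a short sketch using R's built-in Benjamini-Hochberg correction is shown below; the p-values are placeholders.

```r
# Illustrative only: adjust the subgroup-interaction p-values using the
# Benjamini-Hochberg false discovery rate procedure.
interaction_p <- c(age_group    = 0.021,  # placeholder p-values
                   baseline_bmi = 0.180,
                   site         = 0.049)

p.adjust(interaction_p, method = "BH")
```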

Consistent with prior MTO analyses, the evaluation will estimate secondary local-average treatment effect (LATE) models in addition to the ITT models described above, since not everyone who receives services will move to lower-poverty neighborhoods. First, the evaluation will estimate the likelihood of moving to a lower-poverty neighborhood conditional on study group assignment, baseline sociodemographic characteristics, and site. Next, the evaluation will use this estimate to examine change in BMI and HbA1c, conditional on voucher use, controlling for baseline factors as in the analyses proposed above. The evaluation will also seek to explore the magnitude of the change in BMI and HbA1c associated with changes in neighborhood poverty exposure, while recognizing that neighborhood poverty is associated with other neighborhood attributes. A two-stage model will be estimated, where the first stage uses treatment group assignment to predict neighborhood poverty exposure in the first two years after randomization, and the second stage uses predicted poverty exposure to estimate changes in BMI and HbA1c.
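To illustrate the two-stage structure, the sketch below uses random assignment as the instrument for neighborhood poverty exposure via two-stage least squares with the AER package's ivreg(); the variable names and covariates are assumptions, not the evaluation's final specification.

```r
# Illustrative two-stage (LATE-style) sketch: stage 1 predicts poverty exposure
# from treatment assignment; stage 2 relates predicted exposure to change in BMI.
library(AER)

late_fit <- ivreg(bmi_change ~ poverty_exposure + bmi_baseline + age + sex + site |
                    cmrs + bmi_baseline + age + sex + site,
                  data = adult_sample, weights = analysis_weight)

summary(late_fit, diagnostics = TRUE)  # diagnostics include weak-instrument tests
```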

The evaluation is also designed to explore how a range of behavioral, psychosocial, contextual, and structural factors change over 2 years in both the CMRS and control groups and whether this leads to different outcomes for participants offered CMRS services. The evaluation will next explore the potential for these factors to mediate the relationship between treatment group assignment and the primary outcomes of changes in BMI and HbA1c. Importantly, changes in the factors may mediate the relationship even if no main effect is observed. The evaluation will apply a causal mediation approach, estimating the independent effects of study group assignment and a behavioral/psychosocial factor on change in BMI, accounting for potential confounders. The total effect of treatment group assignment on change in BMI can then be calculated. The proportion of the total effect of treatment group assignment on change in BMI that is composed of the indirect effect, i.e., the effect that is mediated by the factor, will then be estimated. The evaluation will use a linear regression model to associate the change in each behavioral or psychosocial factor with each outcome (change in BMI, HbA1c) while including the study group as a covariate and employing the R package ‘mediation’ to estimate the average causal mediation effect.13
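A minimal sketch of this mediation step, using the 'mediation' package named above, is shown below; the choice of physical activity as the mediator and the model specifications are illustrative placeholders.

```r
# Illustrative causal mediation sketch (hypothetical variables): estimate the
# average causal mediation (indirect) effect of group assignment on BMI change
# operating through change in a behavioral factor.
library(mediation)

med_model <- lm(activity_change ~ cmrs + age + sex + site, data = adult_sample)
out_model <- lm(bmi_change ~ cmrs + activity_change + age + sex + site, data = adult_sample)

med_fit <- mediate(med_model, out_model,
                   treat = "cmrs", mediator = "activity_change", sims = 1000)
summary(med_fit)  # reports the average causal mediation effect and the proportion mediated
```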

Each of the three assessments described above leverages the baseline study's existing randomized experimental design framework to assess the impact of the overall CCD intervention on child health, while adjusting for individual characteristics that may affect outcomes, including unobservable characteristics. Baseline data collection methods are cognizant of race and other protected classes. The evaluation contractor will conduct balance testing to ensure that characteristics of participants in the treatment and control groups at baseline are statistically equivalent and will apply weights if they are unbalanced so that the analyses yield robust impact estimates. Notably, the baseline study design does not allow for isolating the impact of the intervention on child health regardless of historical and geographical differences in health by race because the impact of moving to an opportunity area will be measured for each study participant at the individual level and compared against the baseline health measures.

Degree of Accuracy needed for the purpose described in the justification

Home Assessment

The evaluation contractor has estimated the minimum detectable effect (MDE) for a threshold measure of one of the Home Assessment study's key allergen outcomes.14 MDEs are the smallest true effects of an intervention that researchers can be confident of detecting as statistically significant when analyzing samples of a given size. The evaluation contractor estimates that if it conducts 456 follow-up assessments, as planned, it could detect a difference of 7.1 percentage points in the share of families living in dwellings with a cockroach Bla g1 allergen measurement greater than 8 units/g, between families in the treatment versus control groups.15 This calculation assumes 80 percent power for a two-tailed test significant at the 0.10 level, an outcome base rate of 25 percent, and an R-squared of 0.50 (corresponding approximately to a 0.7 correlation between outcome measures at baseline and 12-month follow-up). In a previous study examining the impact of various home remediation interventions on low-income rental housing quality, Klitzman et al. (2005) found levels of Bla g1 greater than 8 units/g in 25 percent of dwellings pre-intervention and 6 percent of dwellings post-intervention.16 The base rate of binary outcomes (i.e., the proportion in the control group) will vary across other outcomes. In Exhibit B-2 below, the evaluation contractor provides the MDE when the base rate is relatively low or high (10 percent or 90 percent), when the base rate takes a value of 25 percent, and when the base rate is 50 percent. The maximum MDE for a binary outcome occurs when the base rate is 50 percent.

Exhibit B-2: Expected Sample Sizes and Minimum Detectible Effects (MDEs) for Binary (Threshold) Home Assessment Outcomes

Comparison | Sample | CMRS Sample Size | Control Sample Size | MDE if control group mean is 10 percent (or 90 percent) | MDE if control group mean is 25 percent (or 75 percent) | MDE if control group mean is 50 percent
CMRS vs. control | All families | 228 | 228 | 4.9pp | 7.1pp | 8.2pp

Notes: Minimum detectible effects (MDEs) are based on calculations assuming two-sided tests are used at the 10 percent significance level, the desired power is 80 percent, and the regression R2 is 0.50. This R2 value corresponds approximately to a 0.7 correlation between outcome measures at baseline and 12-month follow-up. The evaluation contractor believes this high correlation is justified because in cases where families do not move, indoor allergen and pollutant measures will be collected in the same dwelling, 12 months apart.
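The MDE figures in Exhibit B-2 can be reproduced, under the assumptions stated in the notes, with a short calculation such as the R sketch below.

```r
# Sketch reproducing the binary-outcome MDE calculation in Exhibit B-2:
# two-sided test at the 0.10 level, 80 percent power, regression R-squared of 0.50.
mde_binary <- function(p, n_t, n_c, alpha = 0.10, power = 0.80, r2 = 0.50) {
  mult <- qnorm(1 - alpha / 2) + qnorm(power)
  mult * sqrt(p * (1 - p) * (1 - r2) * (1 / n_t + 1 / n_c))
}

mde_binary(p = 0.25, n_t = 228, n_c = 228)  # about 0.071, i.e., the 7.1 percentage points shown
```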

Child Assessment

For the Child Assessment, the evaluation contractor has estimated the MDEs for the impact analysis.17 Consistent with analyses in the main Demonstration study, the evaluation contractor proposes using .10 as the level of statistical significance for hypothesis testing. For the full Child Assessment sample, the evaluation contractor expects to be able to detect significant differences in children's outcomes of 0.155 standard deviations between the CMRS (treatment) and control groups, assuming the 80 percent response rate specified above and an R-squared of 0.35 from using a repeated measure (the same measure at baseline and follow-up) and other baseline characteristics as covariates. For binary outcomes, such as child health conditions and access to health services, the evaluation contractor expects to be able to detect a difference of 4.7 to 7.8 percentage points between the CMRS and control groups, depending on the base rate of the outcome in the control group (e.g., 10 percent versus 50 percent). MDEs will be somewhat higher for outcomes asked only about certain age groups. The smallest age subset at follow-up is children between the ages of 12 and 17 (e.g., binary outcomes for police involvement or arrests and tobacco use are asked only for this age range).

As noted above, the research questions for the Child Assessment are not part of the confirmatory research questions for the study, so there are no confirmatory outcomes as a focus of the power analyses. However, the full-sample MDEs shown in Exhibit B-3 below would apply to key outcomes in each domain, including scores for executive functioning, child behavioral strengths and difficulties, school and neighborhood safety, and child general health status.

Though there is not directly comparable research on short-term effects of mobility interventions on child outcomes, the study would be sufficiently powered to detect changes in behavioral problems and strengths found in a study of the effects of housing vouchers on children who have experienced homelessness (effect sizes = 0.23 SD, 0.16 SD, respectively).18 Base rates for mental health challenges in 4-to-7 year MTO follow-ups were around 7 percent for anxiety and depression among children ages 12 to 19 in the control group.19

Exhibit B-3: Expected Sample Sizes and Minimum Detectible Effects (MDEs) for Continuous and Binary Child Assessment Outcomes, by age ranges

Comparison | Sample | CMRS Sample Size | Control Sample Size | MDE in SDs (continuous) | MDE if control group mean is 10 percent (or 90 percent) | MDE if control group mean is 50 percent
CMRS vs. control | All children | 335 | 335 | 0.155 SDs | 4.7pp | 7.8pp
CMRS vs. control | Children age 12+ | 144 | 144 | 0.237 SDs | 7.1pp | 11.8pp

Notes: Minimum detectible effects (MDEs) are based on calculations assuming two-sided tests are used at the 10 percent significance level, the desired power is 80 percent, and the regression R2 is 0.35. With outcomes being measured on a range of scales and units, MDEs reflect standardization of the outcomes such that the standard deviation is 1.0, with the effect size being in standard deviation units. The base rate of binary outcomes (i.e., the proportion in the control group) will vary across outcomes. In this table, we have provided the MDE when the base rate is relatively low or high (10 percent or 90 percent) and when the base rate is 50 percent. The maximum MDE for a binary outcome is when the base rate is 50 percent. Standard deviation for a binary outcome is calculated as the square root of the variance, where the variance is the base rate proportion of the outcome times one minus the base rate. Sample sizes reflect assumed response rates and an overall child age range of 4 to 17 years at follow-up.

Obesity and Type II Diabetes Risk Assessment

The evaluation contractor anticipates enrolling 900 adults, 80 percent of whom it expects to retain in follow-up, yielding a sample size of 720 for adult analyses. Prior studies of similar cohorts estimated the mean (standard deviation (SD)) BMI in adults to be ~28 (SD=6).20 Using a simulation approach with 1,000 replications and a correlation of 0.8 for repeated measures of BMI within participants yields 93.3 percent power to observe a 1.0-point difference in change in BMI between study groups, which is consistent with impacts that have been achieved in neighborhood-based interventions.21 For example, the Pittsburgh Research on Neighborhood Change and Health (PHRESH) study, which investigated the impact of various neighborhood investments on health in two predominantly African American neighborhoods, saw a 0.34 unit reduction in BMI after 2 years in an intervention versus control community; results were not statistically significant, but the study sample size was substantially smaller than in the proposed work (N=532).22 Shape Up Somerville—a child-targeted, community-based obesity prevention intervention—reduced parent BMI by 0.411 point over 2 years relative to control parents (N=478).23 The NIH-funded Watts Neighborhood Health Study estimated 80 percent power to detect a minimum difference of 2.19 BMI units among adults over 4 years (N=596).24
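A simplified version of such a simulation, using the parameters reported above (mean BMI of about 28 with SD of 6, a within-person correlation of 0.8, a 1.0-point difference in change, 360 adults per arm, and the two-sided 0.025 threshold), might look like the sketch below; it is an illustration, not the contractor's actual simulation code.

```r
# Illustrative power simulation for the adult BMI endpoint (assumed parameters).
set.seed(1)
n_per_arm <- 360; sims <- 1000; sd_bmi <- 6; rho <- 0.8; effect <- -1.0

rejections <- replicate(sims, {
  group    <- rep(0:1, each = n_per_arm)
  baseline <- rnorm(2 * n_per_arm, mean = 28, sd = sd_bmi)
  followup <- 28 + rho * (baseline - 28) +
              rnorm(2 * n_per_arm, sd = sd_bmi * sqrt(1 - rho^2)) + effect * group
  fit <- lm(I(followup - baseline) ~ group + baseline)
  summary(fit)$coefficients["group", "Pr(>|t|)"] < 0.025
})
mean(rejections)  # share of replications detecting the 1.0-point difference (power)
```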

For HbA1c, approximately half of adults in the sample are expected to be pre-diabetic or diabetic, with a mean HbA1c value of 7.7 percent (SD = 1.8 percent).25,26 HbA1c values are expected to be ~5.6-5.9 percent (SD = 0.40-0.89 percent) in the remaining half without diabetes. The simulation approach allows for 1) a 0.5 percent improvement in HbA1c among pre-diabetics and diabetics in the intervention arm relative to the control arm, which is in line with prior studies;27,28 2) no change on average in HbA1c among non-diabetics in the intervention arm; and 3) a correlation of 0.7 for repeated measures within study subjects. Assuming an equal distribution of participants in the intervention and control groups, the evaluation anticipates having 89.5 percent power.

To examine effect heterogeneities, the simulation approach has been used to calculate power. The evaluation is interested in several key subgroups (age, BMI at baseline, study site). Assuming subgroups are equally balanced in size, a difference-in-difference (interaction) of one-half of a standard deviation (for example, a 0.5 effect size in one subgroup compared to a 1.0 effect size in the other) can be detected between subgroups with 83.8 percent power and a 2-sided type I error rate of 0.05/3 = 0.0167, corresponding to the 3 subgroups for adults.

There are a number of behavioral, psychosocial, contextual, and structural factors included, all of which are of interest, and the evaluation has not pre-selected one as a primary outcome. With a sample size of 720 and arms balanced as 50/50 percent, 60/40 percent, or 70/30 percent, the evaluation has 87 percent, 86 percent, and 79 percent power, respectively, to detect a 0.30 standardized difference in continuously measured outcomes between study groups. These estimates conservatively use a 2-sided type I error rate of 0.05/14 = 0.004, though in practice the evaluation plans to adjust for multiplicity using the Benjamini-Hochberg approach.29 In preparation for the evaluation, simulations were conducted for mediation analyses using the 'mediate' function in R. For each simulation, a 0.30 standardized difference in the factors between study groups was assumed, along with differences between study groups in BMI and HbA1c based on the above power calculations (e.g., a 1.0-point difference in change in BMI). For BMI in adults, the average indirect effect (proportion of the total effect mediated by the factor) across all 1,000 simulations was 9.5 percent (95 percent coverage interval [CI]: 2.9 percent, 17.9 percent). For HbA1c, estimates were 3.7 percent (95 percent CI: 1.3 percent, 7.2 percent). The proportion of all simulations that yielded a significant indirect effect (p < 0.05) was 89.1 percent for BMI and 86.0 percent for HbA1c.

Unusual Problems Requiring Specialized Sampling Procedures

There are no unusual problems associated with the samples for this information collection request.

Any use of periodic (less frequent than annual) data collection cycles to reduce burden.

Home Assessment

There are two planned assessments: one at baseline and one approximately one year later. The primary data collection therefore occurs no more often than annually, which limits respondent burden, and the data cannot be collected less frequently.

Child Assessment and Obesity and Type II Diabetes Risk Assessment

The evaluation contractor will use two types of administrative data to provide additional contact information for tracking and locating the study sample; these data impose no respondent burden. The evaluation contractor will obtain historical data on the receipt of housing assistance from HUD’s Inventory Management System/PIH Information Center (IMS/PIC) and Tenant Rental Assistance Certification System (TRACS) data systems (or successor systems). IMS/PIC and TRACS will provide address information for families whose receipt of housing assistance is reported in these systems.

There are two planned assessments: one at baseline and one two years later. The primary data collection occurs less often than annually, which reduces respondent burden, and the data cannot be collected less frequently.


  3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.


The evaluation contractor plans extensive efforts to maximize the response rates for the Home Assessment, the Child Assessment, and the Obesity and Type II Diabetes Risk Assessment data collections. These efforts, described below, include maintaining contact with study participants, providing incentives, and following procedures to manage both item non-response and survey non-response.

Maintaining Contact with Participants

The evaluation contractor will establish a website to foster engagement and to allow participants to update their contact information. For the Child Assessment and the Obesity and Type II Diabetes Risk Assessment, the team will contact participants by telephone at 6, 12, and 18 months and will conduct electronic tracking (email or text) at 9 and 15 months with those who are unresponsive to the telephone contacts. The email and text contacts will include the URL of the website along with the participant’s unique PIN for logging in to update or confirm contact information. At each contact, participants will be asked to confirm or update their own contact information as well as contact information for up to two people who will always know how to reach them. The evaluation contractor will reach out to these people only if it is unable to reach the study participant.

There is no planned tracking survey for the Home Assessment. Because the follow-up occurs only one year after the initial survey and the families are in the Housing Choice Voucher program at baseline, tracking is not needed.

Incentives

As described in Supporting Statement A of this ICR, the evaluation contractor will provide incentives to families for participating in the Home Assessment, the Child Assessment, and the Obesity and Type II Diabetes Risk Assessment data collections. These incentives will help to offset the costs of participating in the study for participants and increase the likelihood that they will respond to each data collection effort. Maximizing response rates will help to minimize the potential for non-response bias.

Managing Item Non-Response

Interviewers, equipped with tablets, will administer the Home Assessment Survey, the Adult Survey, and the Survey about Child Outcomes to the adult (parent or guardian) at baseline and again at follow-up using computer assisted personal interviewing (CAPI) technology. The CAPI technology has several features for addressing missing data. First, to ensure that respondents do not skip questions inadvertently, the surveys will require a response to each question—even if that response is ‘prefer not to say’ or ‘do not recall’. Second, the CAPI technology ensures that the survey logic and skip patterns are administered as intended by displaying only the relevant questions based on the parent/guardian’s prior responses. Interviewers will also use CAPI technology to administer the child assessment and survey, the accelerometer protocol, the blood pressure, height, and weight measurements, and the Home Assessment.

For the adult height, weight, and waist circumference measurements, trained interviewers will conduct each assessment using a standard protocol, capturing each measure three times and averaging the results. All measurements will be recorded using CAPI technology. Blood pressure readings will also be taken three times using standard procedures and recorded using CAPI technology. Blood spot samples will be collected from each adult using a small finger prick, with four drops of blood placed on a small test kit. If an adult does not wish to participate in any of these measurements, interviewers will mark “do not wish to participate” in that portion of the adult assessment. Accelerometers will be placed on the wrists of a sub-sample of adults for seven days. The evaluation contractor will make it easy for participants to return the accelerometers by providing each family with a pre-addressed, pre-paid envelope, and will record when participants do not return them. Participants who have any difficulties may contact the study team via the study email address or toll-free number to request assistance, such as a replacement accelerometer.


  4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


Most of the questions in the Adult Survey for the Obesity and Type II Diabetes Risk Assessment are identical or similar to questions used in previous studies of similar populations led by the evaluation team or in other national surveys (e.g., the CDC National Health and Nutrition Examination Survey, the CDC National Health Interview Survey, and the Pittsburgh Hill/Homewood Research on Neighborhood Change and Health study). The same is true for the Survey of Child Outcomes in the Child Assessment (e.g., the CDC Youth Risk Behavior Survey, the CDC National Health and Nutrition Examination Survey, the NIDA-funded Monitoring the Future survey, the CDC National Health Interview Survey, and the MTO Evaluation). As such, they have been thoroughly tested on large samples. (Specific sources for each question are provided in the copies of the survey instruments in Attachments H, I, and M.)

The evaluation contractor relies on review of each instrument by skilled data collection staff, other evaluation contractor staff, HUD personnel, and JHU personnel to ensure that all instruments are clear and flow well. This review also ensures that the instruments are concise yet collect all the data necessary for the analysis that is not otherwise available from other sources. Before the enrollment phase begins, evaluation contractor staff will conduct mock interviews with one another using each data collection instrument. These mock interviews will provide information on the average length of each survey and identify any final modifications needed to improve the clarity and flow of the instruments.

To test the surveys further, the contractor will pretest the adult and child surveys for the Child Assessment and the Obesity and Type II Diabetes Risk Assessment with three families who participate in the Housing Choice Voucher programs of PHAs in the greater Boston area but are not participating in the Community Choice Demonstration.

Abt Associates will follow a similar process for pretesting the Home Assessment survey, which will also be pretested with three families who participate in the Housing Choice Voucher programs of PHAs in the greater Boston area but are not participating in the Community Choice Demonstration.

Many of the questions in the semi-structured interview guide for the Obesity and Type II Diabetes Risk Assessment are based on questions used in prior studies of similar populations. The evaluation contractor will not pretest this Assessment instrument.


  5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


HUD’s Office of Policy Development and Research will work with the contractor, Abt Associates, to conduct the proposed Home Assessment and Child Assessment data collection efforts and to analyze the data. Marina L. Myhre, PhD, and Teresa Souza, PhD, Social Science Analysts in the Program Evaluation Division of HUD’s Office of Policy Development and Research, serve as Contracting Officer’s Technical Representatives (COTRs). Their supervisor is Ms. Carol Star. Dr. Myhre can be contacted at (202) 402-5705, Dr. Souza at (202) 402-5540, and Ms. Star at (202) 402-6139. The study’s Principal Investigator is Dr. Daniel Gubits of Abt Associates, who can be reached at (301) 634-1854. Jeffrey Lubell serves as the study’s Project Director and can be contacted at (301) 634-1752. Tresa Kappil serves as the study’s Project Manager and can be contacted at (301) 347-5923.

The Obesity and Type II Diabetes Risk Assessment is funded by the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK)30 and led by Johns Hopkins University (JHU) as part of a study called Mobility Opportunity Vouchers to Eliminate Disparities (MOVED). HUD and Abt Associates are partnering with JHU on this research as well. Dr. Craig Pollack is the Principal Investigator for the Obesity and Type II Diabetes Risk Assessment and can be contacted at (410) 955-4201. Supporting the JHU team from Abt Associates are Dr. Daniel Gubits, Jeffrey Lubell, and Tresa Kappil; their phone numbers are provided in the preceding paragraph.

The individuals shown in Exhibit B-3 also assisted HUD and JHU in the statistical design of the HCV Mobility Demonstration evaluation.

Exhibit B-3: Individuals Consulted on the Study Design

Name – Role in Study

Jeffrey Lubell, Abt Associates – Project Director for the main Demonstration
Jill Khadduri, Abt Associates – Project Quality Advisor
Mathew Stange, Abt Associates – Data Collection Lead for the Child Assessment and Obesity and Type II Diabetes Risk Assessment components
Debi McInnis, Abt Associates – Senior Advisor
Scott Brown, Abt Associates – Child Assessment Director of Analysis
Jonathan Dorn, Abt Associates – Home Assessment Task Lead
Laura Paulen, Abt Associates – Project Manager for the main Demonstration
Dr. Craig Pollack, Johns Hopkins University – Principal Investigator, Study Lead for the Obesity and Type II Diabetes Risk Assessment (Mobility Opportunity Vouchers to Eliminate Disparities (MOVED) study)
Alyssa Moran, Johns Hopkins University – Co-Investigator, MOVED, nutrition and diet expert
Erin Hager, Johns Hopkins University – Co-Investigator, MOVED, physical activity expert
Eliana Perrin, Johns Hopkins University – Co-Investigator, pediatrician, child health expert
Stefanie DeLuca, Johns Hopkins University – Co-Investigator, MOVED, sociologist and housing expert
Sabriya Linton, Johns Hopkins University – Co-Investigator, MOVED, health equity, substance use, and mental health expert
Matthew Eisenberg, Johns Hopkins University – Co-Investigator/Advisor, MOVED, mental health and economics expert
Amanda Blackford, Johns Hopkins University – Co-Investigator, MOVED, biostatistician
Christopher (Ross) Hatton, Johns Hopkins University – PhD candidate, Graduate Research Assistant








References

Angrist, J. D. & Pischke, J. S. (2008) Mostly Harmless Econometrics: An Empiricist's Companion. Princeton, NJ: Princeton University Press.

Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society Series B (Methodological), 57(1), 289-300.

Bryce, R., Wolfson, J. A., Cohen, A., Milgrom, N., Garcia, D., Steele, A., Yaphe, S., Pike, D., Valbuena, F., & Miller-Matero, L. R. (2021). A pilot randomized controlled trial of a fruit and vegetable prescription program at a federally qualified health center in low-income uncontrolled diabetics. Preventive Medicine Reports, 23, 101410.

Coffield, E., Nihiser, A. J., Sherry, B., & Economos, C. D. (2015). Shape up Somerville: Change in parent body mass indexes during a child-targeted, community-based environmental change intervention. American Journal of Public Health, 105(2), e83–e89.

Datar, A., Shier, V., Braboy, A., Jimenez-Ortiz, M., Hernandez, A., King, S. E., & Liu, Y. (2022). Assessing impacts of redeveloping public housing communities on obesity in low-income minority residents: Rationale, study design, and baseline data from the Watts Neighborhood Health Study. Contemporary Clinical Trials Communications, 25, 100879.

Fenelon, A., Lipska, K. J., Denary, W., Blankenship, K. M., Schlesinger, P., Esserman, D., & Keene, D. E. (2022). Association between rental assistance programs and hemoglobin A1c levels among US Adults. JAMA Network Open, 5(7), e2222385.

Judkins, D. R., & Porter, K. E., (2015). Robustness of ordinary least squares in randomized clinical trials. Statistics in Medicine, 35(11), 1763–1773.

Klitzman, S., Caravanos, J., Belanoff, C., & Rothenberg, L. (2005). A multihazard, multistrategy approach to home remediation: Results of a pilot study. Environmental Research, 99(3), 294–306.

Little, R. J. A. (1986). Survey nonresponse adjustments for estimates of means. International Statistical Review, 54(2), 139-157.

Orr, L. L. (1999). Social Experiments: Evaluating Public Programs with Experimental Methods. Thousand Oaks, CA: Sage Publications.

Rummo, P. E., Feldman, J. M., Lopez, P., Lee D., Thorpe, L. E., & Elbel, B. (2020). Impact of changes in the food, built, and socioeconomic environment on BMI in US counties, BRFSS 2003-2012. Obesity (Silver Spring, Md.), 28(1), 31–39.

Seligman, H. K., Lyles, C., Marshall, M. B., Prendergast, K., Smith, M. C., Headings, A., Bradshaw, G., Rosenmoss, S., & Waxman, E. (2015). A pilot food bank intervention featuring diabetes-appropriate food improved glycemic control among clients in three states. Health Affairs (Project Hope), 34(11), 1956-1963.

Tabaei, B. P., Rundle, A. G., Wu, W. Y., Horowitz, C. R., Mayer, V., Sheehan, D. M., & Chamany, S. (2018). Associations of residential socioeconomic, food, and built environments with glycemic control in persons with diabetes in New York City from 2007–2013. American Journal of Epidemiology, 187(4), 736-745.

Tingley D., Yamamoto T., Hirose K., Keele L., & Imai K. (2014). mediation: R package for causal mediation analysis. Journal of Statistical Software, 59(5), 1-38.

Troxel, W. M., Bogart, A., Holliday, S. B., Dubowitz, T., Ghosh-Dastidar, B., Baird, M. D., & Gary-Webb, T. L. (2021). Mixed effects of neighborhood revitalization on residents’ cardiometabolic health. American Journal of Preventive Medicine, 61(5), 683-691.



1 Subject to budget availability, the time window for recruitment could be extended by a few months to facilitate progress toward enrolling the target of 570 consented families.

2 Pregnant women are excluded because the primary outcome is change in body mass index (BMI) and type II diabetes risk from baseline to follow-up. Pregnancy at baseline would impact BMI and potentially blood sugar control in ways that would make it hard to assess whether change over time was due to study arm (randomization) or due to giving birth.

3 Subject to budget availability, the time window for recruitment could be extended to facilitate progress toward enrolling the target of 837 families in the Home Assessment.

4 Subject to budget availability, the time window for recruitment could be extended to facilitate progress toward enrolling the target of 900 families in the Obesity and Type II Diabetes Risk Assessment.

5 Although the recommendation is to start with the child activities, if the Adult wants to do their part first, interviewers will defer to their preference.

6 If the adult respondent is not the parent or guardian, then we will stop the adult survey prior to the child outcome questions. This will enable the interviewer to complete that section with the parent or guardian.

7 Judkins, D. R., & Porter, K. E., (2015). Robustness of ordinary least squares in randomized clinical trials. Statistics in Medicine, 35(11), 1763–1773.


8 Orr, L. L. (1999). Social experiments: Evaluating Public Programs with Experimental Methods. Thousand Oaks, CA: Sage Publications.


9 Joshua D. Angrist and Jörn-Steffen Pischke, Mostly Harmless Econometrics: An Empiricist's Companion. Princeton, NJ: Princeton University Press. 2008. p. 47.

10 Judkins, D. R., & Porter, K. E. (2015). Robustness of ordinary least squares in randomized clinical trials. Statistics in Medicine, 35(11), 1763–1773.

11 Little, R. J. A. (August 1986), Survey nonresponse adjustments for estimates of means, International Statistical Review, 54 (2), 139-157.

12 These thresholds are aligned with biomedical standards.

13 Tingley D, Yamamoto T, Hirose K, Keele L, Imai K. mediation: R Package for Causal Mediation Analysis. J Stat Softw [Internet]. 2014 [cited 2022 Mar 5];59(5). Available from: http://www.jstatsoft.org/v59/i05/

14 The power analysis is presented here in the form of MDEs for an assumed desired 80% power for detecting effects, consistent with social science conventions and prior HUD evaluations.

15 The standard deviation for a binary outcome is defined as the square root of the base rate of the outcome times one minus the base rate, that is, √(p × (1 − p)), where p is the base rate proportion.

16 Klitzman, S., Caravanos, J., Belanoff, C., & Rothenberg, L. (2005). A multihazard, multistrategy approach to home remediation: Results of a pilot study. Environmental Research, 99(3), 294–306. https://doi.org/10.1016/j.envres.2005.03.003

17 We present our power analyses in the form of MDEs for an assumed desired 80% power to detect effects, consistent with social science conventions and prior HUD evaluations.

18 Gubits, D., Shinn, M., Wood, M., Bell, S., Dastrup, S., Solari, C. D., Brown, S. R., McInnis, D., McCall, T., & Kattel, U. (2016). Family Options Study: 3-Year Impacts of Housing and Services Interventions for Homeless Families. U.S. Department of Housing and Urban Development.

19 Orr, L., Feins, J. D., Jacob, R., Beecroft, E., Sanbonmatsu, L., Katz, L. F., & Kling, J. R. (2003). Moving to Opportunity for Fair Housing Demonstration Program: Interim Impacts Evaluation. U.S. Department of Housing and Urban Development.

20 Rummo PE, Feldman JM, Lopez P, Lee D, Thorpe LE, Elbel B. Impact of Changes in the Food, Built, and Socioeconomic Environment on BMI in US Counties, BRFSS 2003-2012. Obes Silver Spring Md. 2020 Jan;28(1):31–39. PMID: 31858733

21 Power analyses are presented here in the form of level of power for detecting an assumed fixed difference in outcomes between the treatment and control groups, consistent with biomedical conventions.

22 Troxel WM, Bogart A, Holliday SB, Dubowitz T, Ghosh-Dastidar B, Baird MD, Gary-Webb TL. Mixed Effects of Neighborhood Revitalization on Residents’ Cardiometabolic Health. Am J Prev Med. 2021 Nov;61(5):683–691. PMCID: PMC8541899

23 Coffield E, Nihiser AJ, Sherry B, Economos CD. Shape Up Somerville: Change in Parent Body Mass Indexes During a Child-Targeted, Community-Based Environmental Change Intervention. Am J Public Health. 2015 Feb;105(2):e83–e89. PMCID: PMC4318303

24 Datar A, Shier V, Braboy A, Jimenez-Ortiz M, Hernandez A, King SE, Liu Y. Assessing impacts of redeveloping public housing communities on obesity in low-income minority residents: Rationale, study design, and baseline data from the Watts Neighborhood Health Study. Contemp Clin Trials Commun. 2022 Feb;25:100879. PMCID: PMC8685992

25 Tabaei BP, Rundle AG, Wu WY, Horowitz CR, Mayer V, Sheehan DM, Chamany S. Associations of Residential Socioeconomic, Food, and Built Environments With Glycemic Control in Persons With Diabetes in New York City From 2007–2013. Am J Epidemiol. 2018 Apr;187(4):736–745. PMCID: PMC6676946

26 Fenelon A, Lipska KJ, Denary W, Blankenship KM, Schlesinger P, Esserman D, Keene DE. Association Between Rental Assistance Programs and Hemoglobin A1c Levels Among US Adults. JAMA Netw Open. 2022 Jul 1;5(7):e2222385. PMCID: PMC9301513

27 Bryce R, Wolfson JA, Cohen A, Milgrom N, Garcia D, Steele A, Yaphe S, Pike D, Valbuena F, Miller-Matero LR. A pilot randomized controlled trial of a fruit and vegetable prescription program at a federally qualified health center in low income uncontrolled diabetics. Prev Med Rep. 2021 Sep;23:101410. PMCID: PMC8193138

28 Seligman HK, Lyles C, Marshall MB, Prendergast K, Smith MC, Headings A, Bradshaw G, Rosenmoss S, Waxman E. A Pilot Food Bank Intervention Featuring Diabetes-Appropriate Food Improved Glycemic Control Among Clients In Three States. Health Aff Proj Hope. 2015 Nov;34(11):1956–1963. PMID: 26526255

29 Benjamini Y, Hochberg Y. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J R Stat Soc Ser B Methodol. 1995;57(1):289–300


30 The NIDDK grant number is R01DK136610.
