
Office of Management and Budget

Supporting Statement Part B

for Transformed Healthy Start Program and Evaluation:

Collections of Information Employing Statistical Methods



B. Collection of information employing statistical methods

1. Respondent universe and sampling methods

The respondent universe and sampling methods are described below by data collection activity and illustrated in Figure A.2 in Section A.

Preconception, Pregnancy, and Parenting (3P’s) information form

For purposes of monitoring, the 3P’s Information Form respondent universe will include all women participating in the Healthy Start program.

For purposes of the evaluation study, the 3P’s Information Form respondent universe will include (1) women four to seven months postpartum receiving services from Healthy Start projects, (2) a sample of women four to seven months postpartum receiving services from 15 selected Healthy Start projects, and (3) a sample of women four to seven months postpartum from 15 comparison sites during a four-month period. The selection of women four to seven months postpartum will allow for assessment of birth outcomes as well as service receipt, knowledge, and behavior before and after pregnancy. Identifying the sample of respondents for the evaluation involves a two-step selection process:

  1. Site selection: This will include (1) the selection of 15 comparison sites matched to the Healthy Start projects and (2) the selection of 15 Healthy Start projects for in-depth study, chosen to be as representative as possible of the universe of Healthy Start projects.

  2. Sampling of women four to seven months postpartum during a four-month study period from the 15 comparison sites, who will be compared to (1) eligible women in all Healthy Start projects and (2) eligible women in the 15 Healthy Start sites selected for in-depth study. The sample of women at comparison sites will be matched to both groups of Healthy Start women on individual-level characteristics only if doing so is deemed necessary to further ensure the comparability of women participating in Healthy Start and those in comparison sites. Requiring the same eligibility criteria and matching at the site level may yield sufficiently comparable groups of participants and nonparticipants without individual-level matching.

Comparison Site Selection. The comparison communities will be selected from unfunded Healthy Start applicant communities, and community partner organizations may be selected from unfunded applicant organizations or from other community agencies or organizations, such as the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). Community partner organizations within comparison communities will be offered compensation to defray costs associated with participation in the study, such as supporting recruitment of eligible women, and we will offer a site-specific report on results from the data collection at the end of the study. The unfunded communities demonstrate a readiness to implement the program and will be Healthy Start eligible according to the criteria laid out in the funding opportunity announcement. In addition, organizational characteristics, aggregate population characteristics from the comparison communities, and the grantee application funding scores will be used as matching variables so that, to the extent possible, the baseline risk profile of the population along key dimensions is representative of Healthy Start communities (Attachment I provides a list of potential matching variables). Data from several sources may be used to conduct this matching, including Healthy Start program applications, census data, Medicaid data as reported publicly online, and information on health and birth outcomes available through the National Center for Health Statistics, Behavioral Risk Factor Surveillance System, Pregnancy Risk Assessment Monitoring System, and other national databases. Additional factors will be considered to ensure that the 15 comparison sites are located in communities that represent the diverse geographies and populations that Healthy Start serves, such as tribal and border communities. A community will be defined as a contiguous geographic area (for example, a single neighborhood, service area, or town/city) as provided in Healthy Start applications.1,2
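
To make the site-level matching concrete, the sketch below illustrates one simple way it could be implemented: nearest-neighbor selection of unfunded applicant communities on standardized aggregate covariates. The covariate count, the number of unfunded applicants, and the use of Euclidean distance are illustrative assumptions only; the actual matching would draw on the variables in Attachment I and the data sources listed above.

```python
# Illustrative sketch only: selecting comparison communities that most closely
# resemble funded Healthy Start projects on aggregate characteristics.
# The covariates and counts below are hypothetical stand-ins for the matching
# variables listed in Attachment I.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
funded = rng.normal(size=(88, 3))    # 88 funded projects x 3 aggregate covariates
unfunded = rng.normal(size=(40, 3))  # unfunded applicant communities

# Standardize with the pooled mean/SD so variables on different scales
# contribute comparably to the distance.
pooled = np.vstack([funded, unfunded])
standardize = lambda x: (x - pooled.mean(axis=0)) / pooled.std(axis=0)

# Distance from each unfunded community to its closest funded project.
nearest = cdist(standardize(unfunded), standardize(funded)).min(axis=1)

# Keep the 15 unfunded communities that best resemble the funded profile.
comparison_sites = np.argsort(nearest)[:15]
print(sorted(comparison_sites.tolist()))
```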

Healthy Start Site Selection. The matching process will initially consider all Healthy Start projects for inclusion in the evaluation. Ideally, the evaluation will incorporate information on participants in all Healthy Start projects who meet study criteria (in other words, who are four to seven months postpartum during the study period), which will be possible as long as there is a set of unfunded sites that forms an acceptable match to the overall Healthy Start project profile. If this is not the case, Healthy Start projects may have to be removed until there is an appropriate set of matched comparison sites.1,2 The monitoring data for all Healthy Start participants will give us the flexibility to subset as needed to develop the Healthy Start matches that will provide the most rigorous information on program effect during each four-month study period. Although this will reduce the generalizability of the findings across all Healthy Start projects, it will ensure that the evaluation is measuring the effect of Healthy Start programs and not other characteristics associated with the selection of communities for Healthy Start funding.

In addition, we will randomly select a subset of 15 Healthy Start projects for in-depth study. As needed, projects will be replaced by others to ensure representation of the diverse geographies and populations served by Healthy Start (such as tribal organizations and organizations located on the United States–Mexico border). Healthy Start projects selected for participation will be contacted by MCHB staff and invited to participate in the in-depth study of the program. Alternate grantees will also be selected in case the originally selected grantees are not able to participate. The comparison of outcomes from these 15 Healthy Start projects will also provide an estimate of program effect. However, because these projects will also participate in the Community Action Network (CAN) Survey, site visits, and focus groups, we will be able to examine estimated effects in the context of systems and implementation of the program.

Identifying individual respondents. Women at comparison sites will be eligible if they receive services from the partner organization in the comparison community and are four to seven months postpartum during the four-month periods of recruitment: February–May 2015, February–May 2017, and February–May 2019. The three points in time represent the beginning, middle, and end of the grant and will enable an assessment of outcomes as the program matures. We will examine the need to further match comparison and Healthy Start women. This individual-level matching would further ensure that the comparisons in the evaluation are between similar women (with the exception that the participants have access to the transformed Healthy Start program) and that the evaluation produces estimates of the effects of Healthy Start on individual-level outcomes. However, because both sets of women are subject to the same eligibility criteria and the communities will already be matched based on aggregate client and community characteristics, the need for individual-level matching is likely low. Because the information needed for evaluation will already be collected from all Healthy Start participants for monitoring purposes, all Healthy Start participants who are four to seven months postpartum and completed the form during the study period will be eligible for inclusion in the evaluation.

Because information cannot be collected from all women receiving services at the comparison sites, sampling is required to collect the individual-level information needed for the outcomes and multilevel studies. During each four-month study period, nonparticipant women four to seven months postpartum will be sampled from the 15 comparison sites. Sampled women in the comparison sites will be compared to participants from all Healthy Start projects and from the 15 selected Healthy Start projects.

We expect an overall response rate of 100 percent for Healthy Start participants because response to the 3P’s Information Form is part of the process for enrollment and annual monitoring. We anticipate a 65 percent response rate for the data collection effort at comparison sites.3 To achieve the needed number of completes, we recommend that each comparison site be able to provide a sample of 70 women; assuming a 65 percent response rate, this yields 45 completed responses per site. Therefore, the recommended sample size is 675 women at selected Healthy Start projects (although we expect many more women to be eligible for inclusion in the evaluation) and 1,050 women at comparison sites, for a total of 1,725 women with 1,350 completes.
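
The arithmetic behind these figures can be restated as follows; this is simply a worked version of the numbers in the text, not additional analysis.

```python
# Restatement of the sample-size arithmetic from the text.
sites = 15                  # comparison sites; 15 Healthy Start projects are also selected
sampled_per_site = 70       # women sampled at each comparison site
response_rate = 0.65        # expected response rate at comparison sites

completes_per_site = int(sampled_per_site * response_rate)     # 45 completes per site
comparison_sample = sampled_per_site * sites                   # 1,050 women sampled
comparison_completes = completes_per_site * sites              # 675 completes
healthy_start_sample = completes_per_site * sites              # 675 women (100% response)

print(healthy_start_sample + comparison_sample)      # 1,725 women sampled in total
print(healthy_start_sample + comparison_completes)   # 1,350 completed forms
```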


To ensure statistical rigor in developing consent rates and analysis weights, we will develop a process to track all eligible women at comparison sites regardless of whether they consent to participate in the evaluation. De-identified data will be collected with a unique identifier for each woman, including her age, due/delivery date, race/ethnicity, and preferred language, along with the consent forms for those who consent to participate.

National Healthy Start program survey

All Healthy Start projects will be asked to complete the National Healthy Start Program Survey (NHSPS) to ensure that consistent information is collected about implementation across the program and to enable analysis of variation in implementation to contribute to the multilevel, network, and implementation studies. Approximately 88 Healthy Start projects will be funded for the grant period of June 2014 to May 2019. Project directors are likely to take the lead on responding to the survey but may delegate sections of the survey to other project staff.

Community Action Network survey

In each of the 15 Healthy Start projects selected for the in-depth evaluation study, the survey will be fielded with CAN board members and committee chairs (approximately 10 to 15 per project, for a total of 225 across the projects). Healthy Start projects will be asked to provide a list of CAN board and committee members and their contact information. If there are more than 15 per site, we will randomly select up to 15 members. Individual consumers, community leaders, and those not associated with an organization will not be included in the respondent universe because the purpose of this data collection is to gauge organizational relationships in the community. Healthy Start participant perspectives will be captured in the focus groups, and community leader perspectives may be captured during the site visits.

Healthy Start site visits

The site visits will be conducted in the 15 Healthy Start communities selected for in-depth study. During the site visits, four to seven key informant interviews will be conducted with Healthy Start administrative staff (one interview per site visit), Healthy Start service staff (one to two interviews per site visit), health care providers (one to two interviews per site visit), and CAN members (one to two interviews per site visit); we expect an average of six interviews per site, for a total of 90 interviews across 15 sites. The number of key informant interviews that can be scheduled within the allotted time will depend on logistics for scheduling the focus groups (which will occur during the same two-day site visit) and the amount of travel time required between interviews.

The project director at each Healthy Start site will be asked to identify service staff members and providers who have regular interactions with Healthy Start participants as well as active CAN members. We expect these Healthy Start service staff members will include outreach workers, case managers, and health educators. Providers may include clinicians, such as physicians, midwives, and nurse practitioners. CAN members will include representatives of local organizations in the community with an interest in improving maternal and child health.

Healthy Start focus groups

The Healthy Start focus groups will be conducted in the 15 Healthy Start projects selected for in-depth study. One focus group will be conducted in each community, with 10 to 12 Healthy Start participants per group. Therefore, the focus groups will include a total of 150 to 180 participants across the 15 projects. Based on our experience conducting focus groups with similar populations of low-income women, we anticipate recruiting twice the number of women needed to obtain the necessary numbers for the focus groups. The recruitment strategy will rely on assistance from the 15 Healthy Start projects in posting information and handing out flyers about the focus groups to their participants. The recruitment materials will invite interested focus group participants to call a toll-free number, and those who are eligible will be given information about the dates and locations of the focus groups. Eligible participants include women with at least one live birth while enrolled in Healthy Start who are active participants at any of the 15 selected Healthy Start projects (that is, receiving services on an ongoing basis). A reminder telephone call and/or email will be sent to participants one week in advance and again the day before the focus group.

2. Procedures for the collection of information

3P’s information form

Healthy Start projects will collect information using the 3P’s Information Form as part of their project monitoring activities, and the 15 partner organizations in comparison sites will enroll eligible women for the evaluation study.

  • Monitoring. Women will be enrolled on a rolling basis over the five-year grant period for Healthy Start projects. Once women consent to participate in the Healthy Start program, they will be administered the 3P’s Information Form at enrollment and then annually thereafter.

  • Evaluation. Information on Healthy Start women will be drawn from monitoring activities, and no additional data collection using the 3P’s Information Form is needed from Healthy Start sites for the evaluation. Eligible comparison women will be enrolled and provide data over a four-month period (or as long as is needed to reach the target number within each comparison site). Data will be abstracted for Healthy Start women four to seven months postpartum during the four-month study periods in the first, third, and fifth grant years. The data collection from eligible women four to seven months postpartum will occur at comparison sites during the same periods.

For both monitoring and evaluation, the form will be administered using a web-based application, which will reduce burden on administering staff and women participants and improve data quality by tailoring the collection of information to each respondent and applying automated quality checks.

National Healthy Start program survey

The National Healthy Start Program Survey will be conducted with all 88 Healthy Start grantees over a two-month period at the end of the first, third, and fifth grant years. The survey is designed to be self-administered through a web-based application by Healthy Start staff. The web-based application will allow respondents to stop and save the survey and return to it later, reducing burden as they may complete it at their convenience. In addition, internal skip patterns and range checks will be programmed into the survey to ensure the accuracy of data and that respondents do not answer questions unnecessarily. All Healthy Start project directors will be emailed a link to the survey for completion as well as accompanying material, such as a frequently asked questions document. Once they complete the survey, they will click on a submit button and HRSA will be informed that the grantee completed the survey. The web-based application will flag incomplete surveys weekly and grantees will receive email reminders to complete the survey.

Community Action Network survey

The CAN Survey will be conducted over a two-month period with up to 15 CAN board members and committee chairs at 15 Healthy Start sites selected for in-depth study at the end of the first, third, and fifth grant years. There are approximately 10 to 15 CAN board members and committee chairs per Healthy Start grantee for a total of 225 respondents. The survey is designed to be self-administered through a web-based application by CAN members. The survey will take approximately 30 to 45 minutes to complete. The web-based application will allow respondents to stop, save the survey, and return to it at a later time, thus reducing burden as they may complete it at their convenience. In addition, internal skip patterns and range checks will be programmed into the survey to ensure the accuracy of data and that respondents do not answer questions unnecessarily. Active CAN members will be emailed a link to the survey for completion as well as accompanying material, such as a frequently asked questions document. Once CAN members complete the survey, they will click on a submit button and HRSA will be informed that the CAN member completed the survey. The web-based application will flag incomplete surveys weekly, and CAN members will receive email reminders to complete the survey.

Healthy Start site visits

Site visits will be conducted with 15 Healthy Start grantees selected for in-depth study. At each site visit, we will schedule meetings to conduct interviews with four types of key informants: Healthy Start administrative staff, Healthy Start service staff, partner health care providers, and CAN participants. All interviews will be in person. Interviews with Healthy Start administrative staff will last up to 75 minutes, with one conducted per site. Up to two 45-minute interviews will be conducted with service staff, such as outreach workers, case managers, and health educators. Up to two 30-minute interviews will also be conducted with individual health care providers who serve Healthy Start participants in the community, and up to two 45-minute interviews will be conducted with individual active CAN members. We anticipate an average of six interviews per site for a total of 90 interviews across the 15 selected Healthy Start sites. At each site, we will attempt to schedule interviews to take place over two days and within regular work hours. The two-person interview team will include a senior team member to lead the interviews and a junior member to help schedule and facilitate them. We will audio-record the interviews, if key informants agree, and transcribe the recordings.

Healthy Start focus groups

One focus group will be conducted in each of the 15 Healthy Start grantees selected for in-depth study, with 10 to 12 women per group, for a total of 150 to 180 participants. The groups will be conducted in an accessible location, such as Healthy Start offices, public libraries, or community centers. We will ensure that the space is private (such as an enclosed conference room) to maintain confidentiality and minimize distractions. In each site, we plan to schedule the focus group based on participants’ preferences as stated at the time of recruitment. Each focus group will last a total of 90 minutes: 15 minutes will be devoted to intake (including obtaining consent), welcome, and introductions; 60 minutes to discussion; and 15 minutes to wrapping up the session and distributing the gift cards. The focus groups will be audio-recorded for transcription. Upon arriving, participants will complete a participant information form collecting demographic information and responses to closed-ended questions about their perinatal experiences. After completing the focus group, women will receive a $25 gift card for their participation. Each focus group will be staffed with a moderator and a facilitator. The facilitator will be responsible for intake, processing gift cards, welcoming late arrivals, recording the discussion, and taking notes. The moderator will lead the group discussion, ensuring that all participants have an opportunity to speak, drawing out those who are reticent, and cueing participants to share the diversity and similarity of their experiences.

Information collection schedule

Table B.3 summarizes the information collection schedule. Comparison sites will be recruited one time only, before the first round of data collection for the 3P’s Information Form. After OMB approval is received, enrollment and consent procedures will be adapted as needed for each site, and Healthy Start and comparison site staff will be trained on how to implement the enrollment and consent procedures for the 3P’s Information Form. Upon completion of the training, staff will begin enrolling eligible women using the materials described above. Information collection from women will begin in September 2014 for Healthy Start grantees and continue until the end of the grant in May 2019. The comparison sites will collect data during February to May 2015, February to May 2017, and February to May 2019. HRSA will conduct data quality reviews periodically during the field period. The NHSPS and the CAN Survey will be sent out for completion during April to May 2015, April to May 2017, and April to May 2019. The Healthy Start site visits and focus groups will occur during January to April 2019. Attachments C through G include the data collection instruments: Attachment C is the 3P’s Information Form; Attachment D, the NHSPS; Attachment E, the CAN Survey; Attachment F, the site visit protocols; and Attachment G, the focus group protocols.

Table B.3. Information collection schedule

| Task | Time Schedule |
| --- | --- |
| Develop data collection tools | December 2013–January 2014 |
| Receive OMB approval | Summer 2014 |
| Develop data collection systems | June 2014–September 2014 |
| Administer 3P’s Information Form | |
| – Train staff on data collection | August 2014 |
| – Collect individual-level data for monitoring (Healthy Start grantees) | September 2014–May 2019 |
| – Collect individual-level data (Comparison organizations, Round 1) | February 2015–May 2015 |
| – Collect individual-level data (Comparison organizations, Round 2) | February 2017–May 2017 |
| – Collect individual-level data (Comparison organizations, Round 3) | February 2019–May 2019 |
| Field National Healthy Start Program Survey | |
| – Collect program-level data (Round 1) | April 2015–May 2015 |
| – Collect program-level data (Round 2) | April 2017–May 2017 |
| – Collect program-level data (Round 3) | April 2019–May 2019 |
| Field Community Action Network Survey | |
| – Collect program-level data (Round 1) | April 2015–May 2015 |
| – Collect program-level data (Round 2) | April 2017–May 2017 |
| – Collect program-level data (Round 3) | April 2019–May 2019 |
| Conduct Site Visits | January 2019–April 2019 |
| Conduct Focus Groups | January 2019–April 2019 |
| Conduct Analysis and Reporting | |
| – Analyze and synthesize data (Phase I) | June 2016–December 2016 |
| – Develop Phase I report | September 2016–December 2016 |
| – Interim study briefing | December 2016 |
| – Analyze and synthesize data (Phase II) | June 2019–December 2019 |
| – Develop Phase II report | September 2019–December 2019 |
| – Final study briefing | December 2019 |

3. Methods to maximize response rates and deal with nonresponse

3P’s information form

The data collection procedures discussed below were designed to maximize response rates and to promote the accuracy and completeness of information collected.

Training Grantees and Comparison Site Partner Organization Staff. Before the launch of data collection, grantees and comparison site partner organizations will be briefed and trained to ensure they have a full understanding of the informed consent procedures for adults and minors that comprise enrollment and of the data collection plan. HRSA will provide technical assistance to each organization as needed to use procedures customized to its staffing arrangement and work flow. This will ensure that all consent procedures are clear and implementable at the site level. For comparison sites, we will also review evaluation study eligibility criteria. Well-trained staff will help improve response rates by gaining consent, administering the form in a professional but friendly manner that keeps the respondent actively engaged in the interview, and ensuring that complete information is collected. The training will be based on a detailed manual and supplemented by practice exercises in gaining cooperation and in administering the major paths of the form so that staff become comfortable with them. During training, problems with language or routing are sometimes identified. HRSA will leave sufficient time between training and the start of interviewing to correct and test any errors discovered during training.

Implementing the Form in a Web-Based Application. Implementing the form in a web-based application will provide a controlled way to collect data that ensures high quality and consistency by enforcing rules that avoid various kinds of error. The application will (1) control the routing through the form, thus avoiding pathing errors; (2) control response ranges so that out-of-range values are checked and updated in real time by staff; and (3) perform consistency checks to ensure that the respondent’s answers are consistent throughout the questionnaire. In addition, the application will fill in responses from previously asked questions, thus helping staff smoothly administer the survey.
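
As a concrete illustration of these three kinds of checks, the sketch below validates a hypothetical submitted record; the field names and rules are invented for illustration and are not the actual 3P’s form specification.

```python
# Hypothetical examples of routing, range, and consistency checks of the kind
# the web-based application would enforce; not the actual form specification.
def validate(record: dict) -> list:
    """Return a list of problems found in a submitted form record."""
    problems = []

    # Range check: out-of-range values are flagged for real-time correction.
    if not 10 <= record.get("age", 0) <= 60:
        problems.append("age out of range")

    # Routing check: pregnancy follow-ups apply only when routed to them.
    if record.get("currently_pregnant") == "no" and "weeks_pregnant" in record:
        problems.append("weeks_pregnant answered on a skipped path")

    # Consistency check: delivery date (ISO string) cannot precede enrollment.
    if record.get("delivery_date", "") and record.get("enrollment_date", ""):
        if record["delivery_date"] < record["enrollment_date"]:
            problems.append("delivery date precedes enrollment date")

    return problems

print(validate({"age": 8, "currently_pregnant": "no", "weeks_pregnant": 12}))
# ['age out of range', 'weeks_pregnant answered on a skipped path']
```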

Review of the Submitted Forms to Ensure High-Quality Data. Monitoring how staff administer the form, especially early in the process, is critical to ensuring the high quality of the data. During the first week of the launch of data collection, HRSA will review and debrief the first cases completed by each staff member. After this point, HRSA may periodically monitor administration through submitted cases. The review will focus on identifying missing and inconsistent information to provide corrective feedback.

Debriefing Interviewers to Identify Problems Early. HRSA regards debriefing staff as a critical step in quality control of the data collection. Staff will be debriefed after the first few weeks of data collection to identify problems in the question language or survey routing. Corrections will be made to errors identified during the debriefings. Periodically, HRSA will update the training manual to keep pace with changes to or clarifications of procedures, thus ensuring consistency across all staff.

Reviewing Data Frequencies. Frequency reviews are an important tool in ensuring data quality. To determine whether the instrument is performing as specified, frequencies will be reviewed after the first 50 forms are submitted. If programming errors are detected (for example, erroneous skip logic or inadequate range specifications), HRSA will correct the errors immediately. If missing data need to be retrieved from respondents, staff will be instructed to follow up and obtain the information.

Minimizing Nonresponse Bias. Women who choose to participate in Healthy Start at grantee sites will complete the form as part of the intake and enrollment process; thus, the response rate at Healthy Start sites will be 100 percent. However, at the comparison sites, we expect a response rate of approximately 65 percent. A previous Healthy Start participant survey had a response rate of 66 percent.4

The potential degree of nonresponse bias is a function of both the response rate and how different respondents and nonrespondents are with respect to factors related to the outcomes (for example, if nonrespondents are less likely to have access to nutritious foods, they could have worse birth outcomes than the respondents, making the comparison group look better than the population as a whole). We will do everything possible to maximize the response rate, which will help mitigate the risk of nonresponse bias in the comparison sites. However, it is unlikely that we will achieve an 80 percent response rate or higher in comparison sites, as we will depend on comparison sites to help us recruit women. And, because we cannot directly determine how different respondents and nonrespondents are on our key outcome measures, we plan to compare what is known about both respondents and nonrespondents (age, due/delivery date, race/ethnicity, and preferred language) to get a sense of the risk for nonresponse bias. This, in turn, will point us to the best set of characteristics to use when adjusting the sampling weights for nonresponse in comparison sites. We will explore the available data for both respondents and nonrespondents, using modeling to determine which characteristics are significantly related to the propensity to respond and which of these are likely to be related to our key outcomes. We will then construct a propensity score from this logistic regression model and use it to adjust our sampling weights for nonresponse.
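
A minimal sketch of this weighting approach follows, assuming hypothetical column names (age, race_eth, language, responded, base_weight); the actual model would use whichever tracked characteristics prove predictive of response.

```python
# Sketch of a nonresponse adjustment: model response propensity with logistic
# regression, then inflate respondent weights by the inverse propensity.
# All column names and values here are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

frame = pd.DataFrame({
    "age": [19, 24, 31, 27, 22, 35],
    "race_eth": ["a", "b", "a", "c", "b", "a"],
    "language": ["en", "es", "en", "en", "es", "en"],
    "responded": [1, 0, 1, 1, 0, 1],   # consented and completed the form?
    "base_weight": [1.0] * 6,          # initial sampling weight
})

# One-hot encode the categorical predictors and fit the propensity model.
X = pd.get_dummies(frame[["age", "race_eth", "language"]], drop_first=True)
model = LogisticRegression().fit(X, frame["responded"])
frame["propensity"] = model.predict_proba(X)[:, 1]

# Respondents who resemble nonrespondents receive larger adjusted weights.
respondents = frame[frame["responded"] == 1].copy()
respondents["adj_weight"] = respondents["base_weight"] / respondents["propensity"]
print(respondents[["propensity", "adj_weight"]])
```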

National Healthy Start program survey and Community Action Network survey

Two previous Healthy Start Program Surveys have been conducted with grantees (OMB #0915-0287 and #0915-0338), with response rates of 99 and 100 percent, respectively. Based on previous experience conducting network surveys with organization partners, we expect a response rate between 80 and 95 percent for the CAN Survey. Although we do not expect issues with response rates, the self-administered web-based NHSPS and CAN surveys will allow respondents to stop and return to the survey at their convenience, encouraging completion. In addition, clear instructions with an email address and telephone number for a help desk will be provided to answer any questions that respondents may have. As with the 3P’s Information Form, implementing the surveys in a web-based application will provide a way to collect high-quality, consistent data and minimize burden by (1) routing respondents through the form, thus avoiding pathing errors; (2) including range checks so that out-of-range values are flagged for respondents to correct immediately; and (3) including consistency checks to ensure that the respondent’s answers are consistent throughout the questionnaire. In addition, the application will fill in responses from previously asked questions, thus helping respondents smoothly complete the survey. We will develop clear instructions and program the web-based application to be as intuitive as possible to minimize the training required for grantee staff and CAN members. During the field period, the web-based application will automatically send weekly reminders to those who have not completed the survey.

Healthy Start site visits and focus groups

Response to the two qualitative components (site visits and focus groups) is expected to be high because of stakeholder interest in the Healthy Start evaluation and Healthy Start grantees’ ability to leverage their relationships with participants for focus group recruitment. For the site visits, interviews will be scheduled at the convenience of the key informant. A response rate of 95 percent is expected for key informants during the site visits based on experience with similar activities and a typically high level of motivation among Healthy Start staff and their partners. Outreach to prospective focus group participants will take place through Healthy Start grantees; we will ask grantees to post flyers and other materials about the focus groups in their locations and mention the focus groups to their participants. Even with Healthy Start endorsement of the focus groups, we expect a 50 percent rate of no-shows on the day of the focus group based on experience conducting focus groups with similar populations of Medicaid and CHIP participants and pregnant and postpartum women for the Text4baby evaluation. Therefore, we will recruit twice the target number of focus group participants. We will also offer a payment of $25 in the form of a gift card to compensate participants for their time. In addition, the groups will be scheduled based on the preferences expressed by the most women at the time of recruitment. The groups will be held in convenient locations to increase attendance. Reminders will be sent to confirmed participants by email and/or telephone one week in advance and again the day before the group.

4. Tests of procedures or methods to be undertaken

HRSA carried out a pre-test of the 3P’s Information Form with six Healthy Start participants; the NHSPS with two Healthy Start programs; and the CAN Survey with five CAN members. Key findings for each instrument are discussed below. Attachment H contains the Pre-test Report and Recommendations.

3P’s Information Form. The timing of the survey was consistent with the budgeted survey length (30 minutes). The survey worked well but needed a few additional probes and wording changes (particularly, additional probes to indicate that the questions should be answered for the last pregnancy; wording changes to clarify the type of nursery the infant stayed in at the hospital, the types of people in the room with the woman at delivery, and the number of children living in the same household; and the addition of a response option to allow respondents to select both Healthy Start and another source of information). In addition, because none of the women could answer it, we removed the question related to Apgar score, and we revised the vaccination question from asking for a list of vaccinations received by the woman’s child to asking whether the child ever received vaccinations and when the last vaccination occurred. We also removed duplicative questions about receipt of health education on breastfeeding and vaginal and C-section delivery.

NHSPS. The timing of the survey averaged three hours, which was shorter than the instrument used for the previous evaluation (four hours). However, to further reduce burden, we pared down the survey to an estimated burden of two hours. Based on feedback from the pre-test, we revised questions to improve flow and reduce burden, such as combining questions related to outreach and participant recruitment strategies and changing requests for numbers of participants to response options with percentage ranges. We added a few topics on domestic violence and immigration to one question. Additionally, we eliminated or simplified questions for which project applications and reports may be a source of information, such as additional names for the project in the community, specific models and curricula used, average case load, and specific types of activities related to health insurance enrollment; many of these questions were open-ended or required a series of responses that take more time to complete. We found that grantees checked all topics in the question about provision of health education, suggesting they may not have fully read each topic. Because the list was very long, we deleted health education topics already covered in the participant-level form to shorten the list and encourage more thoughtful responses. Otherwise, we made minor wording changes to make questions and response options clearer.

CAN Survey. The timing of the survey was consistent with the budgeted survey length (45 minutes). The survey worked well but needed a few additional instructions for questions related to estimating numbers and dates; we also added a few “don’t know” response options. We deleted one question that asked for budget information and the fields where CAN members could list organizations outside the CAN with which they collaborated; these questions were confusing for all respondents and yielded unreliable data. Because key community organizations are represented on the CAN, we anticipate that this will not affect the quality of the information collected.

5. Individuals consulted on statistical aspects and individuals collecting and/or analyzing data

Individuals consulted on statistical aspects

HRSA/MCHB, Mathematica Policy Research staff, and previous Healthy Start grantees were consulted about the substantive, methodological, and statistical aspects of the study. Their recommendations were incorporated into the study design and instruments on an ongoing basis. The person responsible for receiving and approving the instruments and information collection is Keisher Highsmith, MCHB. Table B.4 lists the individuals consulted.

Table B.4. Individuals consulted

Hani Atrash, Director
Division of Healthy Start Services, MCHB
[email protected]
301-443-0543

David de la Cruz, Deputy Director
Division of Healthy Start Services, MCHB
[email protected]
301-443-6332

Keisher Highsmith, Director, Special Initiatives and Program Planning and Evaluation
Division of Healthy Start Services, MCHB
[email protected]
301-443-1963

Johannie Escarne, Senior Public Health Analyst
Division of Healthy Start Services, MCHB
[email protected]
301-443-5692

Willie Tompkins, Senior Public Health Analyst
Division of Healthy Start Services, MCHB
[email protected]
301-443-1551

So O’Neil, Senior Researcher
Mathematica Policy Research
[email protected]
617-301-8975

David Jones, Senior Researcher
Mathematica Policy Research
[email protected]
617-674-8351

Margo Rosenbach, Vice President
Mathematica Policy Research
[email protected]
617-301-8967

Barbara Carlson, Associate Director of Statistics
Mathematica Policy Research
[email protected]
617-674-8372

Jared Coopersmith, Statistician
Mathematica Policy Research
[email protected]
202-250-3512

Angela Jaszczak, Senior Survey Researcher
Mathematica Policy Research
[email protected]
312-994-1052

Holly Matulewicz, Survey Researcher
Mathematica Policy Research
[email protected]
617-674-8362

Tiffany Wootson Majors, Project Director
Baltimore Healthy Start
[email protected]
410-396-7318 ext. 232

Maria Lourdes F. Reyes, Director of California Programs
California Border Healthy Start
[email protected]
619-791-2610 ext. 305

Virginia Berry White, Project Director
Low Country Healthy Start
[email protected]
803-531-8008


Individuals collecting and/or analyzing data

Funded Healthy Start grantees and HRSA evaluation contractor field staff or partner organization staff at the 15 comparison sites will collect data for the 3P’s Information Form. Healthy Start grantees are expected to be funded by June 2014, and a list of funded Healthy Start grantees will be posted at http://www.hrsa.gov. The comparison sites will be selected from unfunded Healthy Start applicants. An evaluation contractor will be selected to collect information for the National Healthy Start Program Survey, Community Action Network Survey, site visits, and focus groups. The evaluation contractor will also conduct the analysis of the data collected for all five information collection efforts presented in this OMB package.

1 It is possible that there will not be a set of unfunded sites that comprise an appropriate comparison to the group of all Healthy Start projects, particularly because we know that these sites will have lower funding scores based on their applications. For example, the comparison sites could be quite different from a representative group of Healthy Start projects in terms of need and populations served. A subset of Healthy Start projects with funding scores closest to the funding cutoff would likely be a more appropriate comparison. In this case, outcomes for comparison women in the unfunded sites with the 15 highest funding scores will be assessed against matched women four to seven months postpartum during the four-month study period in the Healthy Start projects with the lowest 15 funding scores. This comparison would likely do a better job of isolating the effect of Healthy Start on outcomes when there is not an appropriate group of sites among the unfunded organizations to be compared to a representative group of Healthy Start projects. In other words, there is less concern that there are unobserved characteristics of this subgroup of Healthy Start projects that are associated with the likelihood of funding and the outcomes. However, the effect measured by this approach will be less generalizable to all Healthy Start projects, given that it is the subset of projects with the lowest funding scores. The selection of the 15 lowest scores among the Healthy Start projects can be relaxed somewhat to ensure that the group is more representative of all Healthy Start projects, thus increasing the generalizability of these findings while largely maintaining the ability to isolate the effect of Healthy Start.

2 In the event that there are not a sufficient number of appropriate comparison sites from unfunded communities, we can also use secondary data sources to select comparison communities. The secondary data sources would be the same as those listed above for conducting the matching. We would begin by narrowing communities down to those that meet the eligibility criteria for application to Healthy Start and then conduct the same matching methods as we would have with unfunded communities.

3 All Healthy Start participants are expected to respond to the 3P’s Information Form because it is part of intake into the program. Those who do not consent to participate in the program and complete the form will not be considered participants.

4 Rosenbach, M., S. O’Neil, B. Cook, L. Trebino, and D. Klein Walker. “Characteristics, Access, Utilization, Satisfaction, and Outcomes of Healthy Start Participants in Eight Sites.” Maternal and Child Health Journal, 2010, 14(5):666–79.
