HS Supporting Statement_Revision_ April 2011_04 28 11

Evaluation of Core Components of the Federal Healthy Start Program

OMB: 0915-0338


Health Resources and Services Administration/Maternal and Child Health Bureau

An Evaluation of Core Components of the Federal Healthy Start Program


A. Justification


1. Circumstances of Information Collection

The Health Resources and Services Administration’s (HRSA) Maternal and Child Health Bureau (MCHB) is requesting approval from the Office of Management and Budget (OMB) for a survey of Healthy Start grantees. The survey (Grantee Project Director Survey of 2011) is designed to collect primary data needed for an evaluation of the current National Healthy Start Program, which includes 104 grantees across the nation. This one-time collection of data will be combined with other extant data about Healthy Start (e.g., performance measures and impact reports) to enable MCHB to assess the progress and results of the program in terms of its core program goals. The evaluation supports MCHB’s Government Performance and Results Act (GPRA) requirements and is aligned with the nature of the program as a community-based intervention. The Healthy Start Reauthorization Act of 2007 (Public Law No: 110-339) authorized appropriations for the Healthy Start initiative through fiscal year 2013 (Attachment A), including appropriations reserved for evaluations to determine whether Healthy Start projects have been effective in reducing racial and ethnic disparities in infant mortality.


The National Healthy Start Program

The National Healthy Start Program, funded through HRSA’s MCHB, was developed in 1991 with the goal of reducing infant mortality disparities in high-risk populations through community-based interventions. The program began as a 5-year demonstration project in 15 communities with infant mortality rates 1.5 to 2.5 times the national average. The National Healthy Start Program has since expanded in size and mission to include 104 grantees implementing projects in 38 states, the District of Columbia, and Puerto Rico. The Program emphasizes a community-based, culturally competent approach to the delivery of services for women and their infants. As specified by the HRSA 2001 Guidance, the three core program goals for Healthy Start are to (1) reduce racial and ethnic disparities in access to and utilization of health services; (2) improve the local health care system; and (3) increase consumer/community voice and participation in health care decisions. To meet those goals, HRSA identified nine core components that grantees are required to implement: five service components1 and four systems-building components2.


Past Evaluations of the National Healthy Start Program


The first national evaluation of the National Healthy Start Program, conducted by Mathematica Policy Research from 1997 to 1999, included a process and outcome analysis of the original five-year demonstration project in 15 communities.3 This evaluation examined the implementation of the 15 demonstration projects between fiscal years 1992 and 1996, and whether these projects achieved the goals of Healthy Start in reducing infant mortality and improving maternal and infant health. Findings suggested that Healthy Start was associated with improvements in prenatal care utilization, the preterm birth rate, low and very low birth weight rates, and the infant mortality rate; several sites had significantly better outcomes than their comparison sites.


This first national evaluation did not capture the context in which the Healthy Start programs operated and did not apply a social determinants of health framework. The Secretary’s Advisory Committee on Infant Mortality therefore recommended to MCHB that future evaluations of the National Healthy Start Program include this information and reflect the community-based, culturally competent focus of the programs. The first national evaluation also could not delineate which program components or features led to which outcomes.


In 2002, a contract was awarded to Abt Associates Inc. and its partner Mathematica Policy Research to conduct a second national multi-year evaluation of the implementation of the Healthy Start program. The purpose of this second national evaluation was to examine the projects implemented during the 2001 to 2005 funding cycle. This national evaluation was designed to provide information on quality improvement by assessing program implementation and performance while also tracking program outcomes.


The second national evaluation comprised two phases. A key objective of the first phase (conducted from 2002 to 2004) was to provide information about the funded grantees and the implementation of the program components of the National Healthy Start Program. Three questions directed the first phase of the evaluation: (1) What are the features of the individual Healthy Start projects? (2) What results (intermediate outcomes) have Healthy Start projects achieved? (3) Is there an association between certain project features and the achievement of project results?


A survey of all grantees served as the primary data source for the first phase of the evaluation. The data were collected in 2004 but covered the grantees’ calendar year 2003 activities and projects. Completed by 95 of the 96 grantees funded at that time, the survey provided a “point-in-time snapshot” of the implementation of the Healthy Start program components. Specifically, the survey gathered data on the extent to which the components were being implemented, the grantees’ self-reported achievement of intermediate outcomes, and the relative contributions of program components.


Grantees were more likely to report that service components (namely case management and health education) made a primary or major contribution to achieving their intermediate program outcomes, with fewer grantees reporting the same about systems components. As a result, this phase of the evaluation helped to establish the importance of the five service components and four systems components (referenced in footnotes 1 and 2 above) that are now requirements for all grantees (OMB #s 0915-0287 and 0915-0300). A copy of the report of this phase of the Healthy Start national evaluation is available online at http://mchb.hrsa.gov/healthystart/phase1report/.


Building on the findings from the first phase, the second phase of the national evaluation (conducted 2004-2007) provided a more in-depth analysis of a subset of eight grantees. In addition to further examining the three research questions mentioned above, the second phase explored a fourth question: What Healthy Start features are associated with improved perinatal outcomes? This phase included site visits to the eight grantees to assess program implementation and outcomes, as well as a survey of Healthy Start program participants (only at those eight grantee sites) to ascertain their perspectives on services received during pregnancy and the interconceptional period. These findings confirmed the results of the first phase and elucidated important themes found across the eight grantees.4,5 For instance, results illustrated that each project has a unique configuration of the core components that is designed to serve the specific populations within its community. Overall, staff at the Healthy Start projects described in the case studies most frequently cited outreach, case management, and health education as the service components that contributed most to their achievements, while consortia were cited as the most influential systems component in reaching the goals of Healthy Start.


Present Evaluation

The previous evaluation helped to establish the importance of the Healthy Start program components, from the grantees’ perspectives, as they relate to achieving the goals of the National Healthy Start Program. The present evaluation builds on the previous evaluation by employing both quantitative and qualitative data sources. The Grantee Project Director Survey of 2011 (Grantee PD 2011 Survey) will document the accomplishments of the National Healthy Start Program for the subset of service and systems activities identified as most important in previous evaluations. The present evaluation will also utilize quantitative Performance Measure data6 to assess grantees’ progress toward achieving the short-term, intermediate, and long-term outcomes that are expected to occur if program elements are successfully and completely implemented.


The evaluation relies on a logic model (see Exhibit 1) that illustrates how implementation of the nine program components is hypothesized to lead to a set of short-term outcomes, which in turn may translate into intermediate and then longer-term outcomes:


  • The double-headed arrows in the “Contextual Factors” section of the Logic Model represent characteristics, from the individual level up to the State and national policy levels, that may interact with one another and may affect service and system implementation.


  • The larger arrows, from Contextual Factors to Implementation, indicate the influence that contextual factors may have on how the Healthy Start projects implement their service and systems components and the features associated with each component.


  • The implementation of the services and systems may then affect short-term, intermediate, and long-term outcomes. Although distinct, the individual and systems short- and intermediate-term outcomes may interact with or influence each other. For example, positive individual outcomes may lead participants to become more involved in the Healthy Start project at the systems level. As short- and intermediate-term individual outcomes unfold, they will in turn affect longer-term population outcomes.


  • At the bottom of the logic model are arrows indicating that funding, and in particular the duration of funding, may affect a Healthy Start project’s ability to achieve short-, intermediate-, and long-term outcomes.



These long-term outcomes – improved maternal and child health outcomes – are the primary focus for evaluating Healthy Start; accordingly, not all outcomes in the logic model will be measured in the current national evaluation. The outcomes of focus for this evaluation include:

  • Birth outcomes including low/very low birth weight and infant mortality

  • Maternal health including health risk behaviors

  • Inter-pregnancy/inter-delivery interval and birth spacing

  • Child health during the first two years of life.


As depicted in the logic model shown in Attachment B, these maternal and child health outcomes consist of improvements that are expected to occur if program elements are successfully and completely implemented and if short-term and intermediate outcomes are achieved. These outcomes should therefore be observed at a later point in the project, after the intermediate outcomes have been reached and only after the fully implemented program model has been operating for a sufficient period of time.


The logic model also illustrates a set of assumptions about the Healthy Start Program that have guided the evaluation design. These assumptions, based on the previous evaluations of Healthy Start conducted to date, include the following:

  • Contextual factors and implementation features influence how Healthy Start projects implement service components.

  • Each project uses a configuration of implementation features specific to the needs of the community.

  • Consumer participation and leadership are features of service components that can influence achievement of outcomes.

  • A project’s ability to measure or achieve desired outcomes may be influenced by how long the project has been funded and stage of implementation of the nine components.


Informed by these assumptions and the logic model for the Healthy Start Program, the goals of the present national evaluation are to:

  • Assess the grantees' performance and progress toward achieving the goals and outcomes; and

  • Assess the relationship of grantees’ performance and progress to the implementation features of Healthy Start Program components and contextual conditions.


Seven (7) questions guide the national evaluation of the Healthy Start Program:

  1. How are the nine program components and their features implemented across all Healthy Start projects?

  2. What expanded service components are implemented by Healthy Start projects?

  3. How do project components and features correlate to intermediate and long-term outcomes?

  4. How do consumer participation and leadership function as features of the Healthy Start program? Is consumer participation more strongly associated with achievement of outcomes for some program components than for others?

  5. How does the stage of implementation of each Healthy Start project influence the project’s ability to measure and achieve intermediate and long-term outcomes?

  6. What is the relationship between grantee reflections and program outcomes (service/systems level; individual/population levels)?

  7. What social determinants and contextual factors influence the implementation of the program components and subsequent outcomes?


Table 1 illustrates how the present evaluation will utilize the Grantee Project Director Web-based Survey of 2011 (for which OMB approval is needed) and other data sources to answer these evaluation questions.

TABLE 1: Evaluation Questions and Relevant Data Sources

Data sources: Grantee PD Survey, 2011; MCHB Performance Measures; Phase I Evaluation Project Director Survey (2004); Impact Reports; Local Evaluations; County, State & National Benchmark Data.

  1. How are the nine program components and their features implemented across all Healthy Start projects?

  2. What expanded program components are implemented by Healthy Start projects?

  3. How do program components and features correlate to intermediate and long-term outcomes? (tbd)

  4. How does consumer participation and leadership function as a feature of the Healthy Start service components? Is consumer participation and leadership more strongly associated with achievement of outcomes for some components than others? (tbd)

  5. How does the stage of implementation for each project component or length of funding influence the project’s ability to measure and achieve intermediate and long-term outcomes? (tbd)

  6. What is the relationship between grantee reflections and program outcomes?

  7. What social determinants and contextual factors influence the implementation of the program components and subsequent outcomes? (tbd)

The other data sources listed in Table 1 that will be used in the present national evaluation are briefly described below:

  • Performance Measures are developed by MCHB to monitor the progress of MCH programs toward their objectives. All Healthy Start grantees report annually on objectives and indicators for 15 performance measures in all project areas through the Electronic Handbook (EHB). Because these measures are standardized across all grantees, they provide an objective basis for comparing project outcomes.

  • The Phase 1 Evaluation Project Director Survey (2004) was the survey conducted in the previous national evaluation (2002 to 2007). Items from that survey are included in the proposed 2011 survey to enable comparisons of some component implementation characteristics, of grantees’ perspectives on outcome achievements, and of the contributions of program components. However, the proposed 2011 survey is more comprehensive than the 2004 survey in that it will collect detail on aspects of the Healthy Start program components that have not been explored in previous evaluations but have been identified as essential elements of the Healthy Start Program logic model.

  • Impact Reports provide a detailed narrative of project activities and the project directors’ perspective on the impact of these activities on the service population. Grantees are required to submit these narratives at the end of their 4-year project period through the EHB. However, it is difficult to extract comparable data from these reports because they are not standardized across the 104 projects and vary in quality and depth of information. These reports were used to inform survey questions and response categories, and will be used to identify contextual factors to include in the analyses.

  • Local Evaluations are grantee-funded evaluations on specific topics or issues of interest to the project and analyses of project outcomes based on trend analysis of performance measures. A number of grantees have collaborated with external researchers to conduct rigorous studies that have been published in peer-reviewed journals such as the Maternal and Child Health Journal. Depending on the number of quality evaluations available, we will use these in a systematic review to provide further evidence of the impact of select program components on outcomes of the National Healthy Start Program.

  • County, State (Maternal and Child Health Block Grant – Title V), and National Benchmarks are performance measures that are the same as, or similar in definition to, the Healthy Start performance measures and Healthy People 2010 (HP 2010) targets. State benchmarks are reported annually for the MCHB Title V program in the Title V Information System (TVIS), and HP 2010 national targets in many health areas are available from national data sources (e.g., vital records) through the Centers for Disease Control and Prevention’s National Center for Health Statistics. Accessing these existing data will enable benchmark comparisons with Healthy Start that further illustrate the association between the Program components and outcomes and help quantify the effects of the Healthy Start Program on participants.
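As a simple illustration of the kind of benchmark comparison described above, the sketch below compares a project-area infant mortality rate to a national target; all values, including the target, are invented for illustration and are not figures from this evaluation.

```python
# Hypothetical benchmark comparison: project-area infant mortality rate
# versus a national target. All values below are illustrative assumptions.
project_deaths = 12      # infant deaths in the project area (hypothetical)
project_births = 1500    # live births in the project area (hypothetical)
national_target = 4.5    # assumed target, deaths per 1,000 live births

# Infant mortality is conventionally expressed per 1,000 live births.
project_rate = 1000.0 * project_deaths / project_births
rate_ratio = project_rate / national_target  # ratio > 1 means above target

print(project_rate)           # 8.0
print(round(rate_ratio, 2))   # 1.78
```

A ratio above 1 indicates the project area remains above the target; tracked over time, the same calculation shows whether the gap to the benchmark is narrowing.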


As illustrated in Table 1, the Grantee PD Survey of 2011 is the primary data source and will provide the most consistent, updated, and detailed information pertaining to all of the evaluation questions. The survey will provide consistent information on these items for all grantees; such data do not currently exist. The survey is the only systematic effort to collect information on the features of each program component, including intervention type, settings and frequency, staffing and personnel, case load, referral types, consumer participation, cultural competence, and male involvement. Additionally, the data collected in the survey will provide detail on aspects of the Healthy Start program that have not been explored in previous evaluations but have been identified as essential elements of the National Healthy Start Program logic model. These elements include home visiting, breastfeeding and smoking cessation activities, healthy weight services, domestic violence and child abuse services, and the establishment of a medical home for women and infants.


The survey will be the main source of data on grantees’ implementation activities and will be used to generate independent variables. Objective data on maternal and child health outcomes achieved by the Healthy Start grantees will be obtained from annual performance measures reported in the Maternal and Child Health Bureau’s Discretionary Grant Information System (DGIS), in addition to state and national data sources such as the States’ vital records and the National Center for Health Statistics.



2. Purpose and Use of Information


The purpose of this one-time survey is to collect consistent data on the service and systems-building components of all current Healthy Start projects. The data collected through this survey will be used to:


  • Evaluate the grantees' performance and progress toward achieving the goals and outcomes specified in Attachment C;

  • Evaluate the relationship of performance and progress to the implementation features of Healthy Start Project components (listed on bottom of page 1);

  • Assist MCHB in determining, on a national level, where technical assistance may be needed to improve program performance, set future priorities for program activities, and contribute to the overall strategic planning activities of MCHB; and

  • Provide a baseline for future measurement of the initiative's long-term effects.


This one-time survey will assist in addressing the seven evaluation questions identified above (see page 5).


To address these questions, the evaluation will utilize data from the proposed one-time Grantee Project Director Survey of 2011 (provided in Attachment D) to all Healthy Start grantees in conjunction with data from other existing data sources (identified on pages 6-7). In addition, this survey will provide data on the core service and systems-building components of the program (referenced in footnotes 1 and 2 on page 1); and features of each program component including intervention type, settings and frequency; staffing and personnel; case load; referral types; consumer participation; cultural competence; and male involvement. The survey data will be useful at the Federal level and for individual grantees.


Federal Uses of Information


The data collected by MCHB from all current grantees will allow the Bureau to monitor grantee performance and progress toward achieving goals and outcomes specified in Attachment C. The information will provide timely data on the range and variation of approaches of service implementation. This information will help the Bureau determine, on a national level, where technical assistance may be needed to improve program performance, set future priorities for program activities, and contribute to the overall strategic planning activities of the Bureau.


Grantee Uses of Information


Grantees include a variety of public and non-profit organizations. The data collected through the survey will help grantees assess their activities and compare their activities to other Healthy Start programs. The aggregated information will inform grantees about strategies being used by other programs and help them understand important contributing factors. It will also assist grantees in meeting their requirements for accountability. The survey data will also help individual grantees, their national association, and others involved in addressing disparities in maternal and child health outcomes to more systematically explain the Healthy Start program and its contributions to this national problem.


Information Collection


The 2011 Grantee Project Director Survey (in Attachment D) is divided into the four main parts outlined below:

  • Part A. Services: This section asks general programmatic information on the service components (referenced on page 1) to gain an understanding of an individual project’s service delivery model. Additional sections gather specific information on special implementation features and health education topics.

  • Part B. Systems: This section asks general information about the systems components (referenced on page 1) and specific information on the structure, roles, and activities of the Consortium.

  • Part C. Data Systems and Tracking: This section will collect information on whether the project has any systems in place to track participant-level data on services and health outcomes; whether local evaluations are conducted voluntarily; and how collected data is used to inform the project.

  • Part D. Reflections and Accomplishments: This section will assess grantees’ perceived accomplishments and progress towards intermediate and long-term outcomes. Questions related to changes over time will capture information about key contextual factors and barriers.

The survey respondent will be the Healthy Start Grantee Project Director, who is expected to be the most knowledgeable about the program. However, the Project Director can share access to the web survey to allow other staff to assist in completing it. The data will be reported for calendar year 2009. Additional information on the sampling methodology can be found in the Statistical Methods section.


3. Use of Improved Information Technology


To expedite and standardize data collection, the survey will be administered online. Grantee project directors will be primary respondents for the survey.


4. Efforts to Identify Duplication


There are no other HRSA/MCHB data collection activities evaluating the Healthy Start Program. The information that we are requesting to collect is not available elsewhere and the data collection has not taken place previously. We will use existing data on performance measures from the Discretionary Grant Information System (OMB #0915-0298) collected from Healthy Start Grantees to supplement the data collection from this survey for this evaluation.

5. Involvement of Small Entities


This activity does not impact small entities.


6. Consequences If Information Collected Less Frequently


This is a one-time data collection effort. It will not be possible to evaluate the success of implementation approaches of the individual grantees or of the National program without collecting this information. The data from this survey will enable the MCHB to describe grantees’ implementation of service and systems-building program components that are likely to have positive maternal and child health outcomes.


7. Consistency With the Guidelines in 5 CFR 1320.5(d)(2)


This data collection is fully consistent with the guidelines in 5 CFR 1320.5(d)(2).


8. Consultation Outside the Agency


The notice required by 5 CFR 1320.8(d) was published in the Federal Register on April 29, 2010 (Volume 75, Number 82, pages 22595-22596). The 30-day Federal Register notice was published on September 22, 2010. No comments were received.


A description of the national evaluation, areas of inquiry for the survey, and the timeline and procedures for administering the survey were presented to all Healthy Start grantees at the annual National Healthy Start Association Meeting in March 2010.


In addition, the Project Officer invited members of the National Healthy Start Association Board and grantees to review or participate in a pilot of the draft survey (paper version) in June 2010. Four grantees participated in the pilot by completing the paper version of the survey. This exercise was intended to obtain grantees’ feedback on the wording, order, and flow of questions, as well as the amount of time and materials needed for completion. Their survey responses were not used for analyses. MCHB staff and other grantees reviewed the survey (but did not complete it) and provided feedback over several conference calls. The survey was revised based on feedback from grantees and MCHB staff. Contact information for the four grantees that participated in the pilot is below:



9. Remuneration of Respondents


Respondents will not be remunerated or compensated.


10. Assurance of Confidentiality


No personally identifiable information will be collected. The survey will collect information at the program level and then data will be aggregated to the national level.


11. Questions of a Sensitive Nature


There are no questions of a sensitive nature.


12. Estimates of Annualized Hour Burden


The 104 Healthy Start grantees will be asked to complete the survey. Based on feedback from grantees, the average burden is estimated at five hours per response. The following table identifies the annualized burden estimate:


Type of Form | Number of Respondents | Responses Per Respondent | Burden Hours per Response | Total Burden Hours | Hourly Wage Rate* | Total Hour Cost
Survey | 104 | 1 | 5 | 520 | $40 | $20,800

*An average of project director salaries was used to determine the hourly wage rate used in the calculation.
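The totals in the table follow directly from the per-response figures; the arithmetic can be sketched as follows (a check of the calculation, not part of the collection instrument):

```python
# Arithmetic behind the annualized burden estimate in the table above.
respondents = 104         # Healthy Start grantees asked to complete the survey
responses_each = 1        # one-time survey
hours_per_response = 5    # estimated burden hours per response
hourly_wage = 40          # average project director wage rate, dollars/hour

total_burden_hours = respondents * responses_each * hours_per_response
total_hour_cost = total_burden_hours * hourly_wage

print(total_burden_hours)  # 520
print(total_hour_cost)     # 20800
```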



13. Estimates of Annualized Cost Burden to Respondents


There are no capital and start up costs associated with this data collection.


14. Estimates of Annualized Cost to the Government


The total cost to the Government for collecting these data is the portion of the evaluation contract devoted to data collection and analysis activities, estimated at approximately $529,786 over the two-year contract.


15. Changes in Burden


This is a new data collection activity.


16. Time Schedule, Publication and Analysis Plan


Schedule

The schedule for fielding the survey, analyzing the data, and publishing the results of the National Survey of Healthy Start programs is as follows:


OMB approval received                                           April 2011

Send advance email notification via listserv                    May 2011

Conduct survey introduction training via webinar                May 2011

Launch survey (survey will “go live”)                           May 2011

Email reminders sent at 2, 4, and 6 weeks to non-respondents    May – June 2011

Phone reminders at 8 weeks to non-respondents                   June 2011

Phone-assisted survey completion                                June – July 2011

Close survey                                                    July 2011

Tabulation and analysis                                         July – August 2011

Submit final evaluation report                                  September 2011


Analysis Plan

The analysis plan for the National Healthy Start Program Evaluation is based on the hypothesized relationships, depicted in the Logic Model (shown in Attachment B), between program components and features and outcomes at different levels. The table in Attachment E highlights how the key data source for this evaluation, the Grantee Project Director Survey of 2011, supplemented with grantee performance measures data provided to HRSA in the Electronic Handbook (EHB), will be used to address each evaluation question. In addition, data for related State MCHB Title V performance measures from TVIS and HP 2010 measures and targets from CDC/NCHS will be used as benchmarks. These standardized measures will enable comparison across states, provide an estimate of progress toward program, state, and national goals/targets, and help identify which aspects of the program lead to which outcomes.


Table 1 in Attachment E presents a summary of the key constructs to be measured to answer each of the evaluation questions and the data to be used to define the independent and dependent variables that will serve as measures of those constructs. It also highlights the sections of the survey from which questions will be taken to develop and operationalize the independent and dependent variables, and includes a column documenting the process for operationalizing the variables. Table 2 provides the definitions of all Healthy Start performance measures and of the similar measures to be used for state and national benchmarks.


Note that not every element presented in the logic model will be measured in the current national evaluation. In particular, it is difficult to collect direct measures for many of the systems-based outcomes. The evaluation will instead collect data that can be used to make inferences about achieved outcomes.

For constructs such as consumer participation, consumer voice, efficiency of the service system, and sustained community capacity to reduce disparities in the target population, a number of questions throughout Sections 1, 5, 6, and 7 of the 2004 survey and Parts A (Sections 1 and 5), B (Sections 1 and 2), C (Section 1), and D (Section 1) of the 2011 survey will be used to define and measure these outcomes. For example, responses to questions about the use of former Healthy Start participants as program staff and peer group leaders for health education sessions, the cultural competence of program staff, community participants serving as active members of the Consortium, and grantee reflections will be used to define consumer voice. Comparison of questions from the 2011 and 2004 surveys specifically related to the number, structure, purpose, and active membership of Healthy Start project consortia will provide sufficient information to make an inference about the improvement of consumer voice.


Inferences about the "improved efficiency of the service system", or improved coordination of services, will be based on responses to questions in Part A Section 1 of the 2011 Project Director Survey that address the strategies used to raise community awareness of the Healthy Start project, recruit and retain participants, and processes for following up on completion of participant referrals for services.


For inferences about “sustained community capacity,” we expect that communities with active membership in Healthy Start consortia and with former Healthy Start participants serving as program staff will develop a local pool of individuals who are aware of the issues surrounding infant mortality and the resources available to address those issues in their community. We can, therefore, infer that an expansion of this pool over time will lead to sustained community capacity to reduce disparities in health status in the target community. In addition, Project Directors’ responses to the Grantee Reflections sections of the 2004 and 2011 surveys will provide data to define and measure this outcome.


Analyses for this evaluation will employ a variety of methods, including descriptive statistics (means, percentages), simple tests of differences across subgroups and over time (chi-square tests), and analytic statistics (correlations, measures of association, and regression) to examine relationships between program components and program features and program outcomes. Most of the evaluation questions call for descriptive analyses, which can be addressed by calculating the percentages of grantees that implemented each of the nine program components and the features of each component, with comparisons of these percentages across grantees and over time. Cross-tabulations of program components and program features with program outcomes will also provide important information.
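For illustration only, the descriptive and cross-tabulation analyses described above could be sketched as follows. The component name, grantee-level data, and outcome variable below are entirely hypothetical, and pandas/scipy are assumed tooling rather than anything specified in the analysis plan:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical grantee-level data: whether each grantee implemented a
# given program component (1 = yes) and whether a target outcome was met.
df = pd.DataFrame({
    "case_management": [1, 1, 1, 0, 1, 0, 1, 1],
    "outcome_met":     [1, 1, 0, 0, 1, 0, 1, 0],
})

# Descriptive statistic: percent of grantees implementing the component.
pct_implemented = 100 * df["case_management"].mean()

# Cross-tabulation of component implementation with the outcome.
xtab = pd.crosstab(df["case_management"], df["outcome_met"])

# Chi-square test of association (small-cell caveats apply with few grantees).
chi2, p_value, dof, expected = chi2_contingency(xtab)

print(f"{pct_implemented:.0f}% implemented; chi2={chi2:.2f}, p={p_value:.3f}")
```

With only 104 grantees in the universe, cell counts in such cross-tabulations can be small, so exact tests or descriptive comparisons may be preferable to asymptotic chi-square results in practice.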


Comparisons across time will be based on data for the same variables or constructs from the 2004 and 2011 Project Director Surveys. Note that the Project Director Survey administered in 2004 reported on 2003 data, and the 2011 Project Director Survey will report on 2009 data. Like the 2011 survey, the 2004 Project Director Survey, completed by 95 of the 96 grantees funded at that time, provided a “point-in-time snapshot” of the implementation of the Healthy Start program components.






17. Exemption for Display of Expiration Date


The first screen of the web version of the survey will display the OMB expiration date, as well as a Paperwork Reduction Act (PRA) statement.


18. Certifications


HRSA certifies that the collection of information encompassed by this request complies with 5 CFR 1320.9 and the related provision of 5 CFR 1320.8(b)(3).



B. Statistical Methods


1. Respondent Universe and Sampling Methods


The respondent universe for the National Survey of Healthy Start programs will include the full universe of 104 Healthy Start projects. All grantee projects are included because the number of grantees is relatively small. Surveying the entire universe will result in the most reliable and valid data for the national evaluation and provide the Healthy Start program with better information for management improvement purposes.


A 99-100% response rate is expected based on previous experience administering the Project Director Survey to Healthy Start grantees in the Phase I evaluation of the National Healthy Start Program. During the Phase I evaluation, a 99% response rate was achieved over a four-month period using both electronic and paper submission. At that time, we did not have the benefit of regular communications with the grantees, and advances in web technology have since made web-based surveys more accessible and user-friendly. The mechanisms through which we will engage grantees and support their completion of the survey are described under sections 2 and 3 below.


2. Information Collection Procedures


Data collection procedures have been designed to maximize timely response, reduce burden to the grantees, and promote accuracy and completeness of responses. Specifically, the following steps are planned:


i. Approximately 1 month prior to survey launch, an advance notification (Attachment F) will be emailed to all grantees via the existing MCHB Listserv notifying them that they will soon be asked to fill out the survey. The advance notification will clearly explain the purpose of the study and stress the importance of obtaining accurate information from each project. The memo will also include the dates and times for 3-4 webinars (see step ii below) to explain how to fill out the survey and to answer any questions from the grantees.


ii. Approximately 2 weeks prior to survey launch, the research team will host webinars to review the content and functionality of the survey. It will be strongly suggested that each Project Director (or his/her designee) participate in one of the webinars.


iii. The survey will “go live” on the web within two weeks after the last webinar. Instructions and tips for completion will be available for reference via the existing MCHB Listserv and updated throughout the data collection period as needed. A toll-free phone number (Helpdesk) and email address will be made available for requesting additional assistance. This information will also be provided via the existing MCHB Listserv.


iv. Approximately 2 weeks after the survey is launched, the research team will send email reminders to grantees that have not completed the survey. These email reminders will be repeated at 4 weeks and again at 6 weeks after survey launch.


v. Those grantees that have not completed the survey within 8 weeks will receive a phone reminder from the research team.


vi. Approximately 10 weeks after survey launch, members of the research team will contact non-responding grantees by phone and email to offer and schedule a “phone-assisted” survey completion option. In this phone-assisted method, the research team member will initiate a “Go-To-Meeting” webinar where the grantee can see the researcher’s screen that will contain the online survey. During the session, the researcher will key in the responses for the grantee as they communicate by phone – question by question. More than one session may be needed if the grantee has to follow-up with information or does not have time to complete the survey in a given timeframe.


vii. All surveys will be completed within 16 weeks following the launch of the survey.



3. Methods to Maximize Response Rates


Based on previous experience working with the Healthy Start grantees and experience with web-based surveys, we expect to achieve a 99% response rate. We have implemented, and will continue to pursue, strategies to engage grantees and ensure a high response rate. These strategies include a pilot test of the paper survey conducted with 4 grantees to ensure that the survey content is understandable. We will also conduct a beta test of the survey in web format (as it is intended to be administered) with no more than 3-4 grantees to ensure that the technology is accessible and user-friendly. To assist grantees with any technical problems in accessing the web-based survey or questions about interpreting its content, the project team will respond within 24 hours to questions submitted via email or by phone. Follow-up is of the utmost importance in achieving the expected 99% response rate. To achieve this response rate, the research team will send out reminder emails, post reminders on the designated web portal, and make follow-up phone calls to non-respondents.


Every effort will be made to engage grantees early in the process as a means to achieving the desired response rate. The general timeline and areas of inquiry of the national evaluation were presented to all Healthy Start grantees at the annual National Healthy Start Association meeting in March 2010. The project team will hold an introductory webinar during which the survey and the survey process will be introduced to increase grantees’ comfort with completing the survey online. The goal in this introduction is to not only explain the survey questions but to build relationships with the grantees and reinforce the value of the research for our shared interest in maternal and child health. During this orientation webinar, we can track grantee participation to ensure that we reach each grantee with an introduction to the survey. We will also update grantees’ email addresses and track grantees that leave and find their replacement.


Following the training and orientation, we will employ several methods to track progress and follow up with grantees to ensure high response rates. Our team will send emails to each grantee with logon and password information and a link to the survey. Grantees simply click the link to open the survey. Grantee project directors (the primary respondents) can forward the link to other members of their team if they wish to assign them to fill in relevant portions of the survey. Our system will automatically track the status of each survey by recording the percent of questions completed and whether the survey is in process or has been officially submitted. The research team will monitor submissions and field questions. Two weeks after the survey launch and initial emails, the project team will send a second email with the survey link to remind grantees to begin their survey. Reminder emails will be sent to non-respondents at 4 and 6 weeks post survey launch.


By week 8 after the survey launch, the project team will begin focusing on grantees that have not begun their survey by calling grantee project directors to ensure that the initial email with the survey link was received and that the survey has not presented any problems. Based on past experience, this direct attention will prompt the majority of people to begin and complete the survey.


While the vast majority of the grantees we have worked with on similar efforts prefer using online surveys to submit responses, some individuals prefer completing a paper survey. (These individuals might have a very slow local Internet connection or simply dislike computers.) In these cases, the research team will accept mailed paper copies of the survey and key the responses in for the grantee and send them a copy of the entered data for confirmation. Printable versions of the survey will be made available in all communications and on the website.


Finally, in all large groups there are inevitably some organizations that have difficulty responding. In these cases, the research team will invite the grantee to participate in a phone-assisted survey session. In this session, the research team member will initiate a “Go-To-Meeting” webinar where the grantee can see the researcher’s screen displaying the online survey. During the session, the researcher will enter the grantee’s responses to survey questions while on the phone. In our experience, the invitation to this webinar alone often prompts survey submission.


In the rare case where none of the above strategies work, the HRSA/MCHB will work with the contractor to engage the grantees’ project officers and ask them to intervene.


4. Test of Procedures


Healthy Start grantees were invited to participate in a pilot of the draft survey in June 2010. Grantees were asked to complete the paper version of the survey in order to provide feedback on the wording and content of the questions, and the time and staffing resources needed to complete the survey. Four (4) grantees (approximately 4% of the 104 Healthy Start grantees) participated in the pilot. Their survey responses were not used for any analyses. Grantees were asked to provide feedback via conference call and/or via written notes on the survey.


No more than four (4) grantees (which may include pilot participants, if available) will participate in beta testing of portions of the web-based survey before it is made available to all grantees. They will be asked to provide feedback on the web interface and usability.


5. Statistical Consultants


The following individuals contributed to the survey and study design and will be involved in the interpretation and analysis of the findings:


Chanza Baytop, MPH, Dr.PH.

Project Director

Abt Associates Inc.

4550 Montgomery Avenue, Suite 800N

Bethesda, MD 20814

Deborah Klein Walker, Ed.D.

Principal Investigator

Practice Leader, Public Health & Epidemiology

Abt Associates Inc.

55 Wheeler St.

Cambridge, MA 02138


The Project Officer is:

David de la Cruz, Ph.D.

Division of Healthy Start and Perinatal Services Health Resources and Services Administration

Maternal and Child Health Bureau

5600 Fishers Lane

Rockville, MD 20857

(P) 301-443-6332

[email protected]



Attachment A: Legislation

Attachment B: Logic Model for the National Evaluation of the National Healthy Start Program

Attachment C: Healthy Start Program Goals and Outcomes

Attachment D: Survey Instrument

Attachment E: National Healthy Start Evaluation: Analytic Plan

Attachment F: Advance Notification Email to Grantees

1 Service components: direct outreach services and client recruitment, case management, health education services, screening and referral for perinatal depression, and interconceptional continuity of care through the infant’s second year of life.

2 Systems-building components: utilization of community consortia and provider councils to mobilize key stakeholders and advise local grantees, development of a local health system action plan, collaboration and coordination with Title V services, and development of a sustainability plan for continuation of services and project work beyond the grant period.

3 Devaney, J., Howell, E., McCormick, M., Moreno, L. “Reducing Infant Mortality: Lessons Learned from Healthy Start. Final Report.” Washington, DC: Mathematica Policy Research, July 2000.

4 Brand, A., Walker, D.K., Hargreaves, M. & Rosenbach, M. (2010). Intermediate outcomes, strategies and challenges of eight Healthy Start Projects. Maternal and Child Health Journal, 14, 654-665.

5 Rosenbach, M., Cook, B., O’Neil, S., Trebino, L., & Walker, D.K. (2010). Characteristics, access, utilization, satisfaction and outcomes of Healthy Start participants in eight sites. Maternal and Child Health Journal, 14, 666-679.

