Head Start Oral Health Initiative Evaluation

OMB: 0970-0314
RESPONSES TO QUESTIONS ON SUPPORTING JUSTIFICATION FOR OMB CLEARANCE OF DATA COLLECTION INSTRUMENTS FOR THE HEAD START ORAL HEALTH INITIATIVE EVALUATION

January 8, 2007


1. Outcomes/impact assessment: the supporting statement says that this study is intended to assess the process of implementation of this program, rather than assess its impacts. Does ACF intend to conduct an outcomes assessment? If so, when would that be taking place?


No, ACF does not intend to conduct an outcomes assessment of the Head Start Oral Health Initiative.


2. Sample

2a. Do the 52 Head Start programs in this study represent the universe of Head Start programs that were given grants for the Oral Health Initiative? Or is this a sample? If it’s a sample, how were the programs picked?


The 52 Head Start programs in this study represent the universe of Head Start programs that were awarded grants for the Oral Health Initiative. This is not a sample.


2b. Why does ACF plan to conduct site visits to only 16 of the 52 sites?


ACF plans to conduct site visits to a subset of 16 sites due to constraints on resources available for the evaluation and concerns about grantee staff burden associated with visiting all 52 grantees. We will, however, collect information from all 52 grantees via telephone interviews and the recordkeeping system.


2c. How will ACF select the sub-sample of service locations referenced on page 18?


Data from the Head Start Director telephone interview will be used in the RE-AIM (Reach, Effectiveness, Adoption, Implementation, and Maintenance) analytic model to evaluate early grant implementation and select a subset of high- and low-performing grantees to participate in the site visits (Glasgow et al. 1999; Dzewaltowski et al. 2006). The RE-AIM model evaluates multiple dimensions that contribute to overall public health impact and assesses the replicability of public health promotion interventions to encourage their dissemination.


Researchers developed the RE-AIM model by drawing on previous work in several areas of public health evaluation, including “diffusion of innovations,” “multi-level models,” and the “PRECEDE-PROCEED” model (Rogers 1995; Green and Kreuter 2005).1 RE-AIM extends this previous work in three main ways: (1) by focusing on the translation of research to practice, (2) by equally emphasizing internal and external validity issues and representativeness of diverse populations, and (3) by providing specific and standard ways of measuring key dimensions of public health impact and widespread application. Researchers have used RE-AIM to evaluate a range of public health interventions in such areas as encouragement of physical activity among children and adults and promotion of school health. RE-AIM has been cited as an evaluation framework in more than 40 articles published in well-respected public health journals.2


The RE-AIM framework facilitates analysis of public health promotion strategies at both the individual and the institutional levels as defined by the following dimensions:


  • REACH: the intervention’s reach into the target population

  • EFFECTIVENESS: the intervention’s effectiveness in modifying health risk

  • ADOPTION: the extent to which the intervention is adopted in the target setting

  • IMPLEMENTATION: the extent to which services are delivered with fidelity and at the desired level of intensity

  • MAINTENANCE: the extent to which the intervention and its impact on participants are maintained over time

To conduct the RE-AIM analysis, we will create measures for assessing grantees’ performance on each RE-AIM dimension. To facilitate comparison across grantees, the measures will be quantitative, primarily percentages or ratios. In addition, a few of the measures will draw on qualitative information from telephone interviews. We will create quantifiable measures from these data by rating various aspects of grantee activities, such as the extent to which grantees have implemented key components of the initiative.3 Table 1 presents our initial set of measures to use in the RE-AIM analysis.


Once we have collected the necessary data, we will rank grantees within and across the RE-AIM dimensions (Figure 1). To begin, we will calculate each measure for each grantee. Next, we will rank grantees from highest to lowest according to their scores on each measure. If two or more grantees receive the same result on a measure, they will share the same ranking for that measure. We will then average these rankings to calculate an average rank score for each of the five dimensions and convert the average rank scores into scaled scores ranging from 0 to 100. The scale will depend on the number of distinct rankings within each dimension; in most cases the scale will be divided into increments of 1.92 (100 divided by 52), which assumes that all 52 grantees receive an individual ranking. If two or more grantees have the same average ranking, we will adjust the scale to accommodate the number of distinct rankings in the measure. Finally, we will average the scaled scores across all dimensions to create a composite RE-AIM score and rank the composite scores from highest to lowest to compare performance across all 52 grantees.
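For illustration only, the following sketch (in Python) shows one way the ranking, scaling, and composite-scoring steps described above could be carried out. The grantee names, measure values, and the convention for anchoring the 0-100 scale are hypothetical assumptions for this example; the actual measures will come from the recordkeeping system and telephone interviews, and ties will require adjusting the increments as noted above.

    from collections import defaultdict

    # Hypothetical input: values for two Reach measures; the other four RE-AIM
    # dimensions would be added in the same structure.
    measures_by_dimension = {
        "Reach": {
            "pct_target_children_enrolled": {"Grantee A": 85.0, "Grantee B": 62.0, "Grantee C": 74.0},
            "pct_minority_children_enrolled": {"Grantee A": 58.0, "Grantee B": 41.0, "Grantee C": 55.0},
        },
    }

    def rank_grantees(measure_values):
        # Rank from highest to lowest; grantees with the same value share a rank.
        ordered = sorted(set(measure_values.values()), reverse=True)
        value_to_rank = {value: rank for rank, value in enumerate(ordered, start=1)}
        return {grantee: value_to_rank[value] for grantee, value in measure_values.items()}

    def dimension_average_ranks(dimension_measures):
        # Average each grantee's ranks across all measures in one dimension.
        totals, counts = defaultdict(float), defaultdict(int)
        for measure_values in dimension_measures.values():
            for grantee, rank in rank_grantees(measure_values).items():
                totals[grantee] += rank
                counts[grantee] += 1
        return {grantee: totals[grantee] / counts[grantee] for grantee in totals}

    def scale_average_ranks(average_ranks):
        # Convert average ranks to a 0-100 scale; with 52 distinct rankings the
        # increment is about 1.92 (100 / 52). The exact anchoring of the best and
        # worst scores is an assumption made for this sketch.
        distinct = sorted(set(average_ranks.values()))   # best (lowest) average rank first
        increment = 100.0 / len(distinct)
        position = {value: i for i, value in enumerate(distinct)}
        return {grantee: 100.0 - position[value] * increment
                for grantee, value in average_ranks.items()}

    scaled = {dim: scale_average_ranks(dimension_average_ranks(measures))
              for dim, measures in measures_by_dimension.items()}

    # Composite RE-AIM score: average of the scaled scores across dimensions.
    grantees = next(iter(scaled.values()))
    composite = {g: sum(scaled[dim][g] for dim in scaled) / len(scaled) for g in grantees}
    print(sorted(composite.items(), key=lambda item: item[1], reverse=True))

In practice, the same routine would be run over all five dimensions for the 52 grantees, and the resulting composite ranking would feed the site-selection step described in the following paragraphs.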


The grantee rankings will enable us to determine which grantees have strong performance in particular aspects of the intervention and to identify grantees that demonstrate strong performance across multiple RE-AIM dimensions. In addition, by triangulating these rankings with qualitative data from the telephone interviews, we will begin to understand the early successes and challenges associated with providing oral health services to Head Start families. For example, analysis of recordkeeping system data may suggest that a grantee is performing strongly in the Reach dimension but is less successful in the Effectiveness dimension. During the telephone interview with this grantee’s director, we might learn that the high Reach ranking is the result of effective outreach strategies to engage Head Start families, while the low Effectiveness ranking is due to the grantee’s challenges in delivering oral health services to families at the levels of intensity intended or in establishing referral systems for oral health treatment services.


Ultimately, the RE-AIM analysis will result in the selection of 16 grantees—12 high-performing and 4 low-performing—to participate in the site visits. In selecting high-performing grantees, our goal will be to identify grantees that have both high composite RE-AIM scores and strong performance across multiple RE-AIM dimensions. Because the composite score is an average, it is possible for a grantee to have both extremely high and extremely low scaled scores on individual RE-AIM dimensions. To adjust for this, we will develop a flag indicating whether a grantee’s average rank scores are above the median for at least 3 of the 5 dimensions. We will use this indicator when selecting high-performing grantees for site visits to ensure that the selected grantees demonstrate success in multiple dimensions.
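A minimal sketch of the multi-dimension flag and the selection step, continuing the example above, might look as follows. The function names and the use of scaled (higher-is-better) scores are assumptions for illustration only, and the subgroup considerations described in the next paragraph would further shape the final list.

    from statistics import median

    def flag_broad_performers(scaled_by_dimension, min_dimensions=3):
        # Flag grantees scoring above the median in at least `min_dimensions`
        # of the five RE-AIM dimensions.
        medians = {dim: median(scores.values()) for dim, scores in scaled_by_dimension.items()}
        grantees = next(iter(scaled_by_dimension.values()))
        return {g: sum(scores[g] > medians[dim]
                       for dim, scores in scaled_by_dimension.items()) >= min_dimensions
                for g in grantees}

    def select_site_visit_candidates(composite, flags, n_high=12, n_low=4):
        # Illustrative selection: the strongest flagged grantees by composite score
        # plus the weakest grantees overall.
        ranked = sorted(composite, key=composite.get, reverse=True)
        high = [g for g in ranked if flags.get(g)][:n_high]
        low = [g for g in ranked if g not in high][-n_low:]
        return high, low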

In addition to the RE-AIM analysis, site selection will also reflect the variety of contexts in which Head Start, Early Head Start, and Migrant/Seasonal Head Start programs operate. Thus, our site selection process will balance the goals of visiting a variety of programs and collecting information on promising practices relevant to specific hard-to-serve populations and community contexts. To accomplish these goals, ACF will identify subgroups of grantees of interest, such as those located in rural and urban communities; those serving special populations, such as Native Americans, migrant farmworkers, and English language learners; or other subsets of programs with particular characteristics. Using a process similar to the one used for ranking all grantees, we will rank grantees within the agreed-upon subgroups.


2d. If ACF will conduct an outcomes assessment at some later date, wouldn’t it be important and useful to randomly select a sample of children rather than surveying everyone at select service locations?


ACF does not plan to conduct an outcomes assessment of the Head Start Oral Health Initiative.


3. The Structure of the Program


3a. What is the relationship between the types of respondents ACF intends to interview, i.e. grantee directors, key staff, and community partners? What are their respective responsibilities in this initiative?


Interview respondents include staff who work directly on the Head Start Oral Health Initiative and staff from other community organizations with which the Head Start agencies have formed partnerships to operate the Oral Health Initiative. Below, we describe the roles of the grantee director, key staff, and community partners:


  • Grantee Directors. Grantee directors are the designated directors of the Head Start programs that received Oral Health Initiative grants. These directors are typically responsible for all Head Start operations and staff supervision; some may also oversee other early childhood education programs. We will interview all 52 grantee directors by telephone as soon as possible after receiving OMB clearance, and we will interview the directors of the 16 grantees selected for site visits during the visits.

  • Key Staff. In some grantee sites, a coordinator may be designated to oversee grant operations, and possibly to supervise other staff members who work on the Oral Health Initiative. Grantees may have hired staff, such as oral health advocates or family service workers, to work specifically on the initiative. Other grantees may assign some duties associated with the Oral Health Initiative to existing staff such as home visitors, center teachers, and family service workers. During site visits, we will interview these staff members—either individually or in small groups—about the characteristics and needs of families and children served by the initiative, service provision, implementation experiences, successes and challenges, and lessons learned.

  • Community Partners. All grantees will work with a range of community partners to deliver clinical and non-clinical oral health services to enrolled children and pregnant women. For example, grantees partner with dentists, dental hygienists, and other oral health care providers; pediatricians, OB/GYNs, and other health care providers; WIC agencies; and dental and dental hygienist schools. Depending on the number and types of partners involved in grantee initiatives, we will conduct individual or small group discussions with community partners during site visits to learn about their roles in the Oral Health Initiative and their implementation experiences.

3b. Was this Oral Health Initiative designed very openly to provide programs maximum flexibility to design innovative strategies appropriate to their community needs? Or were there specific design issues or programmatic aims that every grantee is expected to meet?


The Office of Head Start designed the Oral Health Initiative to provide programs with maximum flexibility to design innovative strategies appropriate to the needs of their communities and the target population of children and families to be enrolled in the program.


4. Program Recordkeeping System


4a. How will the staff inputting the data know when enrollees receive services?


As part of the Oral Health Initiative, some grantees are providing some services—such as education on oral health, provision of oral health supplies, and oral exams—directly. Others arrange for children and pregnant women to receive the oral health care services they need through referrals to community partners and other community service providers. When referrals are made, program staff typically follow up to determine whether the services were provided and whether follow-up services are needed. Program staff usually record this information in case records. Depending on how grantees decide to organize the data entry process, family service workers who track services received by particular families may enter those data directly, or administrative staff may use case records to enter the information.


4b. Is there a way to go back through existing records to establish a “baseline” that can be used to compare utilization rates at the end of the 2-year period?


While ideally we would like to obtain service use information for the full Oral Health Initiative implementation period, we are concerned about the burden associated with requiring grantees to retrieve and compile these records. In addition, we do not know the extent to which grantees are collecting consistent service use information across sites.


5. Will the interviews, focus groups, and site visits utilize audio-recording? Doing so could improve the reliability of the data.


Yes. We will audiotape site visit interviews and focus groups unless a respondent specifically asks not to be recorded.


6. Analysis and Reporting of Findings


6a. The “standard format” referenced on page 3 of the supporting statement should be developed and submitted to OMB.


The format we will use to write up notes from the telephone interviews (referenced on page 3) is attached as Exhibit 1.


6b. As ACF is probably aware, the use of qualitative analysis software can only aid in the analysis of data: it cannot be a substitute. Therefore, besides the use of the qualitative analysis software package, how does ACF intend to analyze the qualitative data? What analytical framework will be used (e.g. grounded theory)?


As described in the response to Question 2c above, we will use the RE-AIM analytic model as an organizing framework for our analysis. That response describes in detail how we will use information collected during the telephone interviews to construct a series of measures for the RE-AIM analysis.


In addition to constructing these measures, once all interview reports have been coded, we will conduct searches using Atlas.ti to retrieve data on our research questions and subtopics. We will analyze these data both within and across sites to identify common themes that emerge across sites and within subgroups of sites as well as patterns of service delivery, staffing, and other program dimensions. We will also explore relationships across themes—for example, the kinds of implementation challenges sites face and their staffing patterns and partnership arrangements. Within sites, we will use descriptive information about various aspects of the program to develop site profiles.


To facilitate analysis of common themes and patterns across subgroups of sites, we will also code the site reports according to key site-level characteristics. We will create these codes based on information obtained during the interviews and from the program recordkeeping system. For example, we may want to group sites according to types of program models or types of partnering arrangements with other community services providers. Likewise, we may want to group sites according to the populations of children and families they serve, such as migrant farm workers, Native Americans, English-language learners, pregnant women, and other groups of interest. Creating these subgroups will enable us to compare, for example, staff reports about caregivers’ receptivity to pilot services for different types of programs.


Our analysis of the site visit data will focus on identifying implementation lessons and promising practices used by the grantees visited within each of the RE-AIM dimensions (see response to Question 2c). As a first step in analyzing the site visit data, we will use a strategy similar to that described for the telephone interview data—create site visit reports, load the reports into Atlas.ti, code the reports according to our primary research questions, and then retrieve data on specific topics and themes for the analysis.


The next step in analyzing the site visit data will be to identify implementation approaches and the strategies associated with them. For example, we may identify parent education as a common approach used by grantees. The strategies to educate parents may include (1) classes conducted at the center during the school day, (2) classes conducted at the center in the evening, (3) one-on-one education in the parents’ homes, and (4) written materials provided to parents (see Table 2). After identifying these approaches and strategies, the research team will systematically code the site visit reports to identify all grantees using the approaches and strategies. Next, the research team will identify the number of grantees using each of the identified approaches and strategies and compare the use of each strategy across high- and low-performing sites, if both types of sites use it. In addition, the research team will note any additional qualitative information deemed important for determining whether a strategy should be classified as “promising.” We will then compare the qualitative data with relevant quantitative data available in the program recordkeeping system. For example, for the parent education illustration above, researchers will assess the percentage of parents receiving oral health education at each of the relevant grantees. We could calculate the average percentage of parents receiving education among grantees using a specific strategy, which would provide a quantitative indicator of how well various strategies work.


We will then use this information to assess whether a strategy is deemed “promising.” A set of consistent rules will be applied during this assessment step (see Figure 2). If only high-performing grantees use an identified strategy and the available quantitative data suggest that the strategy works, we will identify the strategy as promising. If no high-performing grantee uses an identified strategy and the available quantitative data do not suggest that the strategy works, we will not identify the strategy as promising. Many of the identified strategies will probably not follow the above rules and will require further assessment by the research team to determine whether they are promising. In these cases, we will first assess whether the qualitative and quantitative data agree. If they do not, we will try to determine the reason. If we can identify the reason and then believe that one of the data sources should receive more weight, we will consider using this rationale to identify a practice as promising.


Another likely scenario is that the strategy is used by both high- and low-performing grantees. In these cases, we will consider the ratio of high- to low-performing sites using that strategy. If more than 75 percent of the grantees using the strategy are classified as high-performing and the quantitative data suggest that the strategy works, we will identify the strategy as promising. We plan to use the 75 percent threshold because it aligns with the ratio of high- to low-performing sites selected for site visits (12 of the 16 sites visited will be high-performing). If more than 75 percent of the grantees using the strategy are classified as high-performing but the quantitative data do not suggest that the strategy works, we will next try to understand why the discrepancy is occurring. Based on what we identify, we will determine whether the practice is promising.
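To illustrate, the following sketch encodes the decision rules from the two preceding paragraphs. The inputs (counts of high- and low-performing grantees using a strategy, and a single yes/no quantitative indicator) and the return labels are simplifications assumed for this example; cases labeled for further review would go back to the research team as described above.

    def classify_strategy(high_using, low_using, quantitative_support):
        # Apply the consistent rules for labeling an identified strategy.
        total = high_using + low_using
        if total == 0:
            return "needs further review"        # strategy not observed at any visited site
        if low_using == 0 and quantitative_support:
            return "promising"                   # used only by high performers; data agree
        if high_using == 0 and not quantitative_support:
            return "not promising"               # used by no high performers; data agree
        if high_using / total > 0.75 and quantitative_support:
            # The 75 percent threshold mirrors the 12-of-16 high-performer share of site visits.
            return "promising"
        return "needs further review"            # discrepant or borderline cases go to the team

    # Example: 4 high-performing and 1 low-performing grantee use evening parent
    # classes, and the recordkeeping data suggest the strategy works.
    print(classify_strategy(high_using=4, low_using=1, quantitative_support=True))  # "promising"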


6c. Will ACF be generalizing the findings from this analysis beyond the 16 sites? Will ACF be pooling information across the 16 sites or will each site be treated as an individual “case study?”


As described in our response to Question 6b, we will pool the information collected during the site visits for the purposes of identifying promising practices. This analysis will identify practices that appeared promising in the 16 sites visited. The evaluation design does not permit us to make statements about whether the practices will be effective in other Head Start programs. Therefore, we will refer to the practices as “promising,” rather than “effective.”


6d. Since the supporting statement expressly considers this study a process assessment rather than an outcomes assessment, will ACF be reporting any outcomes information in reports to Congress or to scholarly journals?


No. ACF will not collect oral health outcomes information as part of the evaluation.


6e. How will ACF measure “the effectiveness of grantees in reaching their target populations?” (page 17)


As described in the response to Question 2c, we will use the recordkeeping system and telephone interview data to construct measures for each of the five RE-AIM dimensions. To assess grantees’ effectiveness in reaching their targeted populations, we will construct the following measures for the “reach” dimension (a simple computational sketch follows the list):


  • Percentage of target children enrolled (children who have received at least one service)

  • Percentage of children enrolled, by age categories

  • Percentage of minority children enrolled

  • Percentage of targeted pregnant women enrolled (if grantee targets pregnant women)
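As a simple illustration, the Reach measures above reduce to percentage calculations over counts from the recordkeeping system. The field names and example numbers below are hypothetical; in particular, whether the age-group percentages are taken over enrolled or targeted children would be determined when the measures are finalized.

    def percentage(numerator, denominator):
        # Return a percentage, or None when the denominator does not apply.
        return 100.0 * numerator / denominator if denominator else None

    counts = {                                   # hypothetical counts for one grantee
        "targeted_children": 200,
        "children_enrolled": 150,                # received at least one service
        "minority_children_enrolled": 75,
        "targeted_pregnant_women": 40,           # 0 if the grantee does not target pregnant women
        "pregnant_women_enrolled": 25,
    }

    reach_measures = {
        "pct_targeted_children_enrolled": percentage(counts["children_enrolled"],
                                                     counts["targeted_children"]),
        "pct_minority_children_enrolled": percentage(counts["minority_children_enrolled"],
                                                     counts["children_enrolled"]),
        "pct_targeted_pregnant_women_enrolled": percentage(counts["pregnant_women_enrolled"],
                                                           counts["targeted_pregnant_women"]),
    }
    print(reach_measures)                        # 75.0, 50.0, and 62.5 for this example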

7. Confidentiality: Does ACF have the statutory authority to provide assurances of confidentiality? If so, please cite it. If not, the term “confidential” should not be used: alternative words like “private” or “data safeguarding” should be used instead. Also, will ACF ever house or own identifiable data? Or will MPR strip all data of identifiable information before sending it to ACF? This should be spelled out in the supporting statement, as well as in contracts entered into between ACF and MPR.


ACF does not have the statutory authority to provide assurances of confidentiality. We will not use the term “confidential” for this data collection. Revised interview guides with the term “confidential” removed are attached as Exhibit 2. MPR will strip all data of identifiable information before transmitting them to ACF.


8. ICs


8a. It seems to me that the most important questions, given the stated research aims, are in the “early implementation experiences” section, and yet this section is allocated only 10 minutes. At the same time, 20 minutes are spent going through information that could be collected from the grantee’s application. A lot of questions also appear to be “yes/no” questions which do not require semi-structured interview techniques. Wouldn’t it be a better use of everyone’s time to confirm the background information and collect answers to yes/no questions through a postal survey and then reserve the interview to probe further into those questions that really require an interview format?


We considered conducting a mail survey to confirm and update information from the grantee applications. However, we decided not to pursue this option, because the small amount of information we would collect through a mail survey did not justify the cost and effort involved in conducting it.

To address the concerns raised by OMB, we will send, prior to conducting the interview, an advance letter that provides an overview of the topics we plan to discuss and contains information from the grantee’s proposal that we would like to verify and update (attached as Exhibit 3). By limiting questions about information contained in the proposal to updates, we can reduce the “grantee characteristics” section of the interview to 5 minutes and increase the “early implementation experiences” section to 15 minutes (the revised telephone interview guide is included in Exhibit 2).


8b. Relatedly, interviewees may not have the type of factual information ACF is requesting at their fingertips (e.g. “how many families does your agency service annually?”). Answering these types of questions will probably require the interviewee to look up information or sift through data. These are the types of questions that are better handled through a postal survey rather than an interview, where interviewees will feel put on the spot.


We will include these items in the advance letter (Exhibit 3) so that grantees can have the information available for the telephone interview.


8c. Race: the collection of information on race and ethnicity should conform to OMB guidelines. Information on ethnicity (i.e. Hispanic origin) should be asked first, and then followed by questions on race (i.e. White, Black, etc.)


In the recordkeeping system, we will ask about ethnicity first, and then race. The wording of the questions will conform to OMB guidelines.



1 The Robert Wood Johnson Foundation funded the development of the RE-AIM model and its accompanying website (RE-AIM.org), which serves as a clearinghouse for information related to the model.

2 These journals include the American Journal of Public Health, Annals of Behavioral Medicine, American Journal of Preventive Medicine, Journal of School Health, Journal of the American Medical Association, and Preventive Medicine.

3 When using measures based on ratings, the interviewer and two other senior team members will independently rate each grantee and then discuss any discrepancies across raters to reach a consensus rating.

