OMB: 0970-0415

Head Start Health Managers Descriptive Study


Supporting Statement Part B

For OMB Approval


March 6, 2012

Updated: November 5, 2012

B. Collection of Information Employing Statistical Methods



B.1. Respondent Universe and Sampling Methods


In this section, an overview of the respondent universe and study population for each data collection component is provided. The relevant procedures for identifying the study population and the data collection procedures are discussed. There are no unusual problems requiring specialized sampling procedures.


B.1.1. Overview of Respondent Universe, Study Population, and Expected Response Rates


This study will collect information through four components in two modes of data collection: (1) an online survey of the universe of Head Start directors and another online survey of the universe of Head Start health managers; and (2) a set of semi-structured interviews with a subset of Head Start health managers and another set of semi-structured interviews with teachers, family service workers, and home visitors.


The administration of the structured online survey to the universe of Head Start programs (directors and health managers) is necessitated by the expectation of a high degree of variability across Head Start programs in the health issues they face, how they structure their health services area including staffing, and the approaches to planning for and implementing health-related services and programs (see Section A.1). At this time, it is not possible to determine what attributes of Head Start programs (e.g., size, location, demographics of the children and families served, staffing model, health manager background, relationships with community providers) are associated with variation in the health services area. Thus, there is not sufficient information to establish a sampling frame that would capture important subgroups of Head Start programs (e.g., subgroups defined by the health services area staffing model, the organizational model for the Health Services Advisory Committee, or the nature of the partnerships with community-based providers) or that would adequately represent the resulting breadth and depth of the health services area across Head Start programs and the full array of challenges that programs face. If a sampling frame were to be created with the limited information that currently exists, the frame is presumed to be insufficient for providing a representative sample of important subgroups. For this reason, a sample-based approach would severely limit the usefulness of the data collection effort for the Office of Head Start. In addition, to support the technical assistance efforts of the Office of Head Start, it will be important to examine variation among Head Start program subgroups. Therefore, the universe of Head Start grantees and delegates will be asked to participate in the Head Start Health Managers Descriptive Study.


At the same time, to ensure that respondent burden is reasonable, the online health manager survey will be limited to 75 minutes on average, including an estimated 15 minutes to gather the information needed to complete the survey, a constraint that limits the ability to collect more in-depth information about the health services area. Thus, the two sets of semi-structured interviews with health managers, teachers, family service workers, and home visitors are designed to allow for a more complete exploration of program approaches, successes, and challenges, and thereby provide greater insight into these issues than can be learned through a more structured online survey. In sum, combining a reasonably comprehensive but focused structured survey that covers the universe of programs with a more in-depth semi-structured interview format for a small group of stakeholders will maximize the usefulness of the information collected. This approach will best inform the Office of Head Start in its efforts to design effective technical assistance, identify strategies for capacity building and programmatic improvement, and determine the information gaps that can be addressed in future data collection efforts.


With this overview, the respondent universe and sampling methods for the semi-structured interviews are described below. Exhibit B.1.1 summarizes this information for each data collection component.


Exhibit B.1.1 Respondent Universe, Study Population, and Expected Response Rate by Data Collection Component

Head Start Director Survey
  Respondent Universe: Head Start directors in programs administered by approximately 2,900 grantee and delegate agencies
  Study Population: Respondent universe
  Expected Response Rate: 90–100%

Head Start Health Manager Survey
  Respondent Universe: Head Start health managers in programs administered by approximately 2,900 grantee and delegate agencies
  Study Population: Respondent universe
  Expected Response Rate: 90–100%

Head Start Health Manager Interviews
  Respondent Universe: Head Start health managers in programs administered by approximately 2,900 grantee and delegate agencies
  Study Population: Quota sample of 40 health managers that completed the survey
  Expected Response Rate: 90–100%

Head Start Teacher, Family Service Worker, and Home Visitor Interviews
  Respondent Universe: Head Start teachers, family service workers, and home visitors in programs administered by approximately 2,900 grantee and delegate agencies
  Study Population: Sample of 60 teachers, family service workers, and home visitors in programs where the health manager completed the survey
  Expected Response Rate: 90–100%


Online Surveys. For the two online surveys, the universe of Head Start directors will be identified through the most recent Head Start Program Information Reports (PIR). The PIR database provides contact information for the program director for all 2,870 Head Start grantees and delegates, according to the 2010-2011 PIR. Each director will be contacted by email, with follow-up contact as needed, and will be asked to complete a brief survey that covers basic information about the health services area in their Head Start program. The director will also be asked to provide the name(s) and contact information for the health manager of their program. This information will then be used to establish the universe of Head Start health managers for the second online survey. With this information, each health manager will receive an email with an invitation to participate in the study (recruitment scripts can be found in Appendix K). Based on other recent surveys of Head Start program administrators, as discussed further below, response rates for the online surveys are expected to be in the range of 90 to 100 percent.


Semi-Structured Interviews. For the second component, semi-structured interviews will be conducted with 40 Head Start health managers and 60 Head Start teachers, family service workers, and home visitors. The sampling approach for this component is discussed in B.1.2.


All four study components are expected to have a high response rate based on the response rates obtained in other recent surveys of administrators and other staff in Head Start programs. The 1993-1995 Descriptive Study of Head Start Health Services sampled 80 Head Start centers for participation and all 80 centers participated in the study.1 The 2008 Study of Healthy Activity and Eating Practices and Environments in Head Start (SHAPES) surveyed the universe of 1,810 Head Start program directors (excluding those in U.S. territories) and obtained an 87% response rate.2 As another example, a study of oral health services conducted in 2005 for the universe of 18 Early Head Start programs in North Carolina obtained a 100% response rate for program directors and health managers and a 98% response rate for the program staff.3


B.1.2. Statistical Methodology for Stratification and Sample Selection and Degree of Accuracy Needed


Online Surveys. In order to reduce the burden on health manager respondents, we divided the Health Manager survey instrument into a set of Core Questions and Supplemental Questions as shown in Exhibit B.1.2 below, separately for each module. (Appendix M provides a list of the survey questions and marks which items are included in the core and in the supplement.)


As shown in the table, there are a total of 66 questions in the Core questionnaire and 57 questions in the Supplemental questionnaire. We plan to divide the supplemental questions into four modules and, within the strata below, randomly assign each respondent to one of the four supplemental modules (S1 to S4 in the table). Thus, each supplemental question will be administered to approximately 25 percent of the sample of respondents. Supplemental questions in the same module will be grouped together and be administered as part of the survey in the current order to maintain the flow of the instrument (a feasible approach with an online instrument). By administering supplemental questions in a given module together, responses across questions on the same topic can be analyzed together for the random subset of respondents who answered those questions.


The supplemental questions will be randomly assigned at a 25% sampling rate within strata defined by the following four program characteristics to ensure that the samples responding to each supplement are representative of the overall population of health managers:

  • Special population program (migrant and seasonal [MS], American Indian and Alaska Native [AIAN], or standard program),

  • Program type (EHS or EHS/HS versus HS only),

  • Program size of grantee or delegate agency (three levels), and

  • Percent dual language learners (three levels).

This stratification will yield 54 groups (3 × 2 × 3 × 3); within each group, respondents will be randomly assigned in equal 25% shares to each of the four supplemental modules.
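The within-stratum assignment described above can be sketched as follows (an illustrative Python sketch, not the study's actual implementation; the field names and the round-robin assignment scheme are assumptions):

```python
import random
from collections import defaultdict

def assign_supplements(respondents, seed=0):
    """Randomly assign each respondent to one of four supplemental modules
    (S1-S4) within strata defined by the four program characteristics.
    Field names below are hypothetical."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for r in respondents:
        key = (r["special_population"], r["program_type"],
               r["size_level"], r["dll_level"])
        strata[key].append(r)
    for members in strata.values():
        rng.shuffle(members)
        # Deal shuffled members round-robin so each module receives
        # approximately 25% of every stratum.
        for i, r in enumerate(members):
            r["supplement"] = f"S{i % 4 + 1}"
    return respondents
```

Because assignment is balanced within every stratum, each supplemental module ends up with a roughly representative 25% cross-section of programs.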







Exhibit B.1.2 Number of Survey Items in Health Manager Instrument by Core/Supplement and Module

           Core     Total Supplemental   Suppl. 1   Suppl. 2   Suppl. 3   Suppl. 4
           Items          Items
Module 1     19            13               13          0          0          0
Module 2      4             7                0          7          0          0
Module 3     15            17                0          0         13          4
Module 4      6            11                0          0          0         11
Module 5      0             3                0          3          0          0
Module 6      2             4                0          4          0          0
Module 7     20             2                2          0          0          0
Total        66            57               15         14         13         15


Although the study is designed as a descriptive analysis, we conducted a power analysis to assess whether the available sample sizes allow the detection of meaningful differences between groups in the core module outcomes as well as in the supplemental modules. The analysis assumes 80% power and a 0.05 significance level. Given the different comparisons that will be conducted with the data, we calculated minimum detectable effect sizes both for comparisons where the sample is evenly split between the groups (e.g., small versus large programs with a 50-50 split of the sample) and for comparisons where the imbalance between the groups is as large as a 70-30 split (e.g., programs defined by the percentage of dual language learners).


For the analysis of continuous variables (e.g., the number of hours worked by the health manager), the detectable effect size δ is defined as the difference in mean outcomes between the two groups divided by their pooled standard deviation:

δ = (μ1 − μ0) / σ

where μ1 and μ0 are the average outcomes in the two groups and σ is their pooled standard deviation. Cohen (Statistical Power Analysis for the Behavioral Sciences, 1987) defines δ = 0.2 as a small effect size, δ = 0.5 as a medium effect size, and δ = 0.8 as a large effect size. Similarly, for dichotomous outcomes (e.g., a yes/no outcome such as whether or not the health manager has a bachelor's degree), since the interest is mainly in the proportion answering "Yes" to a question, the detectable effect size g is defined as the arcsine distance between the proportion p1 of "Yes" responses in one group and the proportion p0 in the other group:

g = 2 arcsin(√p1) − 2 arcsin(√p0)

Again, the rule of thumb defined in Cohen (1987) sets g = 0.2 as a small effect size, g = 0.5 as a medium effect size, and g = 0.8 as a large effect size. Because the effect size for a dichotomous variable varies with the base probability p0, this analysis is done for both p0 = 50% (the conservative case) and p0 = 80%.


Exhibit B.1.3 below reports the minimum detectable effect sizes for the different modules and sample split options.


Exhibit B.1.3 Minimum Detectable Effect Size and Differences for Core and Supplemental Modules for Different Sample Split Options

                                  Continuous        Categorical outcome
                                   outcome    Group 1 prob. = 50%   Group 1 prob. = 80%
Module   Total      Sample         Effect     Effect   Detectable   Effect   Detectable
         sample     split           size       size    difference    size    difference
Core     2,800      1400 - 1400    0.1059     0.1060      5.3%      0.1062      4.1%
                     840 - 1960    0.1156     0.1157      5.8%      0.1150      4.4%
Suppl.     700       350 - 350     0.2121     0.2120     10.6%      0.2125      7.8%
                     210 - 490     0.2314     0.2311     11.5%      0.2285      8.3%


When using outcomes from the core module, where all 2,800 health managers can be used for analysis, a very small minimum effect size (around 0.1) will be detectable for continuous as well as dichotomous outcomes. This translates into a detectable difference of 5.3 to 5.8 percentage points relative to the comparison group in analyses where the outcome prevalence is 50% (the conservative scenario). For comparison group outcomes with an 80% prevalence, this is equivalent to a detectable difference of 4.1 to 4.4 percentage points. For the supplemental modules, where 700 observations will be available, small effect sizes of around 0.2 will be detectable for all outcomes, equivalent to a detectable difference of 10.6 to 11.5 percentage points at a 50% prevalence and 7.8 to 8.3 percentage points when the probability in the comparison group is 80%.

Overall, these sample sizes will allow us to detect very small to small effect sizes in all cases.
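The figures in Exhibit B.1.3 can be reproduced with a standard normal approximation (a cross-check sketch, not the study's actual code, assuming a two-sided test at the stated 0.05 significance level and 80% power): the minimum detectable effect size for a two-group comparison is (z(1 − α/2) + z(power)) × √(1/n1 + 1/n2), and the detectable difference for a proportion is obtained by inverting the arcsine effect size.

```python
import math
from statistics import NormalDist

def mdes(n1, n2, alpha=0.05, power=0.80):
    """Minimum detectable effect size for a two-group, two-sided z-test."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * math.sqrt(1 / n1 + 1 / n2)

def detectable_difference(p0, g):
    """Smallest detectable p1 - p0 at baseline p0, inverting the arcsine
    effect size g = 2*asin(sqrt(p1)) - 2*asin(sqrt(p0))."""
    p1 = math.sin(math.asin(math.sqrt(p0)) + g / 2) ** 2
    return p1 - p0

# Core module, even 1400-1400 split: effect size near 0.106 and a
# detectable difference near 5.3 points at a 50% baseline, consistent
# with the exhibit above.
```

The same two functions reproduce the supplemental-module rows by substituting the 350-350 and 210-490 splits.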


Semi-Structured Interviews. The semi-structured interviews are designed to explore, in more depth, the issues raised in the web-based survey in order to provide more insight into specific topics. To ensure that a range of perspectives are represented, we will undertake purposive sampling of Head Start health managers from among the population of health managers that respond to the Head Start Health Manager Survey. Likewise, we will construct a purposive sample of Head Start teachers, family service workers, and home visitors from among the Head Start programs represented in the respondents to the health manager interviews.


The general approach to sampling for the semi-structured interviews will be to sample within Head Start program subgroups, where we expect to define the subgroups based on the following characteristics listed in order of priority:

  • Staffing model for the health services area

  • Special populations served (e.g., American Indian and Alaska Native, migrant seasonal, programs with relatively larger shares of homeless children, children in foster care, and children living with disabilities);

  • Head Start program model (i.e., center-based Early Head Start and Head Start and home-based Early Head Start and Head Start); and

  • Urban versus rural location.

Since the goal of the interviews is to provide further insight into the responses obtained through the online survey, the sampling plan for the interview sites, including the selection of any additional program characteristics needed to ensure variability on key issues identified through data analysis of the survey, will be finalized after the online survey has been in the field for 45 days. Responses from the survey at that time will be tabulated to identify any other subgroups of interest where the semi-structured interviews can provide additional perspective on the survey responses. For those programs selected for an interview with a health manager, the interviewers will contact the health manager by email or telephone based on the information provided about future contact at the end of the Head Start Health Manager Survey (see Appendix K for recruitment materials). At the end of the health manager interview, the health manager will be asked to nominate up to two individuals from their program in each of the three positions of interest for the other set of interviews, i.e., teachers, family service workers, and home visitors. The interviewers will then proceed to contact a sample of the nominated individuals using the contact information provided by the health manager, where that sample will also be purposively selected to give us representation across the program features listed above. The final sampling plan and decisions made regarding the sample for inclusion will be provided to OMB.


The qualitative interviews will allow for clarification of ideas and add dimension and context to the quantitative data collected in the online surveys. As such, the number of interviews (40 completed interviews with health managers and 60 completed interviews with teachers, family service workers, or home visitors) will be sufficient for theme saturation and convergence according to standard qualitative analysis protocols.4,5 Data coding will employ an iterative process to develop and refine themes that emerge but were not anticipated a priori.


B.2. Procedures for Collection of Information


In this section, we describe the data collection procedures for the online surveys and the semi-structured interviews. We also discuss relevant estimation procedures. Since this is a one-time data collection, the use of periodic data collection cycles is not applicable.


B.2.1. Data Collection Procedure


The procedures for data collection differ between the two online surveys (Head Start Director Survey and Head Start Health Manager Survey) and the two semi-structured interview protocols (one for health managers and the other for teachers, family service workers, or home visitors).


Online Surveys. For the Head Start Director Survey and Head Start Health Manager Survey, the primary mode of data collection will be a web-based survey administered using RAND’s Multimode Interviewing Capability (MMICTM) survey system. (The survey instruments are provided in Appendixes B and C, respectively, while several illustrative screen shots are shown in Appendix G.) As noted in Section B.1, all Head Start directors at the grantee and delegate level will receive an email with an invitation to participate in the survey (see Appendix K for the recruitment materials). The email will include a clear rationale for the study and explain how the director can contribute to the survey effort. The email will also include a letter of support from the Office of Head Start to further encourage participation (see Appendix L for the proposed letter to be signed by the Director of the Office of Head Start). Contact information will be provided for those who have questions prior to agreeing to participate or for those who may need to conduct the survey in Spanish. Given the length of the director survey (15 minutes) and the need for information about the health manager that will be collected in this survey, if a Head Start director does not respond to the email invitation to participate, or has initiated but not completed the survey within one week of receipt of the invitation, MMICTM will be programmed to send a follow-up email. A second email will be sent two weeks after receipt of the initial invitation. Follow-up by phone will also be used for those who do not respond to the email prompts and will begin three weeks after receipt of the initial invitation, with priority being given to programs serving high-priority populations (see Appendix K for the phone recruitment script). A similar process will be conducted for health managers, though the time frame is extended slightly due to the more comprehensive nature of the survey.
Follow-up emails will be sent two and four weeks after the initial invitation was sent. Follow-up phone calls will also be conducted five weeks after the initial invitation was sent. Again, priority for these calls will be given to programs serving high-priority populations. A final email will be sent between six and seven weeks after the initial invitation was sent.


Respondents using the MMICTM interface will be able to begin the survey, save responses, and return later to the instrument if they are not able to complete the survey in one session. At the conclusion of the survey, MMICTM will allow respondents to download and print their complete set of responses. Those who are not able to complete the survey using one of the available electronic methods (e.g., Internet, personal digital assistant, smart phone, WebTV) will be offered the opportunity to conduct the survey over the telephone with a trained interviewer.


Semi-structured Interviews. A telephone interview will be the mode of data collection for the semi-structured interviews. Three trained interviewers from RAND (one of whom is fluent in Spanish) will conduct the interviews with the sample of Head Start health managers selected for interview and the sample of Head Start teachers, family service workers, and home visitors selected for interview (see the discussion in Section B.1). The two interview protocols are provided in Appendices D and E. As with the online survey, those selected for the semi-structured interviews will receive an email invitation to participate in the study (see recruitment email in Appendix K) that is accompanied by a letter of support from the Director of the Office of Head Start for the study. This letter will be drafted after OMB approval and will be submitted to OMB prior to fielding the study. The email will explain that an interviewer will call them to set up a time for an interview, answer any questions, and determine if the respondent would prefer to conduct the interview in Spanish. Once the interview time has been established, the interviewer will place a return call to conduct the semi-structured interview at the specified time.


The interviewers will participate in a one-day training, led by RAND experts on qualitative data collection, on consent procedures, rights in research, framing of questions for respondent ease and comfort with candid questions, and use of probes. The training will include an opportunity to practice with the interview protocol. All interviews will be audiotaped, which will allow the lead researcher for this task to review a 10 percent sample of the tapes within two days of the interview to ensure that the interviewers are adhering to the protocol and standards for qualitative interviewing.


B.2.2. Estimation Procedure


As discussed more fully in Section A.16, the results of the Head Start Director Survey and Head Start Health Manager Survey will be analyzed using standard quantitative analysis methods. If there is any nonresponse, we will employ appropriate statistical procedures in our analysis to correct for any potential nonresponse bias. Those procedures involve reweighting the observed cases to account for any cases of nonresponse. The nonresponse weights will account for known characteristics of the missing cases based on Head Start program information available in the Program Information Report.
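The cell-weighting idea described above can be illustrated as follows (a hypothetical sketch, not the study's actual procedure; the variable names and the use of a single adjustment-cell variable are assumptions, whereas the study would draw on several PIR characteristics):

```python
from collections import Counter

def nonresponse_weights(frame, responded, cell_key):
    """Weight each respondent by the inverse of the observed response rate
    in its adjustment cell, so respondents stand in for nonrespondents
    who share their (PIR-based) characteristics."""
    eligible = Counter(cell_key(r) for r in frame)
    completes = Counter(cell_key(r) for r in frame if r["id"] in responded)
    weights = {}
    for r in frame:
        if r["id"] in responded:
            cell = cell_key(r)
            # Inverse response-rate weight: eligible / completed in the cell.
            weights[r["id"]] = eligible[cell] / completes[cell]
    return weights
```

By construction, the weighted count of respondents in each cell equals the number of eligible programs in that cell.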


Section A.16 also describes the use of qualitative methods to analyze the results of the semi-structured interviews. As such, the construction of analytic weights is not relevant for that portion of the analysis.


B.3. Methods to Maximize Response Rates and Deal with Nonresponse


The Head Start Health Managers Study expects to obtain a high response rate for each of the data collection components as discussed in Section B.1.1. We will employ a number of strategies to maximize response. First, we will use clear and easy-to-read materials, as well as our relationships in the Office of Head Start and the experts on our Technical Work Group, to enable us to explain the study to respondents targeted for both the online surveys and semi-structured interviews. The invitation to Head Start directors and health managers to participate in the study will begin with an email that provides the motivation for the study and highlights the importance of participation in the study (see Appendix K for recruitment materials). A letter from the Office of Head Start will accompany the invitation to further encourage participation. This letter will be drafted after OMB approval and will be submitted to OMB prior to fielding the study. A similar approach will apply to the recruitment materials for the semi-structured interviews. These plans are reflected in the recruitment materials provided in Appendix K.


Second, as noted in Section A.2, we are seeking approval from relevant Institutional Review Boards of American Indian and Alaska Native tribal groups in order to ensure a high rate of participation among directors, health managers, teachers, family service workers, and home visitors in American Indian and Alaska Native Head Start programs. Third, the ability to use a variety of devices for the online survey using MMICTM will facilitate higher response rates. Fourth, a high response rate to the online surveys and the semi-structured interviews will be further supported by providing the option of conducting the interviews in Spanish.


Despite encouraging participation through clear and attractive materials, we do anticipate some nonresponse to our initial requests to participate in the study. Each director and health manager invited to participate in the online survey will be assigned a unique ID through the Multimode Interviewing Capability (MMICTM) survey system. That ID will be used to track, in real time, who has responded to the survey. Thus, we will monitor whether directors and health managers complete the questionnaire in a timely manner. For those who do not initially respond, we will use several strategies to encourage their participation. For instance, we will send two follow-up emails, followed by a phone call. A final email will be sent after the call; see Section B.2.1 for additional information (see Appendix K for recruitment materials). For those who are not able to complete the survey using one of the available electronic options, we will also offer to conduct the survey over the telephone. Thus, although we will make our best efforts to avoid nonresponse, we will also have procedures in place to convert nonresponse cases and maximize completion rates. In reporting our results, we will calculate nonresponse rates according to the standards promulgated by the American Association for Public Opinion Research. According to this standard, the response rate will be calculated as the ratio of the number of eligible completed cases to the number of eligible cases.


B.4. Tests of Procedures or Methods to be Undertaken


As discussed in Section A.12, pre-tests were conducted with nine potential respondents to the Health Manager Survey and three potential respondents to the semi-structured interviews (one health manager and two family service workers). All respondents were enthusiastic about the planned survey and thought the resulting information would be useful for the Office of Head Start.


Throughout this process, we refined the survey questionnaires and the semi-structured interview protocols. The respondent burden was also estimated from these tests. In addition, RAND’s Multimode Interviewing Capability (MMICTM) system has been successfully employed in numerous other studies conducted by RAND, including the American Life Panel and a large-scale survey in 12 countries with 17 different languages and 5 different scripts.6 The use of MMICTM for internet interviewing reduces respondent burden relative to an in-person interview, while the use of the MMICTM software allows for efficient survey programming and administration.


B.5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data


The data for this study is being collected by the RAND Corporation on behalf of the Administration for Children and Families, Office of Planning, Research and Evaluation. With ACF oversight, RAND is responsible for the study design, data collection, analysis, and report preparation. Key input to the statistical aspects of the design was received from the following individuals:


Lynn Karoly, Project Director;

Laurie Martin, Co-Project Director;

Anita Chandra, Behavioral Scientist; and

Claude Setodji, Statistician.


The ACF Project Officer, Laura Hoard, has overseen the design process.


List of Appendixes Under Separate Cover



Appendix A Improving Head Start for School Readiness Act of 2007 (P.L. 110-134), Section 649 “Research, Demonstration, and Evaluation”


Appendix B Head Start Director Survey Questionnaire


Appendix C Head Start Health Manager Survey Questionnaire


Appendix D Head Start Health Manager Semi-Structured Interview Protocol


Appendix E Head Start Teachers, Family Service Workers, and Home Visitors Semi-Structured Interview Protocol


Appendix F Research Question Matrix


Appendix G Illustrative MMICTM (Multimode Interviewing Capability) Screen Shots


Appendix H Sources for Head Start Health Manager Survey Questionnaire


Appendix I Federal Register Notice, January 11, 2012, Volume 77, Number 7,
page 1694


Appendix J Responses to Federal Register Notice and Response


Appendix K Recruitment Scripts for Online Surveys and Semi-Structured Interviews


Appendix L1 Tribal Chairperson Letter

Appendix L2 Study Summary to Accompany Tribal Letters

Appendix L3 Letter to Tribal EHS/HS Administration

Appendix L4 Tribal Support Letter for AIAN IRB Process


Appendix M Survey Questions by Core and Supplement




1 Department of Health and Human Services. Descriptive Study of Head Start Health Services 1993-1996. Undated. http://www.acf.hhs.gov/programs/opre/hs/descriptive_stdy/index.html

2 Whitaker, et al. (2009). A national survey of obesity prevention practices in Head Start. Archives of Pediatrics & Adolescent Medicine 163(12): 1144-1150. http://www.rwjf.org/healthpolicy/product.jsp?id=52688

3 Kranz, A.M. et al. (2011). Oral health activities of Early Head Start teachers directed toward children and parents. Journal of Public Health Dentistry 71(2): 161-69.

4 Bernard, H. (2000). Social research methods: Qualitative and quantitative approaches. Thousand Oaks, CA: Sage.

5 Strauss, A. and J. Corbin. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage.

6 RAND Corporation, MMICTM: Multimode Interviewing Capability, 2012. http://www.rand.org/labor/roybalfd/mmic.html
