Part B
Statistical Methods
This section of the Supporting Statement addresses the five points outlined in Part B of the OMB guidelines and focuses on statistical methods related to the collection of information for the study. Because the process evaluation component of the study will primarily collect descriptive data from existing documentation, a purposeful selection of key informants, and a small group of participants, it will not require the use of statistical methods. For this reason, the following five sections of Part B of the Supporting Statement address the impact evaluation component of the study only.
B.1 Respondent Universe and Sampling Methods
The impact evaluation for the four demonstration projects includes conducting pre- and post-surveys of the intervention group and a control or comparison group. Table A.2-1 in Part A provides a summary of the research design and data collection methods for each of the four demonstration projects. The respondent universe (study population) and sampling method for each of the four demonstration projects are described below. We also provide the expected overall response rates for each demonstration project. We anticipate that the overall response rates may be lower than 75% for the CNNS, UNV, and NYSDOH projects. This is because of the multiple rounds of data collection (i.e., consent to provide parental contact information, pre-survey, and post-survey) and expected attrition between the pre- and post-surveys (e.g., participants move or change schools). As described in Section B.3, our analyses will use statistical models to correct for participant attrition.
CNNS
The study population is parents/caregivers of first to third grade children attending elementary school in Pontotoc County, OK. Because of logistical and cost constraints, CNNS is unable to provide the intervention to schools outside of Pontotoc County. To provide the most rigorous design possible under this constraint, Bryan County, a neighboring county that is similar to Pontotoc County in terms of percentages of Native American students and students who receive free- and reduced-price meals, was identified as the control county. Using a quasi-experimental research design, schools in Pontotoc County were matched to schools in Bryan County. Parents/caregivers of students in the intervention and control schools will be surveyed pre- and post-intervention to collect information on key outcomes of interest. Table E-1 of Appendix E provides information on the study population for the intervention and control groups. Table E-2 provides the sample design with information on response/attrition rates for each stage of data collection. The overall response rate at post-intervention is anticipated to be 51%. This response rate takes into consideration nonresponse to the request to provide contact information, nonresponse to the pre-survey, and attrition/nonresponse between the pre- and post-surveys.
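To illustrate the matching step, the following is a minimal sketch of nearest-neighbor matching without replacement. The matching variables (school-level shares of Native American students and of students receiving free- and reduced-price meals) and all field names are illustrative assumptions based on the county-level criteria above; the study's actual school-level matching criteria are not specified here.

def match_schools(intervention_schools, candidate_controls):
    # Each school is assumed to be a dict with hypothetical fields:
    # "name", "pct_native", and "pct_frl".
    def distance(a, b):
        return abs(a["pct_native"] - b["pct_native"]) + abs(a["pct_frl"] - b["pct_frl"])

    matches = []
    available = list(candidate_controls)
    for school in intervention_schools:
        # Pick the closest remaining control school (matching without replacement).
        best = min(available, key=lambda c: distance(school, c))
        matches.append((school["name"], best["name"]))
        available.remove(best)
    return matches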
University of Nevada
The study population is parents/caregivers of preschool children (ages 3 to 4) attending 12 Acelero Head Start Centers in Las Vegas, Nevada. For the independent impact evaluation of the All 4 Kids intervention, a quasi-experimental research design will be employed. The 12 centers will be assigned to either intervention or control groups. For this evaluation, a fully randomized design is not appropriate because two of the centers have already been exposed to the intervention and must be assigned to the intervention condition. Among the remaining centers, assignment to condition will be random. Because enrollment criteria are uniform, all Acelero Head Start Centers across Las Vegas are assumed to be similar. This assumption will be evaluated prior to random assignment based on enrollment data that will be available in the fall of 2009. Selection will also take geographic proximity into account to reduce the likelihood of program spillover. Parents/caregivers of students in the intervention and control centers will be surveyed pre- and post-intervention to collect information on key outcomes of interest.
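A minimal sketch of this constrained assignment follows; it assumes two equal arms of six centers (the document does not state the split) and uses hypothetical center identifiers.

import random

def assign_centers(centers, pre_exposed, seed=None):
    # Force the two previously exposed centers into the intervention arm,
    # then randomize the remaining centers to fill out two equal groups.
    rng = random.Random(seed)
    remaining = [c for c in centers if c not in pre_exposed]
    rng.shuffle(remaining)
    n_needed = len(centers) // 2 - len(pre_exposed)
    intervention = list(pre_exposed) + remaining[:n_needed]
    control = remaining[n_needed:]
    return intervention, control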
Table E-3 of Appendix E provides information on the study population. Table E-4 provides the sample design with information on response/attrition rates for each stage of data collection. The overall response rate at post-intervention is anticipated to be 67%.
NYSDOH
The study population is preschool children in 3- and 4-year-old classes, and their parents/caregivers, attending low-income Child and Adult Care Food Program (CACFP) childcare centers throughout New York State, including the New York City (NYC) boroughs. CACFP centers will be randomly assigned to intervention and control conditions, and parents/caregivers of students will be surveyed pre- and post-intervention to collect information on key outcomes of interest. The sampling frame will be based on the list of approximately 156 CACFP centers identified by NYSDOH nutrition education providers for receipt of the EWPHCCS program. The frame will include two strata: one for the NYC boroughs, the other for the remainder of the State of New York. The number of pairs of centers in each stratum will depend upon their proportional distribution in NYC and the remainder of the State. Pairs of centers will then be created in a two-step process. First, ineligible centers will be identified and removed from the pool of all potentially available centers based on a set of exclusion rules.
Second, from among the remaining centers, matches will be based on type of CACFP center (i.e., Head Start centers will be matched to other Head Start centers), geography (within county, when possible), and center size. Within each stratum, pairs will be ordered using random number generation, with the pair assigned the lowest random number allocated to the first position, the pair assigned the second lowest number allocated to the second position, and so on. The first six pairs in each stratum will be selected for inclusion. Within each pair, a second random number assignment will determine which center is to receive the EWPHCCS program.
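The pair-ordering and within-pair assignment procedure can be sketched as follows; pairs are assumed to be tuples of center identifiers already matched on center type, geography, and size.

import random

def select_and_assign_pairs(pairs, n_select=6, seed=None):
    rng = random.Random(seed)
    # Assign each pair a random number; the lowest number takes the first position.
    keyed = [(rng.random(), pair) for pair in pairs]
    keyed.sort(key=lambda t: t[0])
    selected = [pair for _, pair in keyed[:n_select]]
    assignments = []
    for center_a, center_b in selected:
        # A second random draw determines which center receives the program.
        if rng.random() < 0.5:
            assignments.append({"intervention": center_a, "control": center_b})
        else:
            assignments.append({"intervention": center_b, "control": center_a})
    return assignments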
NYSDOH has not yet provided the sampling frame, thus we are unable to provide information on the study population in a tabular format. Table E-5 provides the sample design with information on response/attrition rates for each stage of data collection. The overall response rate at post-intervention is anticipated to be 38%. Our experience suggests that because there is minimal parent engagement with this program, the response rate will be lower compared to the response rates for the other demonstration projects.
PSU
The audience for the About Eating Program is SNAP-eligible women, ages 18 to 45, living in one of the 34 counties not served by SNAP-Ed or one of the 6 counties where service consists only of County Assistance Office activities conducted by the Pennsylvania Nutrition Education Network. PSU's target audience will also be English literate and have access to the Internet. Persons with conditions affecting eating competence (e.g., diabetes) will be excluded from participating in the study. In the 40 targeted counties, persons eligible for SNAP will be recruited by PSU through Pennsylvania Department of Welfare SNAP databases (via email) and postings in County Assistance Offices and other locations.
The independent evaluation will employ the same research design being used by PSU. Participants who express interest in the study and meet the eligibility criteria will be randomly assigned to the intervention group or the comparison group, with stratification for rural/urban residence and participation in EFNEP. Participants in the intervention group will complete the Web-based About Eating Program. Participants in the comparison group will receive a link to the USDA SNAP-Ed Connection Web site; they will receive the link to the About Eating Program only after the study is complete. Consistent with the PSU design, surveys will be conducted before and after the intervention.
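The stratified random assignment can be sketched as follows; the participant fields ("id", "rural_urban", "efnep") are hypothetical placeholders, not PSU's actual data structure.

import random
from collections import defaultdict

def stratified_assign(participants, seed=None):
    rng = random.Random(seed)
    # Group participants into rural/urban x EFNEP strata.
    strata = defaultdict(list)
    for p in participants:
        strata[(p["rural_urban"], p["efnep"])].append(p)
    assignments = {}
    for members in strata.values():
        # Randomly split each stratum evenly between the two groups.
        rng.shuffle(members)
        half = len(members) // 2
        for p in members[:half]:
            assignments[p["id"]] = "intervention"
        for p in members[half:]:
            assignments[p["id"]] = "comparison"
    return assignments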
The study population consists of individuals who express interest in the study and meet the eligibility criteria; its size is currently unknown. Table E-7 provides the sample design. The response rate at post-intervention is anticipated to be 80%.
B.2 Procedures for the Collection of Information
As specified in the OMB guidelines, procedures for the collection of information addressed in this section include the following:
Statistical methodology for stratification and sample selection;
Estimation procedure;
Degree of accuracy needed for the purpose described in the justification;
Unusual problems requiring specialized sampling procedures; and
Any use of periodic (less frequent than annual) data collection cycles to reduce burden.
Appendix E describes the statistical methodology for stratification and sample selection, the estimation and analysis procedures, and the degree of accuracy needed for the purpose described in the justification for each of the four demonstration projects. Appendix F describes the data collection procedures for each of the four demonstration projects and Appendix G provides our assumptions for sample size estimation. Appendix H provides copies of advance letters, post cards, telephone scripts, and other materials used in contacting respondents for the impact evaluation and Appendix I provides similar materials used in contacting respondents for the process evaluation.
Unusual Problems Requiring Specialized Sampling Procedures
No specialized sampling procedures are involved.
Use of Periodic Data Collection Cycles to Reduce Burden
This is a one-time survey data collection effort.
B.3 Methods to Maximize Response Rates and Deal with Non-Response
The following methods will be employed to maximize the response rate for the study:
Meet with center directors or school principals and teachers to establish buy-in for the study and enlist their cooperation (CNNS, UNV, NYSDOH).
Deliver an informational packet to parents/caregivers explaining the purpose of the study, the importance of participation, and the contractor’s pledge of privacy (CNNS, UNV, NYSDOH).
Design an instrument that minimizes respondent burden (short in length, written in easy-to-understand language, and using visual aids to facilitate understanding).
Test the draft instrument using cognitive interviews to ensure that respondents can properly understand the questions and that the response options are robust.
Mail a postcard to remind participants to complete the survey and/or thank them for their participation.
Contact nonrespondents and program dropouts by telephone to complete the survey.
Provide a monetary incentive to participants who complete each survey.
For post-surveys, conduct a second mailing of the survey to nonrespondents and program dropouts.
Provide survey materials in English and Spanish (UNV and NYSDOH). The English-language versions can be reviewed in Appendix A. Translation into Spanish, and validation of the translation, will be provided after OMB approves the English-language versions.
When evaluating experimental and quasi-experimental designs, attrition can bias estimates of program effects, leading researchers to erroneous conclusions. Attrition refers to incomplete participation and occurs when individuals assigned to an experimental condition provide baseline information but, for any reason, fail to provide data at follow-up. To account for and correct potential bias due to attrition, we will compare baseline values of key outcome variables, covariates, and demographic variables between study participants who remain in the study for all surveys and those who do not. Variables associated with attrition will be included as covariates in the main analysis to help remove any potential biases caused by differential attrition rates. Additionally, variables associated with attrition will be included in a propensity analysis that will allow us to assess the relationship between key predictor variables and failure to provide follow-up data. This will help characterize study participants lost to follow-up, regardless of whether differences in attrition between the two experimental groups are observed. The propensity analysis is based on a logistic regression that quantifies the probability of non-completion and provides a propensity weight that can be applied to statistical models to correct for participant attrition.
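The propensity adjustment can be illustrated with a minimal sketch, assuming baseline data in a pandas DataFrame with a hypothetical non-completion indicator; this shows the general approach, not the study's actual analysis code.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def attrition_weights(baseline, covariates, dropped_col="dropped_out"):
    # Logistic regression of non-completion on baseline covariates.
    X = sm.add_constant(baseline[covariates])
    y = baseline[dropped_col]  # 1 = failed to provide follow-up data
    result = sm.Logit(y, X).fit(disp=False)
    p_drop = pd.Series(np.asarray(result.predict(X)), index=baseline.index)
    # Completers are weighted by the inverse of their retention probability,
    # so completers who resemble dropouts count more in the follow-up analysis.
    weights = 1.0 / (1.0 - p_drop)
    return weights[baseline[dropped_col] == 0]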
B.4 Tests of Procedures or Methods to Be Undertaken
One-on-one cognitive interviews were used to evaluate whether the survey instruments were appropriate for the target audience in terms of comprehension and whether improvements to the instruments could be made. Eight interviews were conducted to evaluate the instruments for the NYSDOH, UNV, and CNNS studies, and six interviews were conducted to evaluate the instrument for the PSU study. A “think-aloud” approach (Willis, 2004) was used to address cognitive dimensions and usability issues. Using this approach, the respondent completed the instrument, and the interviewer then asked the respondent specific questions about the survey questions and response options. Based on the cognitive interviews, revisions were made to the instruments to clarify questions and response options and to delete redundant questions. Five additional interviews were conducted in August 2009 to further evaluate the instruments for the NYSDOH, UNV, and CNNS studies, specifically respondents' ability to report their child's consumption of fruits and vegetables in terms of cups (rather than servings). This change in metric was assessed to determine the extent to which adherence to the conventions of the MyPyramid food guidelines (OMB# 0584-0535, expiration date: 7/31/2012) can be maintained without losing precision or burdening respondents.
A Fry Test (Fry, 1968) was conducted to assess the readability of the instruments. This test, a commonly used measure of reading level, examines the average number of syllables and the average sentence length in 100-word samples of text. Generally, the questions themselves were at the 5th grade reading level. The opening narrative (survey instructions) was at or below a 7th grade reading level, most likely because terms like “U.S. Department of Agriculture’s Food & Nutrition Service” add complexity, thereby raising the reading level.
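For illustration, the sketch below computes the two Fry inputs (sentences and syllables per 100-word sample). The syllable counter is a naive vowel-group heuristic, an approximation for illustration only; the published procedure averages three samples and reads the grade level off the Fry graph.

import re

def count_syllables(word):
    # Naive heuristic: count groups of consecutive vowels (minimum one syllable).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fry_inputs(text, sample_size=100):
    # Returns (sentence count, syllable count) for the first 100-word sample.
    words = text.split()[:sample_size]
    sample = " ".join(words)
    sentences = len(re.findall(r"[.!?]+", sample))
    syllables = sum(count_syllables(w) for w in words)
    return sentences, syllables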
B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
The contractors, Altarum Institute and RTI International, will collect the information and analyze the data on behalf of FNS. Altarum is leading the process evaluation, RTI is leading the impact evaluation, and both organizations will conduct the assessment of the IA-led evaluations. Jonathan Blitstein, Ph.D., of RTI (919-541-7313) is the senior methodologist for the study. Sampling and statistical methodologies were reviewed by the National Agricultural Statistics Service (NASS). Please contact the NASS Survey Administration Branch for more information.
References
Berk, M. L., Mathiowetz, N. A., Ward, E. P., & White, A. A. (1987). “The Effect of Pre-paid and Promised Incentives: Results of a Controlled Experiment.” Journal of Official Statistics 3(4):449-457.
Fry, E. (1968). “A Readability Formula That Saves Time.” Journal of Reading 11(7):265-271.
Krueger, R. A., & Casey, M. A. (2009). Focus Groups: A Practical Guide for Applied Research (4th ed.). Thousand Oaks, CA: Sage Publications.
Schewe, C. D., & Cournoyer, N. C. (1976). “Prepaid vs. Promised Monetary Incentives to Questionnaire Response: Further Evidence.” The Public Opinion Quarterly 40(1):105-107.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin Company.
Willis, G. (2004). Cognitive Interviewing: A Tool for Improving Questionnaire Design. Thousand Oaks, CA: Sage Publications.