WIC Nutrition Education Study

OMB: 0584-0599

Supporting Justification for OMB Clearance for the WIC Nutrition Education Study

Part B

OMB Supporting Statement

October 16, 2014



Project Officer: Karen Castellanos-Brown

Contract Number: AG-3198-D-12-0082



Submitted to:

U.S. Department of Agriculture

Food and Nutrition Service

3101 Park Center Drive

Alexandria, VA 22302


Project Officer: Karen Castellanos-Brown

Telephone: (703) 305-2732

Facsimile: (703) 305-2576

Email: [email protected]



Submitted by:

RTI International

3040 Cornwallis Road

Research Triangle Park, NC 27709


Project Director: James Hersey

Associate Project Director: Sheryl Cates


Supporting Justification for OMB Clearance for the WIC Nutrition Education Study

Part B

Draft OMB Supporting Statement

October 16, 2014



CONTENTS

PART B. COLLECTION OF INFORMATION USING STATISTICAL METHODS

1. Respondent Universe and Sampling Methods

2. Procedures for the Collection of Information

3. Methods to Maximize Response Rates and to Deal with Non-Response

4. Test of Procedures or Methods to be Undertaken

5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

REFERENCES

TABLES

Table B1.1. Local Agency Survey: Respondent Universe, Sample, Expected Response Rate, and Estimated Number of Respondents by Stratum

Table B1.2. Site Survey: Respondent Universe, Sample, Expected Cooperation Rate, and Estimated Number of Respondents by Stratum

Table B1.3. Variables to Be Used in Selection of the Six WIC Sites for the Phase II Pilot Study

Table B2.1. Sample Sizes for Different Outcomes and Scenarios for the Phase II Participant Surveys

Table B4.1. Number of Pretests by Instrument

Table B5.1. Individuals Consulted on Data Collection or Analysis

FIGURES

Figure B1.1. Summary of Sampling Approach for the Local Agency and Site Surveys

Figure B1.2. Sampling Approach for Phase II Participant Surveys and Focus Groups

Appendix A.1: Request Information for Drawing Sample and Missing State Plan Information from State Agencies

Appendix A.2: Request Information for Drawing Sample from Local Agencies

Appendix B.1: Local Agency Web Survey—English

Appendix B.2: Local Agency Paper Survey—English

Appendix B.3: Local Agency Web Survey Sample Screenshots

Appendix C.1: Site Web Survey Version 1—English

Appendix C.2: Site Web Survey Version 2—English

Appendix C.3: Site Paper Survey Version 1—English

Appendix C.4: Site Paper Survey Version 2—English

Appendix C.5: Site Paper Survey Version 1—Spanish

Appendix C.6: Site Paper Survey Version 2—Spanish

Appendix C.7: Site Web Survey Sample Screenshots—English

Appendix D: Regional Office Study Announcement Email

Appendix E: State Agency Study Announcement Email

Appendix F: Study Brochure

Appendix G: Email Invitation to State Agencies

Appendix H.1: Phase I Frequently Asked Questions (FAQ)—English

Appendix H.2: Phase I Frequently Asked Questions (FAQ)—Spanish

Appendix I.1: Email Invitation to Local Agencies

Appendix J.1: Email/Mail Invitation to Sites

Appendix J.2: Mail Invitation to Sites—Spanish

Appendix K.1: Local Agency Survey Email/Letter Reminder 1

Appendix K.2: Local Agency Survey Email/Letter Reminder 2

Appendix K.3: Local Agency Survey Email/Letter Reminder 3

Appendix K.4: Local Agency Survey Email/Letter Reminder 4

Appendix L.1: Site Survey Email Reminder 1—English

Appendix L.2: Site Survey Letter Reminder 1—English

Appendix L.3: Site Survey Letter Reminder 1—Spanish

Appendix M: State Agency Email Notification of Nonrespondents

Appendix N: Local Agency Final Reminder Script

Appendix O: Phase I Site Interview Guide

Appendix P: Email Invitation for Site Interviews

Appendix Q: Script for Scheduling Site Interviews

Appendix R: Email Reminder for Site Interviews

Appendix S: Phase II Pilot Email Notification to State Agency

Appendix T: Phase II Pilot Script Invitation to State Agency

Appendix U: Phase II Pilot Email Notification to Local Agency

Appendix V: Phase II Pilot Script Invitation to Local Agency and Site

Appendix W: Phase II Frequently Asked Questions (FAQ)

Appendix X.1: Participant Flyer 3-Month Advance—English

Appendix X.2: Participant Flyer 3-Month Advance—Spanish

Appendix Y.1: Participant Flyer During Enrollment—English

Appendix Y.2: Participant Flyer During Enrollment—Spanish

Appendix Z.1: Participant Survey Fact Sheet—English

Appendix Z.2: Participant Survey Fact Sheet—Spanish

Appendix AA.1: Participant Survey Electronic Screener—English

Appendix AA.2: Participant Survey Electronic Screener—Spanish

Appendix BB.1: Participant Survey Informed Consent—English

Appendix BB.2: Participant Survey Informed Consent—Spanish

Appendix CC.1: Pregnant Women Baseline PAPI—English

Appendix CC.2: Pregnant Women Baseline PAPI—Spanish

Appendix DD.1: Postpartum Women Baseline PAPI—English

Appendix DD.2: Postpartum Women Baseline PAPI—Spanish

Appendix EE.1: Caregiver of Child Baseline PAPI—English

Appendix EE.2: Caregiver of Child Baseline PAPI—Spanish

Appendix FF.1: Pregnant Women Interim PAPI—English

Appendix FF.2: Pregnant Women Interim PAPI—Spanish

Appendix GG.1: Postpartum Women Interim PAPI—English

Appendix GG.2: Postpartum Women Interim PAPI—Spanish

Appendix HH.1: Caregiver of Child Interim PAPI—English

Appendix HH.2: Caregiver of Child Interim PAPI—Spanish

Appendix II.1: Pregnant Women Final PAPI—English

Appendix II.2: Pregnant Women Final PAPI—Spanish

Appendix JJ.1: Postpartum Women Final PAPI—English

Appendix JJ.2: Postpartum Women Final PAPI—Spanish

Appendix KK.1: Caregiver of Child Final PAPI—English

Appendix KK.2: Caregiver of Child Final PAPI—Spanish

Appendix LL.1: Pregnant Women Interim CATI—English

Appendix LL.2: Pregnant Women Interim CATI—Spanish

Appendix MM.1: Postpartum Women Interim CATI—English

Appendix MM.2: Postpartum Women Interim CATI—Spanish

Appendix NN.1: Caregiver of Child Interim CATI—English

Appendix NN.2: Caregiver of Child Interim CATI—Spanish

Appendix OO.1: Pregnant Women Final CATI—English

Appendix OO.2: Pregnant Women Final CATI—Spanish

Appendix PP.1: Postpartum Women Final CATI—English

Appendix PP.2: Postpartum Women Final CATI—Spanish

Appendix QQ.1: Caregiver of Child Final CATI—English

Appendix QQ.2: Caregiver of Child Final CATI—Spanish

Appendix RR.1: Baseline Reminder Postcard—English

Appendix RR.2: Baseline Reminder Postcard—Spanish

Appendix SS.1: Baseline Reminder Script—English

Appendix SS.2: Baseline Reminder Script—Spanish

Appendix TT.1: Baseline Thank You Letter—English

Appendix TT.2: Baseline Thank You Letter—Spanish

Appendix UU.1: Baseline Ineligibility Letter—English

Appendix UU.2: Baseline Ineligibility Letter—Spanish

Appendix VV.1: Interim Advance Letter—English

Appendix VV.2: Interim Advance Letter—Spanish

Appendix WW.1: Interim Cover Letter—English

Appendix WW.2: Interim Cover Letter—Spanish

Appendix XX.1: Interim Reminder Postcard—English

Appendix XX.2: Interim Reminder Postcard—Spanish

Appendix YY.1: Interim Reminder Email—English

Appendix YY.2: Interim Reminder Email—Spanish

Appendix ZZ.1: Interim Cover Letter Remailing—English

Appendix ZZ.2: Interim Cover Letter Remailing—Spanish

Appendix AAA.1: Interim Reminder Script—English

Appendix AAA.2: Interim Reminder Script—Spanish

Appendix BBB.1: Interim Thank You Letter—English

Appendix BBB.2: Interim Thank You Letter—Spanish

Appendix CCC.1: Final Advance Letter—English

Appendix CCC.2: Final Advance Letter—Spanish

Appendix DDD.1: Final Cover Letter—English

Appendix DDD.2: Final Cover Letter—Spanish

Appendix EEE.1: Final Reminder Postcard—English

Appendix EEE.2: Final Reminder Postcard—Spanish

Appendix FFF.1: Final Reminder Email—English

Appendix FFF.2: Final Reminder Email—Spanish

Appendix GGG.1: Final Cover Letter Remailing—English

Appendix GGG.2: Final Cover Letter Remailing—Spanish

Appendix HHH.1: Final Reminder Script—English

Appendix HHH.2: Final Reminder Script—Spanish

Appendix III.1: Final Thank You Letter—English

Appendix III.2: Final Thank You Letter—Spanish

Appendix JJJ.1: Focus Group Moderator Guide—English

Appendix JJJ.2: Focus Group Moderator Guide—Spanish

Appendix KKK.1: Focus Group Flyer Mailing—English

Appendix KKK.2: Focus Group Flyer Mailing—Spanish

Appendix LLL.1: Focus Group Script Incoming Recruitment Calls—English

Appendix LLL.2: Focus Group Script Incoming Recruitment Calls—Spanish

Appendix MMM.1: Focus Group Script Outgoing Recruitment Calls—English

Appendix MMM.2: Focus Group Script Outgoing Recruitment Calls—Spanish

Appendix NNN.1: Focus Group Script Reminder Calls—English

Appendix NNN.2: Focus Group Script Reminder Calls—Spanish

Appendix OOO.1: Focus Group Consent Form—English

Appendix OOO.2: Focus Group Consent Form—Spanish

Appendix PPP.1: Nutrition Educator Web Survey

Appendix PPP.2: Nutrition Educator Paper Survey

Appendix PPP.3: Nutrition Educator Web Survey Sample Screenshots

Appendix QQQ: Nutrition Educator Survey Information Sheet

Appendix RRR: Nutrition Educator Survey Reminder Email/Letter

Appendix SSS: Nutrition Educator Survey Reminder Script

Appendix TTT: Site Staff Interview Guide Baseline

Appendix UUU: Site Staff Interview Guide Interim and Final

Appendix VVV: Site Staff Onsite Visit and Interview Invitation Script Baseline

Appendix WWW: Site Staff Interview Invitation Email Interim and Final

Appendix XXX: Administrative Data Request

Appendix YYY: Observation Forms

Appendix ZZZ: Federal Register Comments

Appendix AAAA: Response to Federal Register Comments

Appendix BBBB: National Agricultural Statistics Service Comments

Appendix CCCC: Privacy and Nondisclosure Agreement

Appendix DDDD: IRB Approval Letters

Appendix EEEE: Pretest Methods and Findings


PART B. COLLECTION OF INFORMATION USING STATISTICAL METHODS

1. Respondent Universe and Sampling Methods

Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.

Phase I National Surveys and Site Interviews

Respondent Universe for the Phase I Local Agency and Site Surveys. The Phase I Local Agency and Site Surveys are designed to provide nationally representative descriptive information about nutrition education provided by all local agencies and the service delivery sites they manage. The respondent universe for the Local Agency Survey is all local agencies that provide WIC services in the 90 State agencies, including the 50 geographic States, 5 U.S. territories, the District of Columbia, and 34 Indian Tribal Organizations (ITOs) (see the first row of Table B1.1). The respondent universe for the Site Survey is all local sites that provide WIC services in the approximately 1,855 local agencies (see the first row of Table B1.2).

Sampling Methods for the Phase I Local Agency and Site Surveys. To collect information from local agencies and sites, we plan to use a probability-based sample design. In the first stage, a stratified probability proportional to size design will be used to select local agencies. For the first stage, the sampling unit and analytic unit are the local agencies, and the respondents will be the local agency directors (or other knowledgeable individual) who will complete the questionnaire. In the second stage, a systematic random sample of sites within each local agency will be selected. For the second stage, the sampling unit and analytic unit are sites among the selected local agencies, and the respondents will be the site supervisors (or other knowledgeable individual) who will complete the questionnaire. Figure B1.1 illustrates the sampling approach for the Local Agency and Site Surveys.

Table B1.1. Local Agency Survey: Respondent Universe, Sample, Expected Response Rate, and Estimated Number of Respondents by Stratum

| | Stratum 1: ITOs and U.S. Territories | Stratum 2: EBT States | Stratum 3: Large Local Agencies (caseloads > 10,000) | Stratum 4: All Other Local Agencies | Total |
|---|---|---|---|---|---|
| Respondent universe (number of local agencies)a | 46 | 222 | 200 | 1,387 | 1,855 |
| Caseload of local agencies in survey population | 76,508 | 1,693,360 | 4,926,149 | 3,299,904 | 9,995,921 |
| Allocated sample | 46 | 222 | 200 | 532 | 1,000 |
| Reserve sample | 0 | 0 | 0 | 100 | 100 |
| Expected response rate | 80% | 80% | 80% | 80% | 80% |
| Estimated number of respondents | 37 | 178 | 160 | 426 | 800 |

a 1,855 reflects the number of local agencies in the FNS WIC Program and Participant Characteristics (PC) 2010 data file. For the actual survey, we will use the 2012 data file and we will verify the list of local agencies with the most current FNS Local Agency Directory.

Table B1.2. Site Survey: Respondent Universe, Sample, Expected Cooperation Rate, and Estimated Number of Respondents by Stratum

| | Stratum 1: ITOs and U.S. Territories | Stratum 2: EBT States | Stratum 3: Large Local Agencies (caseloads > 10,000) | Stratum 4: All Other Local Agencies | Total |
|---|---|---|---|---|---|
| Respondent universe (estimated number of WIC sites eligible for sampling)a,b | 164 | 908 | 2,116 | 1,534 | 4,722 |
| Allocated sample | 94 | 434 | 436 | 1,036 | 2,000 |
| Expected cooperation rate | 80% | 80% | 80% | 80% | 80% |
| Estimated number of respondents | 75 | 347 | 349 | 829 | 1,600 |

a The PC 2010 data file provided the number of sites per local agency for approximately half of the local agencies. For local agencies with missing site-level information, we estimated the number of sites using a simple linear regression model with local agency caseload as the predictor variable.

b All WIC sites within selected local agencies are eligible for sampling; however, only sites from local agencies that respond to the Local Agency Survey will be contacted for the Site Survey. (For estimation purposes, we randomly designated 80 percent of the selected local agencies as “responding” and calculated the number of sites from those local agencies.)


Figure B1.1. Summary of Sampling Approach for the Local Agency and Site Surveys

a Includes Washington, DC.

Notes: ITOs refer to Indian Tribal Organizations.

EBT States refer to states that are using WIC Electronic Benefits Transfer (EBT) statewide.

Sample Design for Local Agencies—Stage 1 Sampling. In the first stage of the sample design, we will create four mutually exclusive strata:

  1. local agencies authorized by ITOs and U.S. territories

  2. local agencies authorized by States using EBT statewide

  3. large local agencies with caseloads greater than 10,000

  4. all other local agencies1

We will use the most current FNS WIC Local Agency Directory to create a list of all local agencies. To identify the local agencies that use EBT, we will use information available on the FNS website to determine which State agencies have implemented EBT statewide. We will use the FNS WIC Participant Characteristics (PC) 2012 data file to determine the caseload for each local agency. Because we do not anticipate any difficulties in developing the sampling frame for the local agencies and sites, the survey population is the same as the target population.

As described in Part B.2, the required number of local agency respondents for the desired precision level is 800. Assuming an 80 percent response rate, we will need to sample 1,000 local agencies. The 80 percent response rate is based on FNS’s previous experience conducting surveys of local agencies in which response rates of 80 percent or better were achieved. We also plan to select a reserve sample of 100 local agencies in case any of the selected local agencies are no longer operational. Any sampled local agency that is no longer operational will be replaced by a reserve local agency, ensuring we have 1,000 local agencies in the sample. Table B1.1 details how we plan to allocate the primary and reserve sample across the four strata.

For strata 1, 2, and 3, we will select a census. Because larger local agencies would receive multiple hits in a probability proportional to caseload sampling algorithm, they are essentially selected with certainty; to simplify this, we created the third stratum, comprising local agencies with caseloads of more than 10,000. In the remaining stratum (stratum 4), we will select the rest of the sample with probability proportional to size, with caseload as the size measure.
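
For illustration only, the stratum 4 selection could follow a textbook systematic PPS routine like the sketch below. This is not the study's production sampling code; the function name and data layout are assumptions. Because agencies with caseloads above 10,000 were moved to a certainty stratum beforehand, no remaining caseload should exceed the sampling interval, so no agency can be hit twice.

```python
import random

def systematic_pps_sample(agencies, n, seed=None):
    """Systematic probability-proportional-to-size (PPS) selection.

    agencies: list of (agency_id, caseload) pairs for stratum 4.
    n: number of agencies to select.
    Assumes no single caseload exceeds the sampling interval (large
    agencies are handled as certainties in stratum 3).
    """
    rng = random.Random(seed)
    total = sum(size for _, size in agencies)
    interval = total / n              # sampling interval in caseload units
    start = rng.uniform(0, interval)  # random start within first interval
    hit_points = [start + k * interval for k in range(n)]

    selected, cumulative, i = [], 0.0, 0
    for agency_id, size in agencies:
        cumulative += size
        # Select the agency whose cumulative caseload spans each hit point.
        while i < len(hit_points) and hit_points[i] < cumulative:
            selected.append(agency_id)
            i += 1
    return selected
```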

Simple Random Sample of Sites—Stage 2 Sampling. To gather site-level information, second-stage sampling will be conducted in which we will randomly select one to three WIC sites within each selected local agency. As described in Part B.2, the required number of WIC site respondents for the desired precision level is 1,600. Assuming an 80 percent cooperation rate, we will need to sample 2,000 sites.

Because no national list of WIC sites is maintained, we will need to compile a list of all sites managed by the 1,000 sampled local agencies, along with their caseloads. To create the sampling frame for the WIC sites, we will use data from the PC 2012 data file and the 2014 State Plans. Where site-level data are not available from these sources, we will contact the affected State agencies and, if needed, local agencies to obtain this information so that the frame represents the population of sites for the selected local agencies. Using the list of all WIC sites and their caseloads managed by the selected 1,000 local agencies, we will determine the overall sampling fraction (2,000 divided by the total number of eligible WIC sites) and allocate the number of sites to be sampled per local agency by multiplying this sampling fraction by the number of eligible WIC sites in a given local agency. For local agencies with one site, that site will be selected. Regardless of the sampling fraction, we will cap the number of sites selected per local agency at three to minimize respondent burden. Based on this allocation, we will randomly select the appropriate number of sites in each local agency. Before selecting the WIC sites, we will sort the data file by site caseload to ensure that a wide range of small and large WIC sites is selected. Table B1.2 details how we anticipate allocating the sample of 2,000 sites across the four strata.
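
A minimal sketch of the allocation arithmetic just described; the names and rounding behavior are assumptions, and in practice the fraction would be tuned so the allocations sum to 2,000.

```python
def allocate_site_sample(sites_per_agency, target=2000):
    """Allocate the number of sites to sample in each local agency.

    sites_per_agency: dict mapping agency_id -> number of eligible WIC sites.
    Applies the overall sampling fraction, then enforces a minimum of one
    and a cap of three sites per agency to limit respondent burden.
    """
    total_sites = sum(sites_per_agency.values())
    fraction = target / total_sites
    allocation = {}
    for agency_id, n_sites in sites_per_agency.items():
        n = round(fraction * n_sites)
        # At least 1 site, at most 3, and never more than the agency has.
        allocation[agency_id] = max(1, min(3, n, n_sites))
    return allocation
```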

Sampling Method for Selection of Sites for the Phase I Site Interviews. To gather additional descriptive information on the delivery of nutrition education, we will conduct in-depth telephone interviews with a subset of respondents to the Site Survey. Assuming an 80 percent response rate, we will select a sample of 100 sites, yielding 80 completed site interviews. The selection of sites will be designed both to characterize how nutrition education is delivered and to gain a better understanding of the diversity of WIC nutrition education approaches and techniques; it will seek diversity in geography, caseload size, and delivery mode.

Response Rates and Nonresponse Bias Analysis for Phase I. For the Local Agency Survey response rate, the numerator is the number of respondents and the denominator is the number of eligible local agencies (excluding any local agencies that have ceased operation since the sample was selected). For the Site Survey cooperation rate, the numerator is the number of respondents and the denominator is the number of eligible sites (excluding any sites that have ceased operation since the sample was selected). The Site Survey response rate equals the Local Agency Survey response rate multiplied by the Site Survey cooperation rate.
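
These definitions reduce to simple ratios; here is a small worked example with hypothetical counts (the 80 percent figures echo the expected rates in Tables B1.1 and B1.2):

```python
# Hypothetical counts for illustration only.
la_respondents, la_eligible = 800, 1000       # Local Agency Survey
site_respondents, site_eligible = 1600, 2000  # Site Survey

la_response_rate = la_respondents / la_eligible            # 0.80
site_cooperation_rate = site_respondents / site_eligible   # 0.80

# The overall Site Survey response rate is the product of the two stages.
site_response_rate = la_response_rate * site_cooperation_rate  # 0.64
```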

To the extent that respondents are systematically different from the population as a whole with respect to characteristics used in an analysis, the potential for nonresponse bias exists. If the response rate for the surveys is less than 80 percent, we will conduct a nonresponse bias analysis to better understand any potential bias in the analysis introduced through nonresponse. To assess the magnitude of nonresponse bias for the Phase I Local Agency and Site Surveys, we will use the sample design weights (prior to eligibility and nonresponse adjustments) to determine if the caseload size and caseload population characteristics are statistically similar or different between respondents and nonrespondents for characteristics that are available for both groups. We will conduct this analysis separately for the Local Agency and Site Surveys. The analysis will use the PC 2012 file with local agency characteristics to provide information on nonresponding local agencies. We will use the nonresponse bias analysis to provide guidance on how the estimates may or may not be biased.

Phase II Pilot Study

Sampling Method for Selection of Sites for Phase II Pilot Study. The selection of sites for the Phase II pilot study will be designed to capitalize on the variability of WIC nutrition education to enable using a dose-response design. At the same time, we plan to employ a sampling procedure that would be scalable to a national study that yields nationally representative estimates. To do this, we will analyze the multiple sources of data gathered in the Local Agency and Site Surveys to characterize WIC sites in terms of the likely dosage of WIC nutrition education.

To ensure a diversity of sites in terms of geography, size, population characteristics, and mode of nutrition education, we plan to select the six pilot sites from the pool of 80 sites included in the site interviews. That pool will itself be selected to provide diversity in terms of the four sampling strata used in the Phase I surveys, the size of the WIC clinic, the use of group and technology-based nutrition education, and other factors. Hence, selecting the pilot sites from within this pool assures reasonable diversity of WIC sites.

We will classify the 80 sites from the site interviews into three tiers based on a multiple-component index of dosage of nutrition education, using information from the Local Agency and Site Surveys. This index will comprise measures of the frequency, duration, mode, use of learner-centered nutrition education, and use of reinforcers. For each of these components, we will create an index that allows ranking of the sites (see Table B1.3) and prepare a matrix that shows the ranking of each of the 80 sites on these various components.

We will also rank sites in terms of factors that are expected to enable the dosage of nutrition education: the ratio of WIC nutrition educators to WIC clients (i.e., staff-to-client ratio) and the extent of training of WIC nutrition educators in Value Enhanced Nutrition Assessment (VENA) learner-centered education approaches. Although staff-to-client ratio and staff training in learner-centered education are not measures of dosage of nutrition education, they are expected to enable more intense nutrition education and so may be useful in identifying pilot sites with variability in the dosage of nutrition education (Gerstein et al., 2010; Contento et al., 1995).

Table B1.3. Variables to Be Used in Selection of the Six WIC Sites for the Phase II Pilot Study

| Component | Variable in Index | Tier 1 | Tier 2 | Tier 3 |
|---|---|---|---|---|
| Indicators of Dose: | | | | |
| Frequency of nutrition education | Local Agency Survey: Frequency planned | | | |
| | Site Survey: Frequency provided [mean for site]a | | | |
| Duration of nutrition education | Local Agency Survey: Mean length of nutrition education planned by type of visit | | | |
| | Local Agency Survey: Education planned by type of visit | | | |
| | Site Survey: Mean length of nutrition educationa | | | |
| Modes of nutrition education | Local Agency Survey: Percentage of participants receiving different modes of nutrition educationa | | NA | |
| | Site Survey: Ranking of modes | | | |
| Learner-centered education approach | Site Survey: Approach to one-on-one nutrition education | | | |
| | Site Survey: Approach to group nutrition education and interactive resources for group educationa | | | |
| Use of reinforcers | Local Agency Survey: Reinforcers and follow-up methods | | | |
| | Site Survey: Reinforcers and follow-up methodsa | | | |
| Enabling Factors: | | | | |
| Staff-to-client ratio | Site Survey: Full-time equivalents (FTEs) by position | | | |
| | Local Agency Survey: Number of WIC participants by site | | | |
| Training in learner-centered education and Value Enhanced Nutrition Assessment (VENA) | Local Agency Survey: Hours of staff training in learner-centered education; Site Survey: Training in learner-centered education | | | |
| Summary: | | | | |
| Count of components ranked in top tier | | | | |
| Count of components ranked in bottom tier | | | | |
| Total | | | | |

a Primary indicator to be used in selection within the component category.

NA = not applicable

Note: The tier columns will be completed with each site's rankings once the Phase I survey data are available.

We will place in the top tier those sites that were highly ranked on both the indicators of dose and the enabling factors, and in the bottom tier those sites that received low ranks on both. We expect that sites with moderate or inconsistent ranks will be placed in the middle tier. We will then select six sites from the three tiers as potential pilot sites (two sites per tier). In selecting the six sites, we will also take into consideration the need for diversity in terms of geographic location and size and the requirement to select sites that have not been involved in other recent data collection efforts. To ensure the inclusion of six sites in the pilot study, at the time of selection we will also select reserve sites in each tier to serve as replacements if an initial selection is not eligible or is unwilling to participate, for a total of 12 candidate sites.
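
To make the tiering rule concrete, here is a minimal sketch in Python. The 75 percent thresholds and the function's shape are our own simplifying assumptions, not the study's specified rule; the actual placement will combine the component rankings in Table B1.3 with judgment about diversity and prior data collection burden.

```python
def assign_tier(component_ranks):
    """Classify a site into a dosage tier from its per-component tiers.

    component_ranks: list of per-component tiers (1 = top, 2 = middle,
    3 = bottom) covering both dose indicators and enabling factors.
    """
    top = sum(1 for r in component_ranks if r == 1)
    bottom = sum(1 for r in component_ranks if r == 3)
    # Consistently high ranks -> top tier; consistently low -> bottom;
    # moderate or inconsistent ranks -> middle tier.
    if top >= 0.75 * len(component_ranks) and bottom == 0:
        return "top"
    if bottom >= 0.75 * len(component_ranks) and top == 0:
        return "bottom"
    return "middle"
```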

Respondent Universe and Sampling Method for the Phase II Participant Surveys and Focus Groups. The basic design for the Phase II pilot study is a longitudinal dose-response comparison, in which we will collect data from WIC participants at three time points (baseline, interim, final) over a 12-month period and examine changes in the outcome of interest in relation to exposure to WIC nutrition education over time.

For the pilot study, the respondent universe is the number of WIC recipients at the six WIC sites. Because the sites will not be selected until after Phase I is complete, we do not know the actual size of the respondent universe. The research team will work with the selected sites to enroll eligible participants into the study. To be eligible for participation, individuals must speak English or Spanish, be 18 or older, and be a pregnant or postpartum woman receiving WIC benefits or the parent or caregiver of a child up to age 4 who is receiving WIC benefits. For data collection and analysis purposes, we have segmented WIC participants into three subpopulations: (1) pregnant women, (2) postpartum women, and (3) mothers or other caregivers of an eligible child aged 6 months to 4 years (reporting primarily on the child's behaviors).

Figure B1.2 illustrates the sampling approach for the Phase II Participant Surveys and focus groups. Across the six sites, we anticipate that 1,100 WIC participants will be approached for screening and 900 (82 percent) will agree to be screened and will be eligible for the study. Of these, 800 will agree to enroll in the study and complete the baseline survey (89 percent), 640 of the 800 (80 percent) will complete the interim survey, and 600 of the 800 (75 percent) will complete the final survey. A subset of WIC participants (96) will take part in focus group discussions.
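
The expected recruitment funnel is simple arithmetic, which the snippet below verifies:

```python
# Expected recruitment funnel across the six pilot sites.
approached = 1100
screened_eligible = 900   # 900 / 1100 ≈ 82%
enrolled_baseline = 800   # 800 / 900  ≈ 89%
interim_complete = 640    # 640 / 800  = 80%
final_complete = 600      # 600 / 800  = 75%

for label, n, denom in [
    ("screened and eligible", screened_eligible, approached),
    ("enrolled at baseline", enrolled_baseline, screened_eligible),
    ("completed interim survey", interim_complete, enrolled_baseline),
    ("completed final survey", final_complete, enrolled_baseline),
]:
    print(f"{label}: {n} ({n / denom:.0%})")
```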

Figure B1.2. Sampling Approach for Phase II Participant Surveys and Focus Groups



Respondent Universe for Other Phase II Data Collection Activities. The population of WIC site administrative staff who could complete the staff interviews at baseline, interim, and follow-up is six people, and we assume that all six will respond. The population of WIC site staff who provide nutrition education and could complete the Phase II Nutrition Educator Survey is 38 (approximately 6 people per site), and we assume that 80 percent will respond, yielding 30 completed surveys. The population of clerical local site staff who could receive the request to provide administrative data is 12 people, and we assume all 12 will respond. We are selecting a census of affected staff, so sampling is not required.

Response Rates and Attrition Analysis for Phase II Participant Surveys. The response rate for the baseline survey will be calculated with the number of eligible people approached as the denominator and the number of completed baseline surveys as the numerator. Survey cooperation rates for the interim and final surveys will be calculated with the number enrolled into the study as the denominator and the number of completed surveys as the numerator. The response and cooperation rates for the pilot study will provide information to inform the required sample size for a national evaluation study.

As part of conducting the impact analysis, we will investigate the potential impact of attrition on generalizability by comparing the pre-intervention (baseline) characteristics of study participants who provide post-intervention data (at the interim and final time points) with those of participants who do not. This is accomplished by fitting logistic regression models that relate baseline variables of interest to an indicator distinguishing those who completed the follow-up surveys at each stage from those who did not. This analysis provides odds ratios comparing the two groups on each variable, highlighting any association between a variable of interest and the likelihood of providing data at the post-intervention surveys. If significant differences are found, a dummy indicator can be included in the impact models to account for any bias that may be associated with study attrition. The attrition rate for the pilot study and the degree to which attrition affects generalizability will be important factors to consider when designing a national evaluation study.
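
A sketch of one common formulation of this attrition check, modeling the follow-up completion indicator as the outcome (the column names and the use of statsmodels are assumptions for illustration; for a binary characteristic the odds ratio is the same whichever variable is treated as the outcome):

```python
import numpy as np
import statsmodels.api as sm

def attrition_odds_ratios(df, baseline_vars, completed_col="completed_final"):
    """Logistic regression of follow-up completion on baseline
    characteristics; returns odds ratios and p-values.  An odds ratio
    far from 1 flags a characteristic associated with attrition."""
    X = sm.add_constant(df[baseline_vars])
    fit = sm.Logit(df[completed_col], X).fit(disp=False)
    return np.exp(fit.params), fit.pvalues
```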

2. Procedures for the Collection of Information

Describe the procedures for the collection of information including:

  • Statistical methodology for stratification and sample selection,

  • Estimation procedure,

  • Degree of accuracy needed for the purpose described in the justification,

  • Unusual problems requiring specialized sampling procedures, and

  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.

Phase I National Surveys and Site Interviews

Statistical Methodology for Stratification and Sample Selection for Phase I Local Agency and Site Surveys. The survey design for the Phase I surveys is a two-stage stratified random design. In the first stage we group the local agencies into the four strata described in Section B.1. The sample allocation for the first stage is listed in Table B1.1. For strata 1, 2, and 3 we are selecting a census, and for stratum 4 we will select the local agencies probability proportional to size with caseload being the size measure. In the second stage, we plan to randomly select one to three sites within each selected local agency. The anticipated sample allocation for the second stage for selection of sites is shown in Table B1.2.

Following sample selection for the Local Agency Survey, we will contact the affected State agencies by email to request the name and contact information for the individual(s) they recommend we contact regarding the Local Agency Survey for the sampled local agency(ies) in their State (Appendix G). If available from FNS or other sources, we will provide contact information and request that they verify or update it. Additionally, we will ask State agencies to indicate if any of the local agencies selected are no longer operational. If not available from other sources, we will also request information on the listing of sites for the selected local agencies.

To commence data collection, we will send a recruiting email to the contacts at the 1,000 selected local agencies with a copy of the email provided to the associated State agencies (Appendix I). The email will explain that they were designated by their State agency as the contact for the survey and include instructions for completing the Local Agency Survey and the survey web link. Correspondence and a paper copy of the survey will be sent by postal mail if the State agency indicates that the target respondent for the Local Agency Survey does not have Internet access.

A similar approach will be used for recruiting for the Site Survey. At the end of the Local Agency Survey, the respondent will be asked to designate an appropriate respondent for each of the sites selected for the Site Survey and to provide their contact information. The Local Agency Survey will describe the topics of the Site Survey and suggest the job titles of potential site-level respondents to assist with identifying the appropriate respondent. An email will be sent to the designated respondents to provide the Site Survey link and instructions (Appendix J). Correspondence will be sent by postal mail if the local agency indicates that the target respondent for the Site Survey does not have Internet access. The Site Survey and accompanying materials will be translated and available in Spanish for sites in Puerto Rico.

Estimation Procedures. We plan to use standard design-based methods for estimation and variance estimation that will lead to confidence intervals on means and percentages. We will create two sets of survey weights: (1) for analyses conducted at the local agency level and (2) for analyses conducted at the WIC site level. The final analysis weights for the local agencies and the WIC sites will reflect the sample design and any nonresponse, which will allow for nationally representative estimates as well as subgroup-level estimates representative of the subgroups of interest. We will use these weights to conduct all statistical analysis.

  • Local Agency Weights. The first step is to create weights based on the sample design. The design-based weights are the inverse of the probability of selection. For the local agencies that are selected with certainty, the design-based weights are equal to one. Upon completion of data collection, the second set of weights, adjusting for nonresponse, will be created. If the number of local agencies has changed since creating the design weights, we will post-stratify the nonresponse-adjusted weights with the updated total counts, creating the final analysis weights.

  • WIC Site Weights. We will create the site weights in two steps. First, we will calculate a weight that sums to the total number of sites within each selected local agency. This weight is equal to the number of sites in the local agency divided by the number of selected WIC sites. For example, if there are 10 WIC sites in a given local agency and we have selected 2 WIC sites, then this first component is equal to 5. Next, we will multiply by the local agency weight to obtain a site-level weight that represents the population of WIC sites. In this example, if the WIC local agency weight = 1.5, then the site-level weight = 1.5 * 5 = 7.5. Similar to the local agency weights, after data collection is complete, we will adjust the WIC site weights for nonresponse. We will poststratify the weights if an updated count of the number of WIC sites per local agency is available.
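
The two-step calculation in the bullets above can be written directly; the worked example from the text serves as a check:

```python
def site_analysis_weight(n_sites_in_agency, n_sites_selected, agency_weight):
    """Site-level design weight: within-agency inverse selection
    probability multiplied by the local agency weight (before any
    nonresponse adjustment or poststratification)."""
    within_agency_weight = n_sites_in_agency / n_sites_selected
    return agency_weight * within_agency_weight

# Example from the text: 10 sites, 2 selected, agency weight 1.5 -> 7.5.
assert site_analysis_weight(10, 2, 1.5) == 7.5
```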

The nonresponse bias analysis (described in Part B.1) will also inform us of any adjustments we may need to make in the local agency and WIC site weights.

When estimating standard errors in the context of a complex survey design, it is important to take the design of the study into consideration. Design complexities such as stratification, clustering, and weighting generally tend to increase the sampling variance. Not accounting for these factors can result in overestimation of sampling precision, leading to incorrect significance tests (i.e., falsely declaring significance). The sample is stratified at the first stage (local agencies), and at the second stage the WIC sites are clustered within local agencies. The clustering of the sample by local agency is necessary because of resource constraints, but it comes at a cost of increased sampling variance and a reduced effective sample size. Clustering and other design features affect the standard errors and must be accounted for in statistical analyses. We plan to use SUDAAN (http://www.rti.org/sudaan/) and the nest statement within each SUDAAN procedure to specify the stratification, clustering, and primary sampling units for the survey design, thus ensuring that standard errors are correctly calculated and incorporated into hypothesis testing. Weights will also be calculated and incorporated into the derivation of point estimates, standard errors, and statistical tests.
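
The study's analysis will use SUDAAN for this. Purely to illustrate why strata and clusters must be declared, the sketch below hand-computes a Taylor-linearization standard error for a weighted mean under a stratified, clustered design (column names are assumptions; this is not the study's analysis code):

```python
import numpy as np
import pandas as pd

def weighted_mean_se(df, y="y", w="w", stratum="stratum", psu="psu"):
    """Weighted mean and Taylor-linearization SE, treating PSUs as
    sampled with replacement within strata."""
    wsum = df[w].sum()
    mean = (df[w] * df[y]).sum() / wsum

    # Linearized scores, totaled within each (stratum, PSU) cell.
    scores = (df[w] * (df[y] - mean) / wsum).groupby(
        [df[stratum], df[psu]]).sum()

    var = 0.0
    for _, z_h in scores.groupby(level=0):  # loop over strata
        n_h = len(z_h)                      # number of PSUs in stratum h
        if n_h > 1:
            var += n_h / (n_h - 1) * ((z_h - z_h.mean()) ** 2).sum()
    return mean, np.sqrt(var)
```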

Degree of Accuracy Needed for the Purpose Described in the Justification. At the national level, assuming a design effect of 1.2 and an 80 percent response rate, the sample size of 1,000 for the Local Agency Survey will yield minimum detectable differences of ±0.03 (3 percent) at the 95 percent confidence level (with 0.80 power). For the subgroup analysis (i.e., analysis by the four strata), under the same assumptions, the strata sample sizes shown in Table B1.1 will yield minimum detectable differences of ±0.05 (5 percent) at the 95 percent confidence level (with 0.80 power).

The Site Survey includes two versions (Appendix C). Approximately half of the respondents will be randomly assigned to complete Version 1, and the remaining half will complete Version 2. Each version includes the same set of base questions plus a unique set of questions (i.e., a module) to minimize respondent burden. At the national level, assuming a design effect of 1.8 and an 80 percent response rate, the sample size of 2,000 for the base questions will yield minimum detectable differences of ±0.03 (3 percent) at the 95 percent confidence level (with 0.80 power). For the modules, under the same assumptions, the sample size of 1,000 for the questions in each module will yield minimum detectable differences of ±0.05 (5 percent) at the 95 percent confidence level (with 0.80 power).
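
As a rough illustration of how precision targets of this type are computed, the sketch below deflates the sampled n for nonresponse and the design effect before applying a generic two-sided formula. The baseline proportion and other assumptions are ours (p = 0.5 is the conservative default), so this will not reproduce the quoted ±0.03 and ±0.05 figures exactly.

```python
from scipy.stats import norm

def min_detectable_difference(n_sampled, response_rate=0.80, deff=1.2,
                              p=0.5, alpha=0.05, power=0.80):
    """Approximate minimum detectable difference for an estimated
    proportion.  Generic formula for illustration only."""
    n_eff = n_sampled * response_rate / deff   # effective sample size
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * (p * (1 - p) / n_eff) ** 0.5

print(min_detectable_difference(1000))            # Local Agency Survey
print(min_detectable_difference(2000, deff=1.8))  # Site Survey base items
```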

Phase II Pilot Study

Statistical Methodology for Stratification and Sample Selection for Phase II Participant Surveys. The basic design for the Phase II pilot study is a longitudinal dose-response comparison, in which we will collect data from WIC participants at three time points (baseline, interim, final) over a 12-month period and examine changes in the outcome of interest in relation to exposure to WIC nutrition education over time. As illustrated in Figure B1.2, we anticipate that 1,100 WIC participants will be approached for screening and that 900 (82 percent) will agree to be screened and will be eligible for the study. Of these, 800 (89 percent) will agree to enroll in the study and complete the baseline survey.

Research team members (two people per clinic) will be stationed in the waiting room of the six sites during a 4- to 6-week enrollment period. The research staff will approach each individual as they enter the clinic. If two people enter at the same time, the research staff will randomly select a person to approach. The research staff will briefly describe the study, and, if the individual is interested, the research staff will administer the electronic screening questionnaire (Appendix AA). If eligible, the research staff will obtain informed consent (Appendix BB) and then ask the respondent to complete the self-administered baseline survey. We will monitor enrollment so that the distribution of enrolled participants across the three subpopulations (pregnant women, postpartum women, mother/caregiver of eligible child) is similar to the distribution for the site. The survey questionnaires and other materials will be translated into Spanish.

Estimation Procedures. For the Phase II Participant Surveys, we will use SAS to estimate the standard errors, and our estimation procedures will take into account that participants are nested within WIC sites. When data are nested, responses within the same cluster tend to be correlated. If the correlated nature of the data is ignored in the model specification, it may lead to inflated type I error rates. A series of hierarchical, or mixed-effects, regression models will be used to account for correlated responses by allowing for the inclusion of multiple sources of random variation.
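
A sketch of the kind of mixed-effects specification this implies, with a random intercept for WIC site. The formula, column names, and the use of Python's statsmodels rather than SAS are all assumptions for illustration:

```python
import statsmodels.formula.api as smf

def fit_dose_response(df):
    """Mixed-effects model: outcome as a function of nutrition education
    dose and survey wave, with a random intercept for each WIC site to
    absorb the correlation among participants within a site."""
    model = smf.mixedlm(
        "outcome ~ dose * wave",   # dose-by-time interaction
        data=df,
        groups=df["site_id"],      # participants nested in sites
    )
    return model.fit()
```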

Degree of Accuracy Needed for the Purpose Described in the Justification. The pilot study will assess the operational feasibility of a dose-response design approach. Thus, the proposed sample size is designed to be large enough that a meaningful change in nutrition-related behavior can be detected as statistically significant, giving the pilot the ability to empirically inform the optimal design for the national evaluation.

Table B2.1 provides a range of sample size estimates for the Phase II Participant Surveys, based on different outcome measures and their smallest detectable differences. These sample sizes reflect the comparison of changes between baseline and 12-month follow-up for participants who received different levels of exposure to WIC nutrition education. For purposes of illustration, we consider two possible outcomes: (1) self-efficacy of serving children more fruits and vegetables and (2) drinking low-fat or nonfat milk.2 For each outcome, we developed two scenarios: (1) an equal number of participants in each dosage group, which maximizes the statistical power available for a given total sample size, and (2) twice as many participants in the higher dose group as in the lower dose group. In calculating these power estimates, we assumed an intraclass correlation coefficient (ICC) of 0.1 to reflect the clustering of participants within individual WIC sites. The effects of clustering would be considerably smaller with the larger number of sites in a national evaluation.

To be conservative, we plan to conduct the pilot study with 800 respondents at baseline (400 in the lower dosage group and 400 in the higher dosage group to maximize the power of comparisons between the two groups). Assuming a 75 percent retention rate, this would yield 600 respondents for the final survey or 100 respondents per site. We have assumed a 75 percent retention rate from baseline to the final survey based on the contractor’s experience conducting longitudinal surveys with low-income populations using multiple modes to collect survey responses.
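
Table B2.1 below was produced with PASS; the cluster adjustment it embodies can be sketched with the textbook design effect 1 + (m − 1) × ICC. The numbers in this sketch are generic and do not reproduce the table:

```python
import math

def cluster_adjusted_n(n_srs, avg_cluster_size, icc=0.1):
    """Inflate a simple-random-sampling sample size requirement for
    clustering of participants within WIC sites."""
    deff = 1 + (avg_cluster_size - 1) * icc
    return math.ceil(n_srs * deff)

# E.g., a requirement of 100 per group with 8 participants per site
# cluster becomes 1 + 7 * 0.1 = 1.7 times larger, i.e., 170.
print(cluster_adjusted_n(100, 8))
```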

Table B2.1. Sample Sizes for Different Outcomes and Scenarios for the Phase II Participant Surveys

| Characteristic | Higher Dose, Without Clustering | Higher Dose, With Clustering | Lower Dose, Without Clustering | Lower Dose, With Clustering | Total, Without Clustering | Total, With Clustering |
|---|---|---|---|---|---|---|
| [1] Outcome: Self-efficacy of serving children more fruits and vegetables | | | | | | |
| Baseline | 50.0% | | 50.0% | | | |
| 12-month follow-up | 65.0% | | 50.0% | | | |
| Sample size at baseline (assumes 75% retention): | | | | | | |
| Equal allocation | 60 | 99 | 60 | 99 | 120 | 198 |
| Unequal allocation (2:1) | 83 | 160 | 42 | 59 | 125 | 219 |
| [2] Outcome: Child drinks low-fat or nonfat milk | | | | | | |
| Baseline | 36.4% | | 36.3% | | | |
| 12-month follow-up | 41.0% | | 33.2% | | | |
| Sample size at baseline (assumes 75% retention): | | | | | | |
| Equal allocation | 119 | 283 | 119 | 283 | 238 | 566 |
| Unequal allocation (2:1) | 176 | 546 | 88 | 176 | 264 | 722 |

Notes: This table shows the sample size needed to achieve statistical significance in the difference of baseline and post-intervention estimates between two groups, where p < 0.05 (two-tailed) and power = 0.80. The table provides the required sample size at baseline, assuming a 75% retention rate after 12 months and an ICC of 0.1 associated with clustering within WIC clinics. Estimates were calculated using Power Analysis and Sample Size (PASS) software (NCSS, Kaysville, Utah).

3. Methods to Maximize Response Rates and to Deal with Non-Response

Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.

Phase I National Surveys and Site Interviews

Early and ongoing communication about the Local Agency and Site Surveys with State and local agencies will be critical to promote awareness of the survey; gain buy-in for it; and, ultimately, achieve the desired response rate. FNS Regional Offices will receive advance copies of all communications with State and local agencies to enable them to assist with questions and to support the study activities. The recruitment of the local agencies and sites for the Phase I Surveys will focus on explaining the importance and usefulness of the study data. Specific procedures to maximize response rates are summarized below.

  • Email a letter on FNS letterhead and a tri-fold brochure to inform all State agency directors about the upcoming study, including its objectives, approach, the importance of participation, and what will be asked of State and local agencies (Appendices E and F).

  • Following sample selection for the Local Agency Survey, email a notification letter to the associated State agencies and a “frequently asked questions” (FAQ) document (Appendices G and H). The letter to the State agency directors will include a list of the selected local agencies in their State and request their assistance in encouraging these local agencies to complete the survey.

  • Email a persuasive invitation letter to local agencies and sites selected for the surveys that describes the importance of the study, requests participation in the study, and provides a link to the brochure and FAQ document (Appendix F and Appendices H through J).

  • During data collection, maintain a help desk of trained support staff to provide assistance via email or phone requests on weekdays to assist local agencies and sites with technical issues associated with accessing the survey or submitting responses.

  • Follow up with nonresponsive local agencies and sites using email and phone reminders.

  • Send the first reminder/thank-you approximately 1 week after the survey is launched to all sampled local agencies, thanking those that have responded and reminding the rest (Appendix K.1).

  • Send a second email reminder to local agencies that have not responded approximately 3 weeks after the survey is launched (Appendix K.2).

  • Send a third email reminder 5 weeks after the launch (3 weeks before the announced survey end date) to local agencies that have not responded (Appendix K.3) and to all sites that have received the Site Survey recruitment email (Appendix L).

  • Two weeks before the scheduled survey end date, send another email reminder to the local agencies that have not submitted any surveys for the Local Agency or Site Surveys, as well as to any local agencies that completed the Local Agency Survey but for which one or more selected sites have not responded (Appendix K.4). Also, email the affected State agencies a list of their local agencies that have not completed any surveys (Local Agency or Site) and request their assistance in encouraging responses from these local agencies (Appendix M).

  • If necessary to achieve the response rate, contact up to 100 nonresponding local agencies by phone to remind them of the requirement to participate in the survey and address any concerns (Appendix N). We will focus these calls on local agencies that have not responded, local agencies that have multiple nonresponsive sites, and on local agencies and sites in strata with lower response rates.

Phase II Pilot Study, Participant Surveys

Our procedures for ensuring high response rates among WIC participants for the longitudinal Phase II Participant Surveys are summarized below. The response to the Phase II pilot will provide useful information for developing the data collection procedures for a national evaluation.

  • Launch an intensive recruitment effort that involves interacting with and enrolling participants in person starting 3 months prior to data collection (Appendices X through Z).

  • Implement standardized training for all data collectors that focuses on basic skills of interviewing, the study background and questionnaires, how to gain participant cooperation, and appropriate contact procedures. Data collectors must complete a certification process to work on the study.

  • Provide a toll-free number and email address for respondents to contact to verify the study’s legitimacy or to ask questions.

  • For the baseline survey, send a reminder postcard and make telephone reminder calls to nonrespondents (Appendices RR and SS).

  • For the interim and final surveys, send an advance letter reminding participants of the upcoming survey 1 week before the mailing of the interim and final surveys (Appendices VV and CCC). Then, send via first-class mail a packet to the study participants that includes a cover letter, the questionnaire, and a postage-paid return envelope (Appendices WW and DDD). Four days after the mailing of the survey packet, mail all study participants a postcard that thanks respondents who have already completed the survey and reminds those who have not to do so within a specified time period (Appendices XX and EEE). Two weeks after the mailing of the initial survey packet, mail a second packet to participants who have not completed and returned the questionnaire (Appendices ZZ and GGG). For participants who indicated that we could contact them by email, send an email 2 days after the second mailing, alerting them that a second copy of the survey is on its way in case they misplaced the first; the email should arrive shortly before the second packet does (Appendices YY and FFF). Two weeks after the second survey packet mailing, call nonrespondents and attempt to complete the survey by phone using CATI, making a minimum of 10 call attempts to each working phone number (Appendices AAA and HHH).

  • Use telephone call scheduling procedures designed to call numbers at different times of the day (between 8 am and 9 pm in respondent’s time zone) and week (Sunday through Saturday) to improve the chances of finding respondents at home.

  • Leave a generic voice mail message asking the participant to call back to complete the telephone interview.

  • Implement refusal conversion efforts by skilled telephone interviewers.

  • Provide a thank-you gift of up to $50 in gift cards, administered incrementally per survey ($20 for baseline, $15 for interim, and $15 for final), to encourage participants to enroll and to continue participation through the data collection period.

4. Test of Procedures or Methods to be Undertaken

Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.

The instruments for this study were pretested between November 2013 and March 2014 (see Appendix EEEE). All testing was done with nine or fewer respondents per unique instrument (see Table B4.1). Instruments to be administered in Spanish (Site and Participant Surveys) were translated and pretested with Spanish-speaking individuals. Using cognitive testing methodology, respondents from the target audience were asked to complete the survey instrument, and the interviewer then asked debriefing questions to assess whether each question's intent was clear, the terminology well defined, and the response options unambiguous. The pretests also provided estimates of participant burden. The instruments were revised, as needed, based on the pretest findings.

Table B4.1. Number of Pretests by Instrument

| Instrument | Number |
|---|---|
| Phase I | |
| Local Agency Survey | 3 |
| Site Survey | 5 (3 English, 2 Spanish) |
| Site interviews | 5 |
| Phase II | |
| Participant Surveys | 9 (5 English, 4 Spanish) |
| Moderator guide for participant focus groups | 3 |
| Nutrition Educator Survey | 3 |
| Administrative data request (State agencies and sites) | 6 |
| Forms for observations of nutrition education delivery | 2 |



5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.

RTI, Altarum Institute, Atkins Center for Weight and Health/University of California at Berkeley, and FNS staff consulted on statistical and other aspects of the design (see Table B5.1). The same staff will be responsible for collecting and analyzing the study’s data. Comments from the public were also incorporated.

Table B5.1. Individuals Consulted on Data Collection or Analysis

| Name | Affiliation | Phone Number | Email Address |
|---|---|---|---|
| James Hersey, PhD | RTI International | 202-728-2486 | [email protected] |
| Sheryl Cates | RTI International | 919-541-6810 | [email protected] |
| Celia Eicheldinger, MS | RTI International | 919-541-6222 | [email protected] |
| Karl Krotki, PhD | RTI International | 202-728-2485 | [email protected] |
| Linnea Sallack, MPH, RD | Altarum Institute | 405-310-4775 | [email protected] |
| Lorrene Ritchie, PhD | University of California at Berkeley | 510-489-8483 | [email protected] |
| David Hancock | USDA/NASS | 202-690-2388 | [email protected] |
| Karen Castellanos-Brown, MSW, PhD | USDA/FNS | 703-305-2732 | [email protected] |
| Melissa Abelev, PhD | USDA/FNS | 703-305-2209 | [email protected] |



REFERENCES

Contento, I., Balch, G. I., Bronner, Y. L., Paige, D. M., Gross, S. M., Bisignani, L., … Swadener, S. S. (1995). Nutrition education policy, programs, and research: A review of research. Journal of Nutrition Education, 27(6), 279–418.

Gerstein, D. E., Martin, A. C., Crocker, N., Reed, H., Elfant, M., & Crawford, P. (2010). Using learner-centered education to improve fruit and vegetable intake in California WIC participants. Journal of Nutrition Education and Behavior, 42(4), 216–224.

1 This stratum includes Washington, DC.

2 We selected low-fat milk as an example for this illustration because we could draw on data from a prior study in developing a scenario for the expected magnitude of change. Although this illustration may be helpful in planning, the analysis of effects on low-fat milk will be tailored to the actual WIC requirement in the State agency and the age of the WIC participant. Current WIC food package rules require whole milk for children between 1 and 2 years old and then either 2 percent, 1 percent, or nonfat milk for children over age 2. Several states have set policy allowing for only 1 percent or nonfat after age 2. Hence, our actual analysis about effects on consumption of low-fat or reduced fat milk will be adapted to reflect the requirements of the State agencies where the pilot sites are located.

