Appendix E.

Statistical Methods for the
Impact Evaluation



This appendix provides a detailed description of the statistical methods that will be used by the contractor, RTI International, to collect and analyze the data for the impact evaluations of the four demonstration projects.

CNNS Impact Evaluation

Respondent Universe

The study population is parents/caregivers of first to third grade children attending elementary school in two Oklahoma counties – Pontotoc and Bryan. The schools in these counties have similar percentages of Native American students and similar percentages of students who receive free and reduced price meals.

Procedures for the Collection of Information

Statistical Methodology for Stratification and Sample Selection

Due to logistical and cost constraints, the implementing agency, CNNS, is unable to provide the intervention to schools outside of Pontotoc County. To provide the most rigorous design possible under this constraint, we have identified Bryan County, a neighboring county that is similar on dimensions of interest to the program. Using a quasi-experimental research design, we matched schools in Pontotoc County to schools in Bryan County. Matching was accomplished with an algorithm based on three variables describing school characteristics: the percentage of Native American students, the percentage of students receiving free and reduced-price meals, and school size. Variables were weighted according to importance, and a distance value (Dij) between each school in the treatment group (i = 1–5) and each school in the control group (j = 1–5) was generated. The algorithm applies the following formula:

D_{ij} = w_1\,\mathrm{Abs}(\mathrm{FARM}_i - \mathrm{FARM}_j) + w_2\,\mathrm{Abs}(\%\mathrm{NA}_i - \%\mathrm{NA}_j) + w_3\,\mathrm{Abs}(\mathrm{SS}_i - \mathrm{SS}_j),

where D_{ij} is the distance value between schools i and j; "Abs" indicates the absolute value; FARM indicates the percentage of students receiving free and reduced-price meals; %NA indicates the percentage of Native American students; SS indicates school size; and w_1, w_2, and w_3 are the weights assigned to each variable according to its importance. For each intervention school i, the control school with the lowest distance value is deemed the best match. If two intervention schools are matched to the same control school, the difference between the best-match D_{ij} and the second-best-match D_{ij'} is computed for each. The intervention school with the larger difference retains the best match, and the other intervention school is assigned to its second-best match.
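To illustrate the mechanics of the algorithm, the sketch below computes the distance values and selects the best match for each intervention school. It is a minimal illustration only: the weights are placeholder values (the actual weights reflect the relative importance assigned by the evaluation team), and the tie-breaking rule for duplicate matches described above is omitted.

```python
# Hypothetical sketch of the school-matching algorithm described above.
# The weights are placeholders, not the values used by the evaluation team.

# School characteristics: (% FARM, % Native American, enrollment size)
treatment = {"Francis": (71, 45, 109), "Homer": (61, 43, 255)}
control = {"Silo": (75, 40, 114), "Northwest Heights": (61, 27, 279)}

WEIGHTS = (1.0, 1.0, 0.01)  # placeholder importance weights (w1, w2, w3)

def distance(a, b, w=WEIGHTS):
    """Weighted sum of absolute differences between two schools."""
    return sum(wk * abs(x - y) for wk, x, y in zip(w, a, b))

# Compute D_ij for every treatment-control pair, then keep the best match.
matches = {}
for t_name, t_vals in treatment.items():
    d = {c_name: distance(t_vals, c_vals) for c_name, c_vals in control.items()}
    matches[t_name] = min(d, key=d.get)

print(matches)
```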

Table E.1 shows the matched pairs of schools in the intervention and control groups and provides descriptive information on each school. We will survey parents/caregivers of students before and after the intervention to collect information on the key outcomes of interest.



Table E.1. Descriptive Information on the Intervention and Control Groups for the Eagle Adventure Program Evaluation

Intervention Group (Pontotoc County)            Control Group (Bryan County)

School     Size   % FARM   % NA      School               Size   % FARM   % NA
Francis    109    71       45        Silo                 114    75       40
Homer      255    61       43        Northwest Heights    279    61       27
Allen       85    67       54        Calera               132    68       37
Vanoss      97    83       30        Ward Elementary      135    87       29
Roff        77    71       21        Washington Irving    252    63       27
Total      623    --       --        Total                912    --       --


Estimation and Analysis Procedures

We will assess the pre-intervention equivalence of the intervention and control groups based on statistical analysis of the pre-intervention survey data. We will generate frequencies and means and produce tables, including simple tests of association (e.g., t-tests, chi-square tests). In addition to demographic and socio-ecological variables, we will assess baseline levels for the key outcome measures. Factors that are significantly different will become candidate control variables for subsequent statistical assessment.

We will also assess the pre-intervention similarity between study participants who provide post-intervention data and those who do not. This is accomplished by fitting logistic regression models that relate the variables of interest to an indicator differentiating participants who complete the program from those who do not (program drop-outs). The results of this analysis provide odds ratios comparing non-participants with participants on each variable, highlighting any association between a variable of interest and the likelihood of completing the intervention and providing data at the post-intervention survey. If significant differences are found, a dummy indicator can be constructed to account for any bias that may be associated with program drop-outs.
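As a sketch of this attrition check (using Python's statsmodels, with hypothetical column names and simulated data), the completion indicator can be modeled as a function of baseline variables and the fitted coefficients exponentiated to obtain odds ratios:

```python
# Illustrative attrition analysis (sketch): model the probability of providing
# post-intervention data as a function of baseline variables, then report odds
# ratios. Column names and values are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "completed": rng.integers(0, 2, n),        # 1 = provided post-intervention data
    "baseline_fv": rng.normal(4.1, 2.9, n),    # baseline fruit/vegetable servings
    "child_age": rng.integers(6, 9, n),
})

model = smf.logit("completed ~ baseline_fv + child_age", data=df).fit(disp=False)
odds_ratios = np.exp(model.params)             # odds ratio for each variable
print(odds_ratios)
```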

We will apply difference-in-differences models that protect against hidden biases that could otherwise lead to erroneous estimation of program effects. We will begin by looking at the bivariate associations between outcome variables and treatment assignment. Next, we will conduct multivariate general and generalized linear model analyses. Guided by our preliminary analyses, we will include as control variables any factors that are not well balanced across the intervention and control groups.
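A minimal difference-in-differences sketch is shown below, assuming a stacked data set with one row per respondent per wave; the variable names and simulated values are placeholders, and the actual models will add the covariates identified in the baseline comparisons.

```python
# Minimal difference-in-differences sketch with stacked pre/post data.
# All names and values are illustrative placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400  # respondents per wave
df = pd.DataFrame({
    "treated": np.tile(rng.integers(0, 2, n), 2),   # 1 = intervention school
    "post": np.repeat([0, 1], n),                   # 0 = pre wave, 1 = post wave
    "school_id": np.tile(rng.integers(0, 10, n), 2),
})
df["fv_servings"] = (4.1 + 0.6 * df["treated"] * df["post"]
                     + rng.normal(0, 2.9, 2 * n))

# Cluster-robust standard errors by school.
did = smf.ols("fv_servings ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]})
print(did.params["treated:post"])  # difference-in-differences estimate
```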

The analysis will be conducted using models that properly account for the complex and nested structure of the data. Here, children are nested in schools, and schools are nested in conditions (intervention versus control), leading to multiple sources of random variation that need to be accounted for in the model. Our analyses will employ a nested cross-sectional model for paired data. In this model, schools are matched, and the number of matched pairs determines the denominator degrees of freedom for the test of the intervention effect. The test of the intervention effect assesses the variation among the adjusted condition means against the variation among the adjusted condition-by-pair means; the null hypothesis asserts that the variation due to condition is zero.
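One way to approximate this nested, paired structure is a linear mixed model with a random intercept for school and fixed effects for condition and matched pair. The sketch below uses the statsmodels MixedLM formula interface with placeholder names and simulated data; the production analysis may use an equivalent specification in another package.

```python
# Sketch of a nested cross-sectional model for paired data: children nested
# within schools, schools nested within matched pairs and conditions.
# All names and values are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for pair in range(5):                       # 5 matched pairs of schools
    for condition in (0, 1):                # 0 = control, 1 = intervention
        school_id = f"pair{pair}_cond{condition}"
        school_effect = rng.normal(0, 0.3)  # school-level random variation
        for _ in range(60):                 # roughly 60 children per school
            rows.append({"pair": pair, "condition": condition,
                         "school_id": school_id,
                         "fv_servings": 4.1 + 0.6 * condition + school_effect
                                        + rng.normal(0, 2.9)})
df = pd.DataFrame(rows)

# Random intercept for school; fixed effects for condition and matched pair.
mixed = smf.mixedlm("fv_servings ~ condition + C(pair)", data=df,
                    groups=df["school_id"]).fit()
print(mixed.params["condition"])            # adjusted intervention effect
```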

Finally, analyses will be conducted to deconstruct and contextualize main impact findings. For example, we could compare parents/caregivers on the basis of the number of exposures to the take-home newsletters. This approach transforms the key independent variable from an indicator (intervention versus control) to a continuous measure and affords the analysis greater sensitivity.

Degree of Accuracy Needed for the Purpose Described in the Justification

Table E.2 provides the sample design for the Eagle Adventure evaluation and our assumptions regarding response rates and attrition. We estimated the sample size allowing for a Type II error rate of 0.20 (yielding 80 percent statistical power) and a Type I error rate of 0.05 with a two-tailed test, with the aim of detecting a change in consumption of fruits and vegetables of 0.30 standard deviation units or more.

Table E.2. Sample Design for the Eagle Adventure Program Evaluation

Group          Number of    Number of    Number of Completed Surveys
               Schools      Children     Pre-Intervention Survey            Post-Intervention Survey
                                         (Parents/Caregivers)a              (Parents/Caregivers)b
Intervention   5            623          383                                318
Control        5            740c         455                                378

a Assumes that 82 percent will consent to providing contact information and a 75 percent response rate for the pre-intervention survey.
b Assumes an 83 percent response rate between the pre- and post-intervention surveys.
c Assumes subsampling of students from larger schools in Bryan County.

The Eagle Adventure Program evaluation is subject to a number of constraints on sample size determination. There are a total of 10 schools (5 treatment, 5 control) included in the study. School sizes vary but provide a maximum potential of 1,535 children in the first through third grades. Based on the assumed response rates and attrition (specified in Table E.2), we anticipate an average of 70 completed surveys per school for the post-intervention survey.

Appendix G provides our assumptions for sample size estimation; the assumptions include the minimum detectable effect, an estimate of the mean and standard deviation for the main outcome, an estimate of the intraclass correlation, and the reduction in the standard error due to characteristics of the statistical model (e.g., matching, use of repeated measures, and the inclusion of covariates). Additionally, we provide justification as to why these assumptions are realistic for the demonstration projects. Based on the characteristics of the Eagle Adventure Program outlined above, and the assumptions described in Appendix G, our proposed sample design will provide an 82 percent probability of detecting a statistically significant difference between the intervention and control groups if the realized increase in fruit and vegetable consumption is 0.61 servings of fruits and vegetables or greater. To the extent that we have overestimated the intraclass correlation coefficient (ICC) or underestimated the benefits of correlated measures and covariate adjustment, statistical power will improve.
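The logic of the power calculation for a clustered design can be sketched as follows. The intraclass correlation, cluster size, and per-arm sample size below are placeholders, and the sketch ignores the additional variance reductions from matching, repeated measures, and covariate adjustment, so it will not reproduce the 82 percent figure cited above.

```python
# Sketch of the power calculation logic for a cluster (school-based) design.
# ICC, cluster size, and sample sizes are illustrative placeholders; the
# evaluation's actual assumptions are documented in the cited appendix.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.30      # minimum detectable effect in standard deviation units
n_per_arm = 350         # approximate completed post-intervention surveys per arm
icc = 0.02              # assumed intraclass correlation among children in a school
cluster_size = 70       # average completed surveys per school

# Inflate the variance by the design effect, i.e., shrink the effective n.
design_effect = 1 + (cluster_size - 1) * icc
effective_n = n_per_arm / design_effect

power = TTestIndPower().power(effect_size=effect_size, nobs1=effective_n,
                              alpha=0.05, ratio=1.0, alternative="two-sided")
print(round(power, 3))
```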



University of Nevada Impact Evaluation

Respondent Universe

The study population is parents/caregivers of preschool children (ages 3 to 4) attending Acelero Head Start Centers in Las Vegas, Nevada.

Procedures for the Collection of Information

Statistical Methodology for Stratification and Sample Selection

For the independent impact evaluation of the All 4 Kids intervention, we will employ a quasi-experimental research design. Table E.3 shows the centers that will be available for assignment and provides descriptive information for each. The 12 centers will be assigned to either the intervention or control group by the Altarum/RTI team with the assistance of the Implementing Agency (IA).

For this evaluation, a fully randomized design is not appropriate because two of the centers (Martin Luther King and PDC) have already been exposed to the intervention and need to be assigned to the intervention condition. Among the remaining centers, assignment to condition will be random. Because of uniform enrollment criteria, all Acelero Head Start Centers in Las Vegas are assumed to be similar. This assumption will be evaluated prior to random assignment based on enrollment data that will be available in the late summer/early fall of 2009. Selection will also take geographic proximity into account to reduce the likelihood of program spillover.
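A simple sketch of this constrained assignment is shown below; the center names come from Table E.3, while the random seed is arbitrary and the geographic-proximity adjustment described above is not implemented.

```python
# Sketch of the constrained assignment: the two previously exposed centers are
# forced into the intervention group and the remaining centers are assigned at
# random so that each condition has 6 centers. Seed is illustrative only.
import random

random.seed(44)

pre_exposed = ["Martin Luther King", "PDC"]
remaining = ["Owens", "Reynaldo Martinez", "Spring Valley", "Henderson",
             "Cecile Walnut", "East Carey", "Yvonne Atkinson Gates 951",
             "Stewart", "Sunflower", "Herb Kaufman"]

extra_intervention = random.sample(remaining, 4)
intervention = pre_exposed + extra_intervention
control = [c for c in remaining if c not in extra_intervention]

print("Intervention:", intervention)
print("Control:", control)
```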

Within each center, the IA will purposively select three classrooms. To collect information on our outcome measures, we will survey parents/caregivers of children who participate in the All 4 Kids program before and after the intervention.



Table E.3. Descriptive Information on the Centers Available for Assignment for the All 4 Kids Program Evaluation

                                       Number of Classes
Center                      Full Day   Morning Only   Afternoon Only   Total   Number of Children Enrolled   Zone
Martin Luther King*         2          3              3                8       142                           North
PDC*                        2          1              1                4       57                            Central
Owens                       4          2              2                8       101                           North
Reynaldo Martinez           0          5              5                10      166                           Central
Spring Valley               4          6              6                16      265                           Central
Henderson                   4          5              5                14      237                           South
Cecile Walnut               0          4              4                8       136                           North
East Carey                  0          3              3                6       90                            North
Yvonne Atkinson Gates 951   0          3              3                6       102                           North
Stewart                     3          1              1                5       77                            South
Sunflower                   0          4              4                8       136                           South
Herb Kaufman                2          2              2                6       103                           South

*Indicates a center previously exposed to the All 4 Kids Program



Estimation and Analysis Procedures

We will assess the pre-intervention equivalence of the intervention and control groups based on statistical analysis of the pre-intervention survey data. We will generate frequencies and means and produce tables, including simple tests of association (e.g., t-tests, chi-square tests). In addition to demographic and socio-ecological variables, we will assess baseline levels for the key outcome measures. Factors that are significantly different will become candidate control variables for subsequent statistical assessment.

We will also assess the pre-intervention similarity between study participants who provide post-intervention data and those who do not. This is accomplished by fitting logistic regression models that relate the variables of interest to an indicator differentiating participants who complete the program from those who do not (program drop-outs). The results of this analysis provide odds ratios comparing non-participants with participants on each variable, highlighting any association between a variable of interest and the likelihood of completing the intervention and providing data at the post-intervention survey. If significant differences are found, a dummy indicator can be constructed to account for any bias that may be associated with program drop-outs.

We will apply difference-in-differences models that protect against hidden biases that could otherwise lead to erroneous estimation of program effects. We will begin by looking at the bivariate associations between outcome variables and treatment assignment. Next, we will conduct multivariate general and generalized linear model analyses. Guided by our preliminary analyses, we will include as control variables any factors that are not well balanced across the intervention and control groups.

The analysis will be conducted using models that properly account for the complex and nested structure of the data. Here, children are nested in centers, and centers are nested in conditions (intervention versus control), leading to multiple sources of random variation that need to be accounted for in the model. Our analyses will employ a nested cross-sectional model for paired data. In this model, centers are matched, and the number of matched pairs determines the denominator degrees of freedom for the test of the intervention effect. The test of the intervention effect assesses the variation among the adjusted condition means against the variation among the adjusted condition-by-pair means; the null hypothesis asserts that the variation due to condition is zero.

Finally, analyses will be conducted to deconstruct and contextualize main impact findings. For example, we could compare parents/caregivers who attended all of the Family Activity Events to those who missed some sessions or did not attend any. This approach transforms the key independent variable from an indicator (intervention versus control) to a continuous measure and affords the analysis greater sensitivity.

Degree of Accuracy Needed for the Purpose Described in the Justification

Table E.4 shows the target number of completed surveys with parents/caregivers. We estimated the sample size allowing for a Type II error rate of 0.20 (yielding 80 percent statistical power) and a Type I error rate of 0.05 with a two-tailed test, with the aim of detecting a change in consumption of fruits and vegetables of 0.30 standard deviation units or more. The retention rate of 80 percent between the pre- and post-intervention surveys is based on information provided by the IA.

Table E.4. Sample Design for the All 4 Kids Program Evaluation

Group          Number of    Number of     Number of Completed Surveys
               Centers      Children*     Pre-Intervention Survey            Post-Intervention Survey
                                          (Parents/Caregivers)**             (Parents/Caregivers)***
Intervention   6            360           300                                240
Control        6            360           300                                240

* Assumes 3 classrooms per center, with an average of 20 students per classroom.
** Assumes an 83 percent response rate for the pre-intervention survey.
*** Assumes an 80 percent response rate between the pre- and post-intervention surveys.

The All 4 Kids Program evaluation is subject to a number of constraints on sample size determination. There are a total of 12 Head Start centers (6 intervention and 6 control) included in the study. Center sizes vary from 57 to 265 preschool children. Assuming an 80 percent retention rate, we anticipate an average of 40 completed surveys per center for the post-intervention survey.

Appendix G provides our assumptions for sample size estimation and the justification for these assumptions. Based on the characteristics of the All 4 Kids Program outlined above, and the assumptions described in Appendix G, our proposed sample design will provide an 83 percent probability of detecting a statistically significant difference between the intervention and control groups if the realized increase in fruit and vegetable consumption is 0.61 servings of fruits and vegetables or greater. To the extent that we have overestimated the ICC or underestimated the benefits of correlated measures and covariate adjustment, statistical power will improve.

NYSDOH Impact Evaluation

Respondent Universe

The study population is preschool children in 3- and 4-year-old classes, and their parents/caregivers, attending approximately 156 low-income Child and Adult Care Food Program (CACFP) childcare centers throughout New York State, including the New York City (NYC) boroughs.

Procedures for the Collection of Information

Statistical Methodology for Stratification and Sample Selection

Randomization of CACFP centers to intervention and control conditions will be conducted by the RTI/Altarum team. The sampling frame will be based on the list of approximately 156 CACFP centers identified by the NYSDOH IAs for receipt of the EWPHCCS program. The number of pairs of centers in each stratum will depend upon their proportional distribution in NYC and the remainder of the State. Pairs of centers will then be created in a two-step process. First, ineligible centers will be identified and removed from the pool of all potentially available centers. Ineligible centers will be defined according to the following exclusion rules:

  1. Any center promised the program during the first (July-August) or second (September-October) cycle.

  2. Any center enrolling fewer than 35 3- and 4-year-olds. These centers are unlikely to provide a sufficient number of parent respondents at the follow-up assessment for us to achieve an adequate sample size.

  3. Any center that does not agree to participate in the randomization process.

From among the remaining centers, matches will be based on type of CACFP center (i.e., Head Start centers will be matched to other Head Start centers, and non-Head Start centers will be matched to non-Head Start centers), geography (within county, when possible), and center size. Within each stratum, pairs will be ordered using random number generation, with the pair assigned the lowest random number allocated to the first position, the pair assigned the second-lowest number allocated to the second position, and so on. The first six pairs in each stratum will be selected for inclusion. Within each pair, a second random number assignment will determine which center receives the EWPHCCS program; the center assigned the lowest random number in each pair will receive the program, while the other center will be placed in a wait-list control condition. Both centers in the pair must agree to honor the results of the random assignment process. Centers in the wait-list control condition must agree not to implement similar programming during the evaluation period. If these conditions are not met, the pair of centers will be removed from consideration and the next pair on the list will be contacted.
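The two-stage random ordering and within-pair assignment can be sketched as follows; the strata, pair labels, and center names are placeholders rather than actual CACFP centers.

```python
# Sketch of the pair-ordering and within-pair assignment procedure described
# above. Stratum names and center names are illustrative placeholders.
import random

random.seed(2010)  # illustrative seed so the sketch is reproducible

strata = {
    "NYC": [("centerA1", "centerA2"), ("centerB1", "centerB2")],
    "RestOfState": [("centerC1", "centerC2")],
}

assignments = {}
for stratum, pairs in strata.items():
    # Order the pairs by random number; the lowest number takes the first position.
    ordered = sorted(pairs, key=lambda _pair: random.random())
    for pair in ordered[:6]:                   # first six pairs per stratum
        draws = {center: random.random() for center in pair}
        program = min(draws, key=draws.get)    # lowest draw receives EWPHCCS
        control = next(c for c in pair if c != program)
        assignments[pair] = {"EWPHCCS": program, "wait_list": control}

print(assignments)
```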

To collect information on our outcome measures, we will survey parents/caregivers of children who participate in the program before and after the intervention.

Estimation and Analysis Procedures

We will assess the pre-intervention equivalence of the intervention and control groups based on statistical analysis of the pre-intervention survey data. We will generate frequencies and means and produce tables, including simple tests of association (e.g., t-tests, chi-square tests). In addition to demographic and socio-ecological variables, we will assess baseline levels for the key outcome measures. Factors that are significantly different will become candidate control variables for subsequent statistical assessment.

We will also assess the pre-intervention similarity between study participants who provide post-intervention data and those who do not. This is accomplished by fitting logistic regression models that relate the variables of interest to an indicator differentiating participants who complete the program from those who do not (program drop-outs). The results of this analysis provide odds ratios comparing non-participants with participants on each variable, highlighting any association between a variable of interest and the likelihood of completing the intervention and providing data at the post-intervention survey. If significant differences are found, a dummy indicator can be constructed to account for any bias that may be associated with program drop-outs.

We will apply difference-in-differences models that protect against hidden biases that could otherwise lead to erroneous estimation of program effects. We will begin by looking at the bivariate associations between outcome variables and treatment assignment. Next, we will conduct multivariate general and generalized linear model analyses. Guided by our preliminary analyses, we will include as control variables any factors that are not well balanced across the intervention and control groups.

The analysis will be conducted using models that properly account for the complex and nested structure of the data. Here, children are nested in centers, and centers are nested in conditions (intervention versus control), leading to multiple sources of random variation that need to be accounted for in the model. Our analyses will employ a nested cross-sectional model for paired data. In this model, centers are matched, and the number of matched pairs determines the denominator degrees of freedom for the test of the intervention effect. The test of the intervention effect assesses the variation among the adjusted condition means against the variation among the adjusted condition-by-pair means; the null hypothesis asserts that the variation due to condition is zero.

Finally, analyses will be conducted to deconstruct and contextualize main impact findings. For example, we could compare parents/caregivers who attended all educational sessions to those who missed some sessions or did not attend any. This approach transforms the key independent variable from an indicator (intervention versus control) to a continuous measure and affords the analysis greater sensitivity.

Degree of Accuracy Needed for the Purpose Described in the Justification

Table E.5 shows the sample design for the evaluation of the EWPHCCS Program. We estimated the sample size allowing for a Type II error rate of 0.20 (yielding 80 percent statistical power) and a Type I error rate of 0.05 with a two-tailed test, with the aim of detecting a change in consumption of fruits and vegetables of 0.30 standard deviation units or more.

The EWPHCCS Program evaluation is subject to a number of constraints on sample size determination. Center sizes will vary; however, based on information provided by the IA and the response and attrition rates noted in Table E.5, we anticipate an average of 23 completed surveys per center for the post-intervention survey.

Appendix G provides our assumptions for sample size estimation and the justification for these assumptions. Based on the characteristics of the EWPHCCS Program outlined above and the assumptions described in Appendix G, our proposed sample size of 24 centers (12 intervention and 12 control) will provide a 97.3 percent probability of detecting a statistically significant difference between the intervention and control groups if the realized increase in fruit and vegetable consumption is 0.61 servings of fruits and vegetables or greater. To the extent that we have overestimated the ICC or underestimated the benefits of correlated measures and covariate adjustment, statistical power will improve.

Table E.5. Sample Design for the EWPHCCS Program Evaluation

Group          Number of    Number of     Number of Completed Surveys
               Centers      Children*     Pre-Intervention Survey            Post-Intervention Survey
                                          (Parents/Caregivers)**             (Parents/Caregivers)***
Intervention   12           720           393                                275
Control        12           720           393                                275

* Assumes two classrooms per center, with an average of 30 children per classroom.
** Assumes that 78 percent will consent to providing contact information and a 70 percent response rate for the pre-intervention survey.
*** Assumes a 70 percent response rate between the pre- and post-intervention surveys.

PSU Impact Evaluation

Respondent Universe

The audience for the About Eating Program is SNAP-eligible women, ages 18 to 45, living in one of the 34 Pennsylvania counties not served by SNAP-Ed or one of the 6 counties where service consists only of County Assistance Office activities conducted by the Pennsylvania Nutrition Education Network. Participants must be English literate and have access to the Internet. Persons with conditions affecting eating competence will be excluded from the study; these conditions include poor health (e.g., a diagnosis of diabetes, cancer, heart disease, or lung disease within the past 5 years), current pregnancy or breastfeeding, and full-time study of nutrition.


In the 40 targeted counties, persons eligible for SNAP will be recruited by PSU through the use of Pennsylvania Department of Welfare SNAP databases and postings in County Assistance Offices, as well as venues such as laundromats, job services, and discount stores. Postings will include an email address that potential participants can use to indicate interest. SNAP participants identified on Pennsylvania Department of Welfare lists will be sent an email notification (if an email address is available) or a postcard that includes the project coordinator's email address so that they can inquire about the study.


Procedures for the Collection of Information

Statistical Methodology for Stratification and Sample Selection

We will employ the same research design being used by PSU to evaluate the About Eating Program. As shown in Table E.6, the study design includes three intervention arms and one comparison arm. Participants who express interest in the study and meet the eligibility criteria will be randomly assigned to one of the intervention arms or to the comparison group, with stratification by rural/urban county location and EFNEP participation. Participants in the intervention arms will complete the Web-based About Eating Program. Participants in the comparison group will receive the link to the USDA SNAP-Ed Connection Web site and will receive the link to the About Eating Program after completing the study.

Table E.6. Research Design for the About Eating Program Evaluation

Group                                              Treatment

Intervention (stratified by county location        About Eating Web-based module:
and EFNEP participation)
  30 min daily physical activity*                  Five-lesson module, self-selected order, evaluation post-module
  <30 min daily physical activity                  Five-lesson module with physical activity lesson last, evaluation post-fourth lesson
  30 min daily physical activity                   Five-lesson module with physical activity lesson last, evaluation post-module

Comparison                                         Selection from USDA SNAP-Ed Connection Web site

* PSU may change this measure to be consistent with current USDA guidelines for physical activity.





Consistent with the PSU design, we plan to conduct a pre-intervention survey before the intervention begins and a post-intervention survey at its conclusion.
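A sketch of the stratified random assignment is shown below. The arm labels, stratum definitions, and the equal-allocation rotation are illustrative assumptions; the actual allocation procedure and ratios are determined by PSU's design.

```python
# Sketch of stratified random assignment of individual participants to the
# About Eating arms or the comparison group. Labels are placeholders.
import random

random.seed(7)  # illustrative seed

ARMS = ["arm_self_selected", "arm_pa_last_post4", "arm_pa_last_postmodule",
        "comparison"]

participants = [
    {"id": 1, "rural": True,  "efnep": False},
    {"id": 2, "rural": False, "efnep": True},
    {"id": 3, "rural": True,  "efnep": True},
    {"id": 4, "rural": False, "efnep": False},
]

# Group participants into strata defined by rural/urban status and EFNEP
# participation, then shuffle and deal arms in rotation within each stratum.
strata = {}
for p in participants:
    strata.setdefault((p["rural"], p["efnep"]), []).append(p)

for stratum, members in strata.items():
    random.shuffle(members)
    for i, p in enumerate(members):
        p["assignment"] = ARMS[i % len(ARMS)]

print(participants)
```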

Estimation and Analysis Procedures

We will assess the pre-intervention equivalence of the intervention and control groups based on statistical analysis of the pre-intervention survey data. We will generate frequencies and means and produce tables, including simple tests of association (e.g., t-tests, chi-square tests). In addition to demographic and socio-ecological variables, we will assess baseline levels for the key outcome measures. Factors that are significantly different will become candidate control variables for subsequent statistical assessment.

We will also assess the pre-intervention similarity between study participants who provide post-intervention data and those who do not. This is accomplished by fitting logistic regression models that relate the variables of interest to an indicator differentiating participants who complete the program from those who do not (program drop-outs). The results of this analysis provide odds ratios comparing non-participants with participants on each variable, highlighting any association between a variable of interest and the likelihood of completing the intervention and providing data at the post-intervention survey. If significant differences are found, a dummy indicator can be constructed to account for any bias that may be associated with program drop-outs.

We will apply difference-in-differences models that protect against hidden biases that could otherwise lead to erroneous estimation of program effects. We will begin by looking at the bivariate associations between outcome variables and treatment assignment. Next, we will conduct multivariate general and generalized linear model analyses. Guided by our preliminary analyses, we will include as control variables any factors that are not well balanced across the intervention and control groups.

Analyses will also be conducted to deconstruct and contextualize the main impact findings. For example, these analyses will include comparisons based on dosage, such as between participants who completed all five online lessons and those who did not. This approach transforms the key independent variable from an indicator (intervention versus control) to a continuous measure and affords the analysis greater sensitivity.

We will also assess whether improvements in eating competence (as measured by the ecSI/LI) are associated with improvements in intake of fruits, vegetables, and low-fat dairy. Eating competence is a behavioral and attitudinal conceptualization of eating characterized by higher levels of comfort, flexibility, and efficacy surrounding dietary intake (Lohse, Satter, et al., 2007). It has been suggested that individuals with higher levels of eating competence have higher quality diets, including a higher intake of fruits and vegetables. This hypothesis will be assessed using linear regression analyses that regress ecSI/LI scores on an index that measures fruit and vegetable intake. The model will control for baseline levels of eating competence and selected demographic variables.
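A sketch of this regression (using hypothetical column names and simulated data) is shown below; it regresses post-intervention ecSI/LI scores on the fruit and vegetable index while controlling for baseline eating competence and a demographic covariate.

```python
# Sketch of the regression linking eating competence and fruit/vegetable
# intake. Column names and values are illustrative placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "ecsi_post": rng.normal(30, 8, n),       # post-intervention ecSI/LI score
    "ecsi_baseline": rng.normal(28, 8, n),   # baseline eating competence
    "fv_index": rng.normal(4.1, 2.9, n),     # fruit/vegetable checklist index
    "age": rng.integers(18, 46, n),
})

model = smf.ols("ecsi_post ~ fv_index + ecsi_baseline + age", data=df).fit()
print(model.params["fv_index"])  # association between intake and competence
```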

Degree of Accuracy Needed for the Purpose Described in the Justification

For the external evaluation of the About Eating Program, our main outcome and the focus of sample size estimation is the self-reported change in consumption of fruits and vegetables. We begin with mean and standard deviation estimates from a trial that collected data from 3,122 women participating in Maryland's WIC 5 A Day program. In this study population, mean fruit and vegetable consumption was 4.1 servings per day, with a standard deviation of 2.9 servings (Havas, Treiman, Langenberg, Ballesteros, et al., 1998). With the aim of detecting a change in consumption of fruits and vegetables of 0.30 standard deviation units or more, the About Eating Program is expected to produce a realized change among intervention participants of +0.87 servings of fruits and vegetables per day.
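This target follows directly from the baseline estimates: 0.30 standard deviation units × 2.9 servings per day ≈ 0.87 servings per day.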

The primary outcome variable will be the number of servings of fruits and vegetables, estimated from data gathered using the Fruit and Vegetable Checklist (Townsend et al., 2003). Due to concerns of the IA's Principal Investigator, we will not collect dietary intake data at baseline. Accordingly, the specified model will compare post-intervention means between intervention and control participants, adjusted for a baseline measure of food preference. Food preference is an acceptable proxy that has been shown to have an average correlation of approximately 0.40 with dietary intake (Drewnowski and Hann, 1999).

Table E.7 provides the sample design for the evaluation, which targets 145 completed surveys in each arm of the trial at post-intervention. We estimated the sample size allowing for a Type II error rate of 0.20 (yielding 80 percent statistical power) and a Type I error rate of 0.05 with a two-tailed test.

Table E.7. Sample Design for the About Eating Program Evaluation: Number of Completed Surveys


Group          Pre-Intervention Survey   Post-Intervention Survey*
Intervention   181                       145
Control        181                       145

* Assumes an 80 percent response rate, with 65 percent completed by Internet and 15 percent by mail/telephone.

Based on the characteristics of the study outlined above, the evaluation will provide a 91% probability of detecting a statistically significant difference between the intervention and control groups at the post-intervention period so long as the realized difference is 0.87 servings of fruits and vegetables per day or greater. The follow-up survey of the intervention group, when compared to the intervention group’s post-intervention responses, will provide a measure of the extent to which the impact of the intervention is sustained.





References

Drewnowski, A., and Hann, C. (1999). Food Preferences and Reported Frequencies of Food Consumption as Predictors of Current Diet in Young Women. American Journal of Clinical Nutrition 70:28-36.

Havas, S., Treiman, K., Langenberg, P., Ballesteros, M., Anliker, J., Damron, D., and Feldman, R. (1998). Factors Associated with Fruit and Vegetable Consumption Among Women Participating in WIC. Journal of the American Dietetic Association 98:1141-1148.

Lohse, B., Satter, E., Horacek, T., Gebreselassie, T., and Oakland, M.J. (2007). Measuring Eating Competence: Psychometric Properties and Validity of the ecSatter Inventory. Journal of Nutrition Education and Behavior 39(5S):S154-S166.

Townsend, M.S., Kaiser, L., Allen, L., Block Joy, A., and Murphy, S. (2003). Selecting Items for a Food Behavior Checklist for a Limited-Resource Audience. Journal of Nutrition Education and Behavior 35:69-82.
