CFSAN Usability Study for Feedback on Consumer Food Safety Educator's Planning and Evaluation Toolkit

Generic Clearance for the Collection of Qualitative Feedback on Food and Drug Administration Service Delivery

2016 06 24 Draft Eval Toolkit-062416




OMB No: 0910-0697 Expiration Date: 09/30/2017


Paperwork Reduction Act Statement: The public reporting burden for this collection of information has been estimated to average 120 minutes per response for reviewing this document. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to [email protected].




Draft: Planning and Evaluation 101 – A Toolkit for Consumer Food Safety Educators



Table of Contents

Chapter 1: Overview and Importance of Evaluation



Consumer food safety education and evaluation

Foodborne illness is a serious public health problem in the United States, affecting 48 million people and causing 127,839 hospitalizations and 3,037 deaths each year [5,8]. Cases of foodborne illness come with heavy economic costs, totaling approximately $51-$77.7 billion each year [6,8]. These costs include medical and hospital bills, lost work productivity, costs of lawsuits, legal fees, and a loss of sales and consumers [2,3]. Foodborne illness also takes an emotional toll, burdening family members who care for sick friends and relatives or experience the loss of a loved one [3].



Microbial risk due to improper food preparation and handling by consumers in the home is a preventable cause of foodborne illness. Health educators across the nation have engaged in educational activities to increase the public’s awareness and knowledge of safe food practices and the risk of foodborne illness, and to promote safe food handling practices. However, a recent comprehensive assessment of consumer food safety education found several areas of concern [8]. The paper identified a lack of rigorous, evidence-based evaluation of educational program activities [8]. Recommendations were also made to improve research designs by ensuring that interventions focus on the specific needs of the target audience and address common influencers of consumer food safety practices, such as specific knowledge, perceived susceptibility, and access to resources [8].



To ensure that consumer food safety education programs are effective in achieving their goals and preventing foodborne illness, it is important that they incorporate a rigorous and thorough program evaluation. In addition, planning of interventions must be strategic, evidence-based, and tailored specifically for target audiences. This toolkit was created to serve as a guide with tips, tools, and examples to help consumer food safety educators develop and evaluate their programs and activities.



Why evaluate?

Evaluating your program is important and beneficial to its overall success. For example:

  • An evaluation can help you identify the strengths and weaknesses of your program, learn from mistakes, and allow you to continuously refine and improve program strategies.

  • By using evaluation data to improve program practices you can ensure that resources are utilized as efficiently and effectively as possible.

  • Evaluation data can provide program staff with valuable insight to help them understand the impact of the program, the audience they are serving, and the role they can play to contribute to the program’s success.

  • Without a program evaluation, there is no way to really know what kind of impact or effect your program or activities are having.

  • Conducting an evaluation can help you monitor the program and ensure accountability.

  • Sharing what you learned from the evaluation with other consumer food safety educators can help them design their own programs more effectively.

  • Having documentation and data that show how your program works can help you receive continued or new funding. Evaluation data are usually expected when applying for grants.

  • Demonstrating that your program has an impact can help increase support of activities by other researchers, educators, and the greater community.

  • Showing the target audience how your program works and is effective can help increase interest and participation.



Evaluation standards

The Joint Committee on Standards for Educational Evaluation (JCSEE) has identified five attributes of standards to ensure that evaluations of educational programs are ethical, feasible, and thorough. The five attributes (Utility, Feasibility, Propriety, Accuracy, and Evaluation Accountability) each include a list of standards that are important to consider when evaluating your program.



Below are brief descriptions of each attribute and examples of the standards they include [4,7,9]:



Utility standards help ensure that program evaluations are informative, influential, and provide the target audience or stakeholders with valuable and relevant information. These standards require evaluators to thoroughly understand the target audiences’ needs and to address them in the evaluation.



  • Examples of standards related to Utility are – Attention to Stakeholders, Explicit Values, Timely and Appropriate Communicating and Reporting, and Concern for Consequences and Influence.



Feasibility standards are intended to ensure that evaluation designs are able to function effectively in field settings and that the evaluation process is efficient, realistic, and frugal.



  • Examples of standards related to Feasibility are – Project Management, Practical Procedures, and Resource Use.



Propriety standards protect the rights of individuals who might be affected by the evaluation and support practices that are fair, legal, and just. They ensure that the evaluators are respectful and sensitive to the people they work with and that they follow relevant laws.



  • Examples of standards related to Propriety are – Formal Agreements, Human Rights and Respect, Clarity and Fairness, and Conflicts of Interest.



Accuracy standards ensure that evaluation data, technical information, and reporting are accurate in assessing the program’s worth or merit.



  • Examples of standards related to Accuracy are Justified Conclusions and Decisions, Valid information, Reliable Information, and Explicit Evaluation Reasoning.



Evaluation Accountability standards encourage sufficient documentation of evaluation processes, program accountability, and an evaluation of the evaluation, referred to as a metaevaluation. This provides opportunities to understand the quality of the evaluation and for continuous improvement of evaluation practices.



  • Examples of standards related to Evaluation Accountability are – Evaluation Documentation and Internal Metaevaluation.



Click for a full list of the program evaluation standards and to learn more about them.

Keep these standards in mind as you plan your program and the evaluation. Remember, it is not only important to evaluate your education program, but also to think about the quality of your evaluation and to ensure that it is ethical, accurate, and thorough. In addition, a key factor for having a successful evaluation is to think about these standards and plan your evaluation before you implement your program, not after.



References

  1. Arnold, M. E. (2006). Developing evaluation capacity in extension 4-h field faculty - a framework for success. American Journal of Evaluation. 27(2), 257-269.

  2. Buzby, J. C., Roberts, T., Jordan Lin, C.-T., & MacDonald, J. M. (1996). Bacterial foodborne disease: medical costs and productivity losses. United States Department of Agriculture–Economic Research Service (USDA–ERS). Agricultural Economics Report No. AER-741, 100 pp.

  3. Nyachuba, D. G. (2010). Foodborne illness: is it on the rise? Nutrition Reviews, 68(5), 257-269.

  4. Sanders, J. (1994). The program evaluation standards: how to assess evaluations of educational programs (2nd ed.). Thousand Oaks, CA: Sage.

  5. Scallan, E., Griffin, P. M., Angulo, F. J., Tauxe, R. V., & Hoekstra, R. M. (2011). Foodborne illness acquired in the United States–unspecified agents. Emerging Infectious Diseases, 17, 16-22.

  6. Scharff, R. L. (2012). Economic burden from health losses due to foodborne illness in the United States. Journal of Food Protection, 75, 123-131.

  7. The Joint Committee on Standards for Educational Evaluation (JCSEE). (2016). Program evaluation standards statements. Retrieved from: http://www.jcsee.org/program-evaluation-standards-statements

  8. White Paper on Consumer Research and Food Safety Education. (DRAFT).

  9. Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.



Chapter 2: Formative Program Planning



Form a planning and evaluation team.

Before implementing and evaluating your program, it is important to thoughtfully and thoroughly plan out program and evaluation activities. Having a team focused on planning and evaluation can be beneficial throughout this process. A team can bring unique and diverse ideas to the table, ensure the needs of various stakeholders are being met, and allow tasks to be divided so they do not burden only one or two individuals.



Select a team leader

You may decide to select a planning and evaluation team leader to facilitate and coordinate team activities. Having a “go-to” person to coordinate activities and provide leadership throughout the development and implementation of the program can help things operate as smoothly as possible. The team leader can function as a point person to ensure that important decisions are made when necessary, without confusion about who has responsibility for final decisions. Remember, it is not the job of the team leader to do everything. The team leader has the responsibility to ensure important tasks are taken care of and completed, and this is usually done by delegating to other team members or staff. Below are important things to think about when selecting a team leader.



An effective team leader:

  • Understands the overarching goals and priorities of the program and what tasks must be accomplished.

  • Has strong interpersonal skills and is able to communicate and work effectively with staff and partners.

  • Is good at delegating responsibilities and tasks to others.

  • Is supportive and encouraging of others.

  • Is able to refer to other individuals with specific expertise in program development, evaluation or food safety for recommendations or insight.

  • Has a realistic understanding of program resources and limitations but also encourages staff and partners to be creative and innovative when planning and strategizing.

  • Understands and teaches the importance of utilizing research and evidence based strategies when implementing consumer food safety education campaigns and interventions.

  • Is flexible and able to acknowledge both the strengths and weaknesses of the program.

  • Uses evaluation data and participant, staff, and partner input to continuously improve the program.



Build partnerships

When forming your team, think about stakeholders or partners you might want to work with or have represented. Consider including a key informant or community expert from the target audience to provide valuable insight about the individuals you are trying to reach. Stakeholders may include individuals who will help implement the program, those whom the program aims to serve (your target audience), and individuals who make decisions that impact food safety. You may also want to think about inviting or hiring an experienced evaluator to join your team. A benefit of hiring an external evaluator, instead of relying solely on staff members, is the potential to reduce bias and increase objectivity throughout the evaluation process [15,17,21]. Below is a list of potential partners or stakeholders to consider inviting to join your team.



Potential partners:

  • Food safety researchers

  • Health professionals

  • Health educators

  • External evaluators

  • Key informants or representatives from your target community

  • Previous participants of your program, if applicable

  • Teachers

  • Representatives from local grocery stores or supermarkets

  • Representatives from local health organizations or health departments doing work related to food safety

  • Staff from local food banks, community centers, or WIC clinics



There are nine key elements to a successful partnership: 1) provide clarity of purpose; 2) entrust ownership; 3) identify the right people to work with; 4) develop and maintain a level of trust; 5) define roles and working arrangements; 6) communicate openly; 7) provide adequate information using a variety of methods; 8) demonstrate appreciation; and 9) give feedback [6]. Keep these elements in mind, share them with your team members, and incorporate them into your team structure and interactions to foster strong and fruitful partnerships.







Identify the food safety education needs of the target audience

Before planning program activities, you will need to figure out exactly who your target audience is, the barriers and challenges they face in implementing safe food handling practices, and their specific food safety needs so that you can design a needs-based program.



Identify the target audience

First, identify your target audience, or the segment of the community that your program will focus on and serve. The target audience can consist of individuals who are most vulnerable to foodborne illness, individuals who will benefit most from your materials and resources, and/or people who can function as gatekeepers and will utilize your program to benefit not only themselves, but others as well. For example, focusing on parents and increasing their knowledge about food safety may lead them to model and teach safe food practices to their children.



If you already have a behavior in mind, your target audience can be individuals who do not already practice that behavior [5]. For example, if the target behavior is correct handwashing, the target audience can be individuals who wash their hands incorrectly [5]. In this case, since you have already identified the target behavior, you will then need to conduct some research or a community needs assessment to determine what part of the population tends to wash their hands incorrectly. The group that you identify will be your target audience.



Alternatively, you may wish to start by conducting some food safety research in order to identify vulnerable populations that you might want to select as the target audience. For example, you may look at recent research and find that higher socioeconomic groups are associated with a higher incidence of Campylobacter, Salmonella, and E. coli infections, or that low-SES children may be at greater risk of foodborne illness [4,8,29]. This information can help you narrow down the target audience by socioeconomic status, and you can then use tools such as interactive Community Commons maps to identify specific populations or geographic locations in your community to focus on based on chosen characteristics.



You can also select your target audience by identifying a segment of the community or a specific setting to focus on (parents or teenagers in a specific school district, residents in a specific zip code, or residents of a senior center). Once the setting has been selected, conduct a needs assessment to identify what target food safety behaviors you should focus on.



Examples of populations you could target:

  • Older adults

  • Parents

  • Teachers

  • Food service staff or volunteers

  • Assisted living aides

  • Vulnerable populations: children, older adults, pregnant women, individuals with weakened immune systems due to disease or treatment [11]



Conduct a needs assessment

In your needs assessment you will want to find out what factors influence your target audience’s food handling practices. Identifying these factors will help you figure out what strategies you need to adopt to address specific barriers and promote safe food handling behaviors. You don’t need to start from scratch. Do some research to find out what information and resources already exist on the behavior or population you want to focus on. This can help you identify important food safety factors to address in the assessment or gaps that you might want to further investigate.

Below are important food safety factors you could explore in your needs assessment and examples of questions you might want to ask about [adapted 30].



  • Access to resources: Are there any resource barriers that prevent the target audience from adopting certain food safe practices? Examples include: lack of thermometers, soap, or storage containers.



  • Convenience: Are there any convenience barriers, such as time or level of ease, that the target audience thinks prevent them from implementing safe food handling behaviors?



  • Cues to action: Are there any reminders or cues the audience thinks would motivate them to engage in certain food safe practices? Are there any that they have found to be successful in the past?



  • Knowledge:

      • Inaccurate knowledge or beliefs: Does the target audience have any inaccurate beliefs related to food safety and foodborne illness? For example, does the individual overestimate their knowledge about food safety or underestimate their susceptibility to foodborne illness?

      • Specific knowledge: Does the target audience have any specific knowledge gaps related to food safety and food handling behaviors?

      • Knowledge - Why: Does the audience understand why a specific behavior is recommended and how it prevents foodborne illness?

      • Knowledge - When: Does the audience understand exactly when and under what circumstances they should engage in a specific food handling behavior?

      • Knowledge - How (self-efficacy): Does the audience know how to engage in the specific behavior? Does the individual believe they are capable of engaging in the behavior?



  • Public policy: Are there any policies in place that prevent or make it more challenging for the target audience to engage in a recommended behavior? What existing policies influence the audience’s food safety knowledge, attitudes, or behaviors?



  • Sensory appeal: How do the smell, appearance, taste, and texture of a food influence the target audience’s food handling practices?



  • Severity: Is the audience aware of the short- and long-term consequences of foodborne illness?



  • Social norms and culture: What kind of food safety attitudes, beliefs, and behaviors does the target audience’s social circle, including family and friends, possess and engage in?



  • Socio-demographics: What is the demographic and socioeconomic status of the target audience? Examples of demographic factors include gender, ethnicity, age, and education level.



  • Susceptibility: How susceptible does the audience feel they are to foodborne illness resulting from food handling practices at home?



  • Trust of educational messages: How trusting is the target audience of consumer food safety messages? Do they find sources of these messages to be credible?



For more specific recommendations and strategies on how to address the factors listed above refer to the White Paper on Consumer Research and Food Safety Education (DRAFT).







How They Did It

When planning “Is It Done Yet?”, a social marketing campaign to increase the use of food thermometers in order to prevent foodborne illness, a specific audience, upscale suburban parents, was identified and targeted. This target audience was carefully chosen for specific reasons: these parents were more likely to move rapidly through the stages of behavior change to adopt the desired behavior, tended to be influencers and trend setters, and had a propensity to learn and use new information.

Geodemographic research was conducted to get to know this population and learn about their interests, characteristics, and where they access information. Observational research in a kitchen setting was also conducted to learn about how the parents use thermometers and handle foods. This helped program planners identify important barriers that would need to be addressed in the campaign and gather information to help test and develop campaign messages. Messages were pilot tested at a “special event” held in a popular home and cooking store, where participants were able to provide feedback on several message concepts. Following the event, additional focus groups were conducted to select the final campaign slogan: “Is It Done Yet? You can’t tell by looking. Use a thermometer to be sure.”

Before implementing the campaign, baseline data were collected through a mail survey that aimed to identify where the target audience was in the stages of behavior change. Objectives for the campaign were identified as:

- Employ partnerships

- Saturate the Boomburb market with the campaign messages

- Employ free and paid media

- Conduct on-site events at retail stores, schools, festivals, etc.

- Conduct pre- and post-campaign research



United States Department of Agriculture (USDA), Food Safety and Inspection Service (FSIS). (2005). A report of the “Is It Done Yet?” social marketing campaign to promote the use of food thermometers. U.S. Department of Agriculture’s Food Safety and Inspection Service. National Food Safety & Toxicology Center at Michigan State University. Michigan State University Extension. Michigan Department of Agriculture (Funding Provider for Michigan).



Utilize behavior theories

Using a theoretical framework for understanding how and why individuals engage in behaviors can be useful to identify what questions to ask in your needs assessment and to identify effective strategies to use to design and implement your program. Behavior theories can also help you narrow down your evaluation approach and the main questions you want to answer to determine the impact and outcomes of your program.



Below are examples of two behavior theories that have previously been applied to consumer food safety studies:

Transtheoretical Model

The Transtheoretical model, or stages of change theory, provides a framework for the stages an individual can experience before successfully changing behavior [23]. There are five stages: precontemplation, contemplation, preparation, action and maintenance.



  • Precontemplation - when an individual is not at all thinking about changing behavior and may be unaware of the consequences of their actions [14,24]

  • Contemplation - when an individual has not taken any action yet but is seriously considering changing their behavior in the next six months [14,24]

  • Preparation - when an individual is preparing to make a behavior change within one month [14,24]

  • Action - when an individual has been successful and consistent in engaging in the new behavior in a one to six month time period [14,24]

  • Maintenance - when an individual has engaged in behavior change for six or more months [24]








The Transtheoretical or Stages of Change Model [Source]


Understanding what stage of change your target audience is in can be valuable in determining how to strategically communicate with them. For example, the needs of individuals who are in the precontemplation stage and not at all thinking about food safety might be different from the needs of individuals in the contemplation stage. For those in the precontemplation stage, you might need to put greater emphasis on why safe food handling behaviors are important in the first place. Individuals in the contemplation stage may already understand why food safety is important and might require more support in the form of encouragement or helpful tools teaching them how to adopt the new behavior.



Other key constructs of the Transtheoretical Model are decisional balance, or what individuals perceive are the pros and cons of engaging in a specific behavior, and self-efficacy, how confident the individual is in their ability to engage in the behavior [1,24]. Exploring these constructs can help you understand more about your target audience, their beliefs related to consumer food safety, and how to best promote safe food handling practices to the group.



How They Did It

To examine the impact of a food safety media campaign targeting young adults, college students from five geographically diverse universities were recruited to participate in a pre- and post-test and a post-test-only evaluation. Recruitment efforts included Facebook flyers, ads in the school newspaper, and announcements made on student listservs and in class. The objective of the evaluation was to determine the campaign’s impact on food safety self-efficacy, knowledge, and stage of change. Stage of change, a construct from the Transtheoretical Model, was assessed using a questionnaire item asking participants to identify which statement best described their stage of change related to food handling. Response options included:

1. I have no intention of changing the way I prepare food to make it safer to eat in the next 6 months (precontemplation).

2. I am aware that I may need to change the way I prepare food to make it safer to eat and am seriously thinking about changing my food preparation methods in the next 6 months (contemplation).

3. I am aware that I may need to change the way I prepare food to make it safe to eat and am seriously thinking about changing my food preparation methods in the next 30 days (preparation).

4. I have changed the way I prepare food to make it safe to eat, but I have been doing so for less than the past 6 months (action).

5. I have changed the way I prepare food to make it safe to eat, and I have been doing so for more than the past 6 months (maintenance).



Abbot, J.M., Policastro, P., Bruhn, C., Schaffner, D.W., & Byrd-Bredbenner C. (2012). Development and evaluation of a university campus-based food safety media campaign for young adults. Journal of Food Protection, 75(6)
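
For analysis, responses to an item like this are typically coded into stage labels. Below is a minimal sketch, not taken from the study itself, showing one way to tally stage-of-change responses; the numeric codes simply follow the 1-5 ordering of the options above, and the sample responses are made up.

```python
# Minimal sketch (not from the study): tally stage-of-change responses coded 1-5,
# following the ordering of the questionnaire options listed above.
from collections import Counter

STAGES = {
    1: "precontemplation",
    2: "contemplation",
    3: "preparation",
    4: "action",
    5: "maintenance",
}

def stage_distribution(responses):
    """Return the share of respondents reporting each stage of change."""
    counts = Counter(STAGES[r] for r in responses if r in STAGES)
    total = sum(counts.values())
    return {stage: counts.get(stage, 0) / total for stage in STAGES.values()}

# Made-up example: ten pre-test responses
print(stage_distribution([1, 2, 2, 3, 3, 3, 4, 5, 2, 1]))
```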



Theory of Planned Behavior

The Theory of Planned Behavior (TPB) provides another framework for understanding behavior. The TPB is founded on three constructs that influence an individual’s intention to engage in a particular behavior. They are: attitudes towards a specific behavior, subjective norms, and perceived behavioral control, or the perceived ability to make a behavior change or carry out a specific action [2,3].



By focusing on intent, we can understand the motivational factors that influence behaviors [3]. In general, the stronger a person’s intent to engage in a specific behavior, the more likely he/she will engage in the behavior [3]. Intent is influenced by attitudes, which develop as a result of personal beliefs related to a particular behavior [3,12]. A more positive or favorable attitude towards a behavior can result in greater motivation or intent to engage in the behavior [3]. Normative beliefs and subjective norms also influence intent, because individuals are often concerned about whether friends, family, or others within their social circle would approve or disapprove of their engagement in a particular behavior [3].



An individual must also have the necessary resources and opportunity to actually engage in the particular behavior. Behavioral control and self-efficacy can be used as predictors of successful behavior change [3]. A person must not only have the motivation and intention to make a behavior change; he/she must also have the actual capabilities to make the change and perceive that he/she is capable of successfully carrying out the behavior in question.






The Theory of Planned Behavior Framework [3]

Exploring these constructs can help you understand your target audience’s intentions to engage in safe food handling practices, as well as whether or not, and how, they are motivated to do so. Understanding specific factors related to attitudes, norms, and perceived behavior control can provide insight into what strategies may be effective in motivating your target audience’s intentions to adopt safe food handling behaviors.
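
To make the relationship among the constructs concrete, the sketch below scores intention as a weighted combination of attitude, subjective norms, and perceived behavioral control. The weights and scores are placeholder assumptions for illustration only; in practice they would be estimated from survey data rather than set by hand.

```python
# Illustrative only: intention modeled as a weighted combination of the three TPB
# constructs. The weights and 1-7 scores are placeholder assumptions, not values
# from the literature; in a real study they would be estimated from survey data.
def behavioral_intention(attitude, subjective_norm, perceived_control,
                         w_attitude=0.5, w_norm=0.3, w_control=0.2):
    """Each construct is scored on the same scale (here 1-7); higher means stronger intention."""
    return (w_attitude * attitude
            + w_norm * subjective_norm
            + w_control * perceived_control)

# Example: favorable attitude toward a behavior such as thermometer use,
# moderately supportive norms, moderate perceived control
print(behavioral_intention(attitude=6, subjective_norm=4, perceived_control=5))
```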



Identify core activities and messages

Once you have conducted your research and needs assessment to identify your target audience and the food safety behaviors you will focus on, you can start identifying the main elements of your program, such as what kinds of messaging and activities you will create and implement, and where the program will take place.



Identify program activities.

Taking into account your resources and the needs and interests of your target audience, think about what kinds of strategies and activities will be most effective for achieving your education objectives. Consider using more than one education method to maximize the reach and quality of your program [9]. The most common education approaches in consumer food safety today include interactive, hands-on educational experiences and mass media/social marketing campaigns that utilize multiple media channels and target a specific geographic location [30].



Below are ideas of consumer food safety education activities you could implement:

  • Hold a free food safety workshop presentation for parents at a local library or community center. Work with local schools to send out invites and provide onsite childcare to encourage participation.

  • Reach out to primary care physicians and figure out how best to inform their patients about food safety. Provide educational messages and materials, such as brochures, for them to hand out to their patients.

  • Meet with teachers and school administrators and inform them of the importance of consumer food safety. Provide them with educational resources they can share with their students and work with them to incorporate food safety education in school curriculums.

  • Develop a social marketing campaign with core messages that address the specific needs and knowledge gaps of your target audience (identified in the needs assessment). Work with local partners and stakeholders to support you in sharing and disseminating the messages.

  • Hold a food safety festival for parents and children during National Food Safety Month in September. Plan interactive activities that are hands on and educational.

  • Partner with local grocery stores and work with them to provide educational messages in their grocery stores.

  • Encourage work places to send weekly newsletters to employees on food safety topics, or create fun and educational food safety competitions [30].



Selecting a setting for your program can also help you narrow down and choose program activities. In this approach you would: select the setting for your program → conduct a needs assessment for the community within that setting → identify appropriate activities and strategies. Examples of settings for consumer food safety education activities include schools, public restrooms, health fairs, workplaces, and grocery stores.

For more consumer food safety education activity ideas, refer to the White Paper on Consumer Research and Food Safety Education (DRAFT).



Identify communication/delivery strategies and core messages

In your needs assessment, consider including questions that can help you identify how your target audience seeks out health information, what media channels they have access to, and what sources they find to be credible and trustworthy. This information is key to ensure that educational messages are heard and trusted.



Potential delivery channels [30]:

  • Traditional media (print, audio, video)

  • “New” media (internet, social media, video games, computer programs)

  • Mass media/social marketing

  • Classroom-style lessons

  • Interactive and hands-on activities

  • Visual cues or reminders

  • Community events and demonstrations



Crafting Your Message



When crafting food safety messages think about what you want people to know and what the primary purpose of your message is. There are three different types of messages you could use to influence food safety behavior [adapted 5]:



Awareness messages increase awareness about what people should do to prevent foodborne illness, who should be doing it, and where people should engage in safe food handling practices. A key role of this type of message is to increase interest in the topic of food safety and to encourage people to want to find out more about how to prevent foodborne illness.



Instruction messages provide information to increase knowledge and improve skills on how to adopt new safe practices to prevent foodborne illness. These types of messages can also provide encouragement or support if people lack confidence in engaging in the new behavior.



Persuasion messages provide reasons why the audience should engage in a new food handling behavior. This usually involves influencing attitudes and beliefs by increasing knowledge about food safety and foodborne illness.



When crafting a persuasive message to encourage consumers to adopt a new behavior there are three things you should think about: credibility, attractiveness, and understandability. It is important that your target audience finds your messages to be [5]:



  1. Credible, accurate and valid. This is normally done by demonstrating you are a trustworthy and competent source and providing evidence that supports your message.

  2. Attractive, entertaining, interesting, or mentally or emotionally stimulating. Consider providing cues to action messages that are eye catching, quick and easy to read, and include some amount of shock value [18].

  3. Understandable, simple, direct, and with sufficient detail.



Use appeals and incentives [25]

Your message should include persuasive appeals or incentives to motivate the target audience to change their behaviors. These can be positive or negative incentives. Examples of negative incentives include inciting fear of the consequences of foodborne illness or negative social incentives, such as not being seen as a responsible caretaker or cook for the family. Positive appeals demonstrate the positive outcomes of adopting new food safe practices, such as maintaining strong physical health or being a good role model to family and friends. Using a combination of both positive and negative incentives can be a good strategy for influencing different types of individuals, because not all members of your target audience will be motivated by negative incentives, and not all by positive ones.



Consider timing

The timing of when to disseminate messages is also an important factor to consider. Think about national or community events taking place that might pique the target audience’s interest in food safety and help you maximize the effectiveness or reach of your messages. For example, national health observances such as National Food Safety Month or National Nutrition Month might provide a great opportunity to promote your program or to share educational food safety information. The opening of a new grocery store could also be a good opportunity to promote food safety. Your organization or program could partner with store owners and staff to provide food safety information during their grand opening and display food safety messages throughout the store.



Don’t reinvent the wheel

When developing core messages and educational materials you don’t always have to reinvent the wheel. Utilize science and evidence based materials that have already been created for consumer food safety education, such as resources developed by the Partnership for Food Safety Education on the four core food safety practices.



Fight BAC! Four Core Practices [Source]







Other food safety education materials you can use:



If you decide to use existing materials don’t forget to think about your target audience and how you may want to refine messages or materials to best serve their needs and interests. You should also take health literacy and cultural sensitivity into account.



Health literacy

Health literacy refers to a person’s ability to access, understand, and use health information to make health decisions. About 9 out of 10 adults have difficulty using the everyday health information available to them in health care facilities, the media, and other sources in their community [10,20,26,28].



Things to consider:

  • Incorporate a valid and reliable health literacy test, such as the Newest Vital Sign by Pfizer, in your needs assessment to help you understand the health literacy levels of your target audience.

  • Use plain, clear, and easy to understand language when writing educational consumer food safety information.

  • Think of creative formats to share food safety messages such as video, multi-media, and infographics.

  • Make sure messages are accurate, accessible, and actionable [7].

  • Avoid medical jargon, break up dense information with bulleted lists, and leave plenty of white space in your documents.

  • Use helpful tools such as the CDC’s Clear Communication Index, Everyday Words for Public Health Communication, and the Health Literacy Online checklist to create and refine your content.

  • When in doubt, write at a 7th or 8th grade reading level or below [19]. (A sketch for estimating the reading grade level of draft text follows this list.)

  • Conduct a pilot test of materials with representatives from the target audience to ensure that they are easy to understand and use.
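
As referenced in the list above, the sketch below estimates a reading grade level for draft text using the Flesch-Kincaid grade formula. The syllable counter is a rough heuristic, so treat the result as approximate and use a dedicated readability tool for final materials; the sample sentences are illustrative.

```python
# Rough sketch: estimate the reading grade level of draft text with the
# Flesch-Kincaid grade formula. The syllable count is a simple heuristic,
# so results are approximate.
import re

def count_syllables(word):
    # Count groups of consecutive vowels as a rough proxy for syllables.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

sample = ("Wash your hands with soap and warm water before you prepare food. "
          "Use a food thermometer to check that meat is fully cooked.")
print(round(flesch_kincaid_grade(sample), 1))
```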



Cultural sensitivity

When working with diverse or ethnic populations it is important to be culturally sensitive in your approach, interactions, and when creating educational content.



Things to consider:

  • It is not only important for you to translate food safety messages into another language when needed, but also to provide tailored information, examples, and visuals that are culturally relevant and appropriate [22].

  • Take into account cultural beliefs, norms, attitudes, and preferences related to health and food safety.

  • Work closely and collaborate with the target population to ensure you are addressing their specific needs [22].

  • Hire members from the community with cultural backgrounds similar to your target population to help with recruiting, administering interviews, or facilitating focus groups to ensure that the data collection process is culturally sensitive and respectful [16,13,27].

  • Request that staff or colleagues with cultural backgrounds similar to the target audience’s review materials and messages for content that might include cultural assumptions or prejudice [16].



More communication tips

A few more tips for communicating consumer food safety information [30]:

  • Make your messages specific to the target audience and address any food safety misconceptions the target audience may have.

  • Provide powerful visual aids to help consumers visualize contamination. Use storytelling and emotional messaging to convey the toll of foodborne illness.

  • Link temporal cues with food safety. For example, help consumers associate running their dishwasher with sanitizing their sponge.



References

  1. Armitage, C. J., Sheeran, P., Conner, M., & Arden, M. A. (2004). Stages of change or changes of stage? Predicting transitions in transtheoretical model stages in relation to healthy food choice. Journal of Consulting and Clinical Psychology, 72(3), 491–499.

  2. Ajzen, I. (1985). From intentions to actions: a theory of planned behavior. In J. Kuhl & J. Beckman (Eds.), Action-control: From cognition to behavior, 11–39.

  3. Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179–211.

  4. Bemis, K. M., Marcus, R., & Hadler, J. L. (2014). Socioeconomic status and campylobacteriosis, Connecticut, USA, 1999-2009. Emerging Infectious Diseases, 20(7).

  5. Borrusso, P., & Das, S. (2015). An examination of food safety research:
    implications for future strategies: preliminary findings. Partnership for Food Safety Education Fight BAC! Forward Webinar.

  6. Cates, S., Blitstein, J., Hersey, J., Kosa, K., Flicker, L., Morgan, K., & Bell, L. (2014). Addressing the challenges of conducting effective supplemental nutrition assistance program education (SNAP-Ed) evaluations: a step-by-step guide. Prepared by Altarum Institute and RTI International for the U.S. Department of Agriculture, Food and Nutrition Service.

  7. Centers for Disease Control (CDC). (2016). Health literacy: develop materials. Retrieved from: http://www.cdc.gov/healthliteracy/developmaterials/index.html

  8. Consumer Federation of America (CFA). (2013). Child Poverty, Unintentional Injuries, and Foodborne Illness: Are Low-Income Children at Greater Risk?

  9. Gabor, G., Cates, S., Gleason, S., Long, V., Clarke, G., Blitstein, J., Williams, P., Bell, L., Hersey, J., & Ball, M. (2012). SNAP education and evaluation (wave I): final report. Prepared for U.S. Department of Agriculture, Food and Nutrition Service, Office of Research and Analysis. Alexandria, VA.

  10. Kutner, M., Greenberg, E., Jin, Y., & Paulsen, C. (2006). The health literacy of America’s adults: Results from the 2003 National Assessment of Adult Literacy (NCES 2006-483). Washington, DC: U.S. Department of Education, National Center for Education Statistics.

  11. Food and Drug Administration (FDA). (2015). Food Safety: it's especially important for at-risk groups. Retrieved from: http://www.fda.gov/Food/FoodborneIllnessContaminants/PeopleAtRisk/ucm352830.htm

  12. Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.

  13. Fisher, P.A., & Ball, T. J. (2002). The Indian family wellness project: An application of the tribal participatory research model. Prevention Science, 3.

  14. Horwath, C. C. (1999). Applying the transtheoretical model to eating behaviour change: challenges and opportunities. Nutrition Research Reviews, 12, 281–317.

  15. Issel L. (2004). Health Program Planning and Evaluation: A Practical, Systematic Approach for Community Health. London: Jones and Bartlett Publishers.

  16. McDonald, D. A., Kutara, P. B. C., Richmond, L. S., & Betts, S. C. (2004). Culturally respectful evaluation. The Forum for Family and Consumer Issues, 9(3).

  17. McKenzie J. F, & Smeltzer J. L. (2001) Planning, implementing, and evaluating health promotion programs: a primer, 3rd ed. Boston: Allyn and Bacon.

  18. Meysenburg R, Albrecht J., Litchfield, R., & Ritter-Gooder, P. (2013). Food safety knowledge, practices and beliefs of primary food preparers in families with young children. A mixed methods study. Appetite, 73, 121-131.

  19. National Institutes of Health (NIH), U.S National Library of Medicine, MedlinePlus. (2016). How to write easy-to-read health materials. Retrieved from https://www.nlm.nih.gov/medlineplus/etr.html

  20. Nielsen-Bohlman, L., Panzer, A. M., & Kindig, D. A. (Eds.). (2004). Health literacy: A prescription to end confusion. Washington, DC: National Academies Press.

  21. O'Connor-Fleming, M. L., Parker, E. A., Higgins, H. C., & Gould, T. (2006). A framework for evaluating health promotion programs. Health Promotion Journal of Australia, 17(1).

  22. Po, L. G., Bourquin, L. D., Occeña, L. G., & Po, E. C. (2011). Food safety education for ethnic audiences. Food Safety Magazine, 17, 26-31.

  23. Prochaska, J. O. (2008). Decision making in the transtheoretical model of behavior change. Medical Decision Making, 28, 845–849.

  24. Prochaska, J. O., Redding, C. A., & Evers, K. E. (2008). The transtheoretical model and stages of change. In Health behavior and health education: theory, research, and practice (pp. 97-121). San Francisco, CA: Jossey-Bass.

  25. Rice, R. E., & Atkin, C. K. (2001). Public Communication Campaigns. London: Sage publications.

  26. Rudd, R. E., Anderson, J. E., Oppenheimer, S., & Nath, C. (2007). Health literacy: An update of public health and medical literature. In J. P. Comings, B. Garner, & C. Smith. (Eds.), Review of adult learning and literacy, 7, 175–204. Mahwah, NJ: Lawrence Erlbaum Associates.

  27. Stubben, J.D. (2001). Working with and conducting research among American Indian families. American Behavioral Scientist, 44.

  28. U.S. Department of Health and Human Services (HHS), Office of Disease Prevention and Health Promotion. (2010). National Action Plan to Improve Health Literacy. Washington, DC.

  29. Whitney, B. M., Mainero, C., Humes, E., Hurd, S., Niccolair, L., & Hadler, J. L., (2015). Socioeconomic status and foodborne pathogens in Connecticut, USA, 2001-2011. Emerging Infectious Diseases, 21(9).

  30. White Paper on Consumer Research and Food Safety Education. (DRAFT).





Chapter 3: Mapping the Intervention and Evaluation



Steps for a program evaluation

There are six evaluation steps to think about as you plan your intervention and evaluation: 1) engage stakeholders; 2) describe the program; 3) focus the evaluation design; 4) gather credible evidence; 5) justify conclusions; and 6) ensure use and share lessons learned [2]. The next chapters will go through these steps in more detail, but it is helpful to have a framework or overview in mind before you begin planning. Applying the Utility, Feasibility, Accuracy, and Propriety evaluation standards discussed in the first chapter to these steps can help ensure that your evaluation is rigorous and thorough.



Evaluation steps and standards [2]





Create a logic model

Creating a logic model can help you describe your program by identifying program priorities, mapping out the components of your program, and understanding how they are linked. A logic model can also help you identify short, medium, and long term outcomes of the program and figure out what to evaluate and when. It can also help program staff, stakeholders, and everyone else involved with implementation and evaluation stay aware of the overall framework and strategy for the program.



A logic model generally includes inputs (e.g. resources, materials, staff support), outputs (e.g. activities and participation), and outcomes (short, medium, and long term). As you pinpoint program outcomes you should also identify corresponding indicators. An indicator is the factor or characteristic you need to measure to know how well you are achieving your outcome objectives. Identifying outcome indicators early in the planning process can help clarify program priorities and expectations.
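
If it helps to rough out the pieces before filling in the Chapter 7 template, a logic model can also be sketched as a simple structured record, as in the hedged example below. The entries are loosely drawn from the cross-contamination example used later in this chapter; the field names are just one possible layout, not a required format.

```python
# Loose sketch of a logic model as a structured record, using the cross-contamination
# example from this chapter. Field names and entries are illustrative placeholders;
# adapt them to your own program and the Chapter 7 template.
logic_model = {
    "inputs": ["staff and volunteer time", "funding", "educational brochures"],
    "outputs": {
        "activities": ["needs assessment", "focus groups", "educational workshop for parents"],
        "participation": ["parents in school district X"],
    },
    "outcomes": {
        "short_term": "participants gain knowledge and skills on avoiding cross contamination",
        "medium_term": "participants separate raw, cooked, and ready-to-eat foods at home",
        "long_term": "fewer incidents of foodborne illness in the community",
    },
    "indicators": {
        "short_term": "# of participants demonstrating increased knowledge on a follow-up survey",
        "medium_term": "# of participants reporting application of the new skills",
        "long_term": "reduced incidence of foodborne illness due to unsafe food handling",
    },
}

for section, content in logic_model.items():
    print(section, "->", content)
```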



There is no one way to create a logic model. You may decide to make one model for your entire program, or multiple models for each program activity. Make it your own and remember to take into account the needs of your target audience, the program setting, and your resources.

Below is an example of a logic model created for a program aiming to reduce foodborne illness due to cross contamination of foods [adapted 11,12]. You can create your own logic model using the template provided in Chapter 7.







Budget

When planning and deciding your program budget, make sure you take evaluation costs into account. The general recommendation is to use 10% of program funding for evaluation [13]. Your budget can be flexible, and you should review it over time and adjust if needed [15]. Document your spending and keep track of expenses. You should also track and take inventory of program supplies and materials so that you don’t purchase more than needed and can reduce waste and maximize efficiency. You may find it helpful to designate a team member to keep track of spending and funds.
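
A simple running tally, like the hypothetical sketch below, is one way the designated team member could track spending against the budget, including the roughly 10% set aside for evaluation. The category names and dollar amounts are made up for illustration.

```python
# Hypothetical budget tracker: reserves ~10% of program funds for evaluation and
# reports spending against each category. All categories and figures are made up.
TOTAL_FUNDING = 20000.00
EVALUATION_SHARE = 0.10  # general recommendation: about 10% of program funding for evaluation

budget = {
    "evaluation": TOTAL_FUNDING * EVALUATION_SHARE,
    "staff and volunteer support": 9000.00,
    "educational materials": 5000.00,
    "events and outreach": 4000.00,
}

expenses = [
    ("educational materials", 1200.50, "brochure printing"),
    ("evaluation", 300.00, "survey printing and postage"),
]

spent = {category: 0.0 for category in budget}
for category, amount, note in expenses:
    spent[category] += amount

for category, allocated in budget.items():
    print(f"{category}: spent {spent[category]:.2f} of {allocated:.2f}")
```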



Use the template below to help plan your budget [adapted 15]. You can also use it to keep track of your spending.









Levels of assessment

There are generally seven incremental assessment levels in a program evaluation that you should consider depending on the applicability of each level to your evaluation needs [5,10]. They are:



  1. Inputs such as staff and volunteer time, monetary resources, transportation, and program supplies, required to plan, implement, and evaluate the program.

  2. Educational and promotional activities directed at the target audience. These range from activities with direct contact to more indirect methods such as mass media campaigns.

  3. Frequency, duration, continuity, and intensity of participation or people reached.

  4. Positive or negative reactions, interest level, and ratings from participants about the program. These can include feedback on program activities, program topics, educational methods, and facilitators.

  5. Learning or Knowledge, Attitude, Skills, and Aspiration (KASA) changes. These changes can occur as a result of positive reactions to participation in program activities.

  6. New practice, action, or behavior changes that occur when participants apply new Knowledge, Attitude, Skills, and Aspiration (KASA) they learned in the program.

  7. Impact or benefits from the program to social, economic, civil, and environmental conditions.





Each level addresses important elements you can evaluate to understand the impact and outcomes of your program, as well as program strengths and challenges.



The table below displays examples of outcomes and indicators for each of the seven assessment levels. The examples are based on a program aiming to reduce foodborne illness due to cross contamination of foods.





Assessment Level: Inputs

  Goal/Target Outcomes: 400 total hours of staff and volunteer time; 500 copies of educational brochures are printed and distributed

  Indicators: Time sheet is completed by staff/volunteers and documents the assignments produced or worked on. A spreadsheet that documents printing and distribution of brochures and other materials is also completed.

Assessment Level: Activities

  Goal/Target Outcomes: Needs assessment; focus groups to finalize program materials and messaging; educational workshop for parents on cross contamination and safe food handling (video and interactive activity)

  Indicators: Frequency, duration, methods, and content of program activities are documented and reported on.

Assessment Level: Participation

  Goal/Target Outcomes: Target quota for participation is filled (n=100); workshop members consist of the target audience (parents in school district X); participants stay for the entire duration of the workshop

  Indicators: Participant sign-in/sign-out sheet for the workshop is filled out. The sheet documents the time each participant signs in and out and whether or not the participant has a child that is a student in school district X.

Assessment Level: Reactions

  Goal/Target Outcomes: Participants find the workshop content and facilitators to be engaging and find the information to be relevant and important to them

  Indicators: Brief follow-up survey for participant feedback. Participants provide a high rating for factors such as topic area, workshop facilitation, workshop format, and content.

Assessment Level: Learning or Knowledge, Attitude, Skills, and Aspiration (KASA)

  Goal/Target Outcomes: Participants gain knowledge and skills on how to avoid cross contamination

  Indicators: # of participants that demonstrated increased knowledge on how to separate raw, cooked, and ready-to-eat foods during food prep and storage

Assessment Level: Actions or behavior

  Goal/Target Outcomes: Participants apply skills to avoid cross contamination

  Indicators: # of participants that reported application of new skills and separated raw, cooked, and ready-to-eat foods during food prep and storage to avoid cross contamination

Assessment Level: Impact

  Goal/Target Outcomes: Fewer incidents of foodborne illness in the community due to unsafe food handling

  Indicators: Data show reduced incidence of foodborne illness in the community due to unsafe food handling



Create a timeline

Create a timeline to display important implementation and evaluation activities and specify when they need to be completed. This timeline can change over time, but it is important for you and other staff and partners to pre-plan and share a common understanding of when important tasks need to be accomplished.
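
If you prefer to generate the chart rather than draw it by hand, a rough sketch like the one below can print a simple text Gantt chart from a list of tasks and their start and end months. The tasks shown are a few drawn from the example timeline that follows; the layout is only one possible approach.

```python
# Rough sketch: print a simple text Gantt chart for a 12-month project. The tasks
# and months below are drawn from the example timeline in this section.
tasks = [
    ("Form planning/evaluation team and invite partners", 1, 1),
    ("Implement needs assessment", 2, 2),
    ("Implement workshop", 5, 7),
    ("Evaluation report / disseminate findings", 12, 12),
]

months = range(1, 13)
print("Project Activity".ljust(50) + "".join(f"{m:>3}" for m in months))
for name, start, end in tasks:
    row = "".join("  X" if start <= m <= end else "   " for m in months)
    print(name.ljust(50) + row)
```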



Below is an example of a simple timeline for a year-long project and evaluation, laid out like a Gantt chart. It displays the important tasks that need to be accomplished, the duration of each task, when they begin and end, and how some tasks overlap with each other.



Project Activity: Month(s)

  Form a planning/evaluation team and invite partners: Month 1

  Research on topic/target audience: Months 1-2

  Implement needs assessment: Month 2

  Report needs assessment findings: Months 2-3

  Create a logic model, plan out activities, evaluation, and budget: Months 2-3

  Focus groups for feedback on program materials/messages: Months 3-4

  Conduct a pre-test: Months 3-4

  Preliminary analysis of pre-test data: Months 4-5

  Train staff and volunteers: Months 4-5

  Implement workshop: Months 5-7

  Post-test follow up round 1: Month 8

  Post-test follow up round 2: Month 10

  Data analysis: Months 10-11

  Evaluation report/disseminate findings: Month 12



Process evaluation

The first three levels of assessment related to program inputs, activities, and participation are generally referred to as a process evaluation. A process evaluation is usually ongoing and tells you whether program implementation is continuing as planned [9]. It can also help you figure out what activities have the greatest impact given the cost expended [1]. A process evaluation often consists of measuring outputs such as staff and volunteer time, the number of activities, dosage and reach of activities, participation and attrition of direct and indirect contacts, and program fidelity. A process evaluation can also involve finding out:

  • If you are reaching all the participants or members of the target audience [9].

  • If materials and activities are of good quality [9].

  • If all activities and components of the program are being implemented [9].

  • Participant satisfaction with the program [3,4,8,9].



Program fidelity refers to whether or not, and to what extent, the implementation of the program occurs in the manner originally planned. Examples include whether or not staff follow the standards and guidelines they receive in training, whether an activity occurs at the pre-determined location and for the planned duration, and whether or not educators stick to the designated curriculum when teaching consumer food safety education.

Below are examples of how to assess program fidelity:

  • Develop and provide training on the standards for data collection and management. Check in over time to ensure standards are met.

  • Evaluate program materials to make sure they are up-to-date and effective.

  • Evaluate facilitators or educators to make sure they are up-to-date, knowledgeable, enthusiastic, and effective.

  • Hold regular staff trainings and team meetings and track activities (you can use the Activity Tracker Form in Chapter 7) to gather feedback on implementation of program activities, learn about implementation challenges and how to address them, and provide support to staff, volunteers, and educators.



During the planning phase of your program, it may be helpful to create a spreadsheet template to document program inputs and outputs. Consider providing a form (such as the activity tracker form in Chapter 7) to staff to ensure they keep track of important and relevant information. You should also set up a time or schedule for when staff are required to submit completed forms.
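
As a rough illustration of what such a tracking spreadsheet might capture, the sketch below totals reach and staff hours from a handful of activity records. The field names and values are assumptions about how you might structure your own form, not a prescribed format.

```python
# Hypothetical process-evaluation tracker: each record mirrors the kind of
# information an activity tracker form might collect; the summary totals reach
# and staff/volunteer hours. Field names and values are illustrative.
activity_log = [
    {"activity": "parent workshop", "date": "2016-05-03", "staff_hours": 12, "attendees": 34},
    {"activity": "grocery store demonstration", "date": "2016-05-17", "staff_hours": 6, "attendees": 58},
    {"activity": "parent workshop", "date": "2016-06-07", "staff_hours": 12, "attendees": 29},
]

total_hours = sum(record["staff_hours"] for record in activity_log)
total_reach = sum(record["attendees"] for record in activity_log)

print(f"Activities held: {len(activity_log)}")
print(f"Total staff/volunteer hours: {total_hours}")
print(f"Total participants reached: {total_reach}")
```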



Outcome Evaluation

The last few levels of assessment, related to changes in KASA, behaviors, or the environment, are generally part of an outcome evaluation. As discussed in the logic model section, you may want to identify three types of outcomes for your program: short term, medium term, and long term. When determining outcome objectives, make sure they are SMART [1,7]:



  • Specific – Identify exactly what you hope the outcome to be and address the five W’s: who, what, where, when, and why.

  • Measurable – quantify the outcome and the amount of change you aim for the program to produce.

  • Achievable – be realistic in your projections and take into account assets, resources, and limitations.

  • Relevant – make sure your objectives address the needs of the target audience and align with the overarching mission of your program or organization.

  • Time-bound – provide a specific date by which the desired outcome or change will take place.



Examples of SMART outcomes:

  • At least 85% of participants in the Food Safe workshop will learn at least two new safe practices for cooking and serving food at home by August 2016.

  • 80% of food kitchen volunteers will wash their hands before serving food kitchen meals after completing the final day of the handwashing training on October 2nd, 2016.

  • 70% of children enrolled in the Food Safety is Fun! summer camp will be able to identify the four core practices of safe food handling and explain at least one consequence of foodborne illness by the end of summer.



Examples of food safety factors you may wish to measure and address in your outcome objectives include: knowledge, attitudes, beliefs, behaviors, and other influential factors such as visual cues or reminders, resources, convenience, usual habits, perceived benefits, taste preferences, self-efficacy or perceived control, and perceived risk or susceptibility [14].

Purpose of the evaluation

As you think about and plan your evaluation, make sure you can clarify the purpose of the evaluation and what kind of information you want to find out. To do this, think about who will use the evaluation data, what it will be used for, and how other stakeholders and partners will use the evaluation findings [13].



Below are examples of questions you might want to ask to help clarify the purpose of the evaluation:

  • Do we need to provide evaluation data to funders to show that the benefits of the program outweigh the costs?

  • Do we need to understand whether the program strategies used are effective in producing greater knowledge about food safety and positive behavior changes?

  • Do we want to figure out whether the educational format and strategies we used can be a successful model for other educators to incorporate in their programs?



Identifying what overarching questions need to be answered, who the evaluation is for, and how it will be used will help you figure out exactly what you need to evaluate and how.



References

  1. Cates, S., Blitstein, J., Hersey, J., Kosa, K., Flicker, L., Morgan, K., & Bell, L. (2014). Addressing the challenges of conducting effective supplemental nutrition assistance program education (SNAP-Ed) evaluations: a step-by-step guide. Prepared by Altarum Institute and RTI International for the U.S. Department of Agriculture, Food and Nutrition Service.

  2. Community Toolbox. (n.d.). Evaluating programs and initiatives: section 1. A framework for program evaluation: a gateway to tools. Retrieved from: http://ctb.ku.edu/en/table-of-contents/evaluate/evaluation/framework-for-evaluation/main

  3. Hawe P, Degeling D, & Hall J. (2003). Evaluating Health Promotion: A Health Workers Guide. Sydney: MacLennan and Petty.

  4. Issel L. (2004). Health Program Planning and Evaluation: A Practical, Systematic Approach for Community Health. London: Jones and Bartlett Publishers.

  5. Kluchinski, D. (2014). Evaluation behaviors, skills and needs of cooperative extension agricultural and resource management field faculty and staff in New Jersey. Journal of the NACAA, 7(1).

  6. Little, D., & Newman, M. (2003). Food stamp nutrition education within the cooperative extension/ land-grant university system national report – FY 2002. Prepared for United States Department of Agriculture Cooperative State Research, Education, and Extension Service Families, 4-H, and Nutrition Unit. Washington, D.C.

  7. Meyer, P. J. (2003). What would you do if you knew you couldn’t fail? Creating S.M.A.R.T. Goals. Attitude is everything: If you want to succeed above and beyond. Meyer Resource Group, Incorporated.

  8. Nutbeam D. (1998). Evaluating health promotion--progress, problems and solutions. Health Promotion International, 13(1).

  9. O'Connor-Fleming, M. L., Parker, E. A., Higgins, H. C., & Gould, T. (2006). A framework for evaluating health promotion programs. Health Promotion Journal of Australia, 17(1).

  10. Rockwell, K., & Bennett, C. (2004). Targeting outcomes of programs: a hierarchy for targeting outcomes and evaluating their achievement. Faculty Publications: Agricultural Leadership, Education & Communication Department. Paper 48.

  11. United States Department of Agriculture (USDA) and National Institute of Food and Agriculture (NIFA). (n.d.). Community nutrition education (CNE) – logic model detail.

  12. United States Department of Agriculture (USDA) and National Institute of Food and Agriculture (NIFA). (n.d.). Community nutrition education (CNE) – logic model overview.

  13. U.S. Department of Health and Human Services Centers for Disease Control and Prevention (CDC). (2011). Office of the Director, Office of Strategy and Innovation. Introduction to program evaluation for public health programs: A self-study guide. Atlanta, GA: Centers for Disease Control and Prevention.

  14. White Paper on Consumer Research and Food Safety Education. (DRAFT).

  15. W. K. Kellogg Foundation. (2004). Evaluation handbook. MI.















Chapter 4: Selecting an Evaluation Design



Observational and experimental designs

An evaluation design is the structure or framework you decide to use to conduct your evaluation. There are two main types of evaluation designs: observational and experimental.



An observational study is a non-experimental design in which the evaluator does not assign who will participate in the program (the program or intervention group) and who will not (the comparison group).



An experimental study involves intentionally assigning who will participate in the program (the intervention group) and who will not (the control group). This allows the evaluator to manipulate the independent variable (the program or activity) and to control for external factors that might influence the outcome variable. Experimental designs are not always easy to implement in real life, but they are the best option for reducing threats to internal validity.



When to collect data

Deciding when to collect evaluation data is an important part of selecting an evaluation design. To collect data you could:

  • Collect data only once, usually in a post-test. A post-test is when you collect data after the program or intervention.

  • Conduct a pre-post test where you collect data before and after the program takes place.

  • Collect data multiple times throughout the evaluation process.

  • Conduct a retrospective pre-test that is administered at the same time as the post test.



More information about the benefits and limitations of each of these options is provided in the evaluation design table later in this chapter.



Internal validity

When deciding how to design your evaluation and when to collect data, it is important to think about minimizing threats to internal validity that could bias your data and evaluation findings. Internal validity refers to the extent to which you can ensure or demonstrate that external factors other than the independent variable (your program) did not influence the outcome variables. This can affect how accurate your findings and conclusions are, so it is important to protect against threats to internal validity to ensure your evaluation is sound and reliable.



Below are descriptions and examples of different types of threats to internal validity:



  • Maturation occurs when participants have matured or developed mentally or emotionally throughout the evaluation process, and this influences the outcome variable.



  • Example: Kids perform better on a post-test foodborne pathogens quiz than on the pre-test simply because they are older and have become better test-takers, not because of the new food safety curriculum at their school (education program/independent variable).



  • History threats happen when events that take place in the participants’ lives during the program or evaluation period influence the outcome variable.



  • Example: Participants score high on a household audit because most of them recently watched a documentary on the consequences of foodborne illness, not because of the “Food Safety at Home Reminders” magnet they received in the mail.



  • Testing effect occurs when the participants’ post-test data is influenced by their experience of taking the pre-test.



  • Example: Participants’ understanding about the importance of separating raw meat, poultry, seafood, and eggs from other foods in their shopping cart improved in the post-test because of being exposed to that information in the pre-test, not because they read new signage on cross contamination in their local grocery store.



  • Instrumentation takes place when data on the outcome variable is influenced by differences in the way the pre-test and post-test assessments are administered or collected. Pre-test and post-test assessments need to be administered and collected in the same way, using the same instrument, to prevent instrumentation effects.



  • Example: Participants more positively describe their safe food handling practices in the post-test interview because of the way the new interviewer described and interpreted the questions, not because a new training program encouraged them to adopt new safe food handling practices at home.



  • Recall bias takes place when participants do not accurately remember events they have experienced in the past and this influences the accuracy of the data collected.



  • Example: Participants do not remember how long they generally take to wash their hands so they guess a number of seconds that is inaccurate. The finding that most participants wash their hands for at least 20 seconds is due to participants incorrectly recalling how long they wash their hands, not due to them reading new handwashing messages posted all over social media.



  • Social desirability bias occurs when participants provide responses they believe will please the interviewer and make them appear in a more favorable light. This often leads to over-reporting of behaviors or information that participants believe are positive or “good” and under-reporting of behaviors or information participants believe are negative or “bad.”



  • Example: In one on one interviews participants share that foodborne illness is of great concern to them and that they always try their best to practice safe food handling practices at home because they want to impress the interviewer and think that is the “correct” answer, not because it is actually true.



  • Attrition bias happens when a loss of participants in either the program or comparison group influences the evaluation data. Attrition can be due to reasons such as loss to follow-up, death, or moving away.



  • Example: The control group loses about a third of its participants before the post-test survey, and this lowers that group’s overall knowledge score on safe storage of foods. As a result, the difference in scores between the groups is driven largely by attrition, not by the educational video that program participants watched.



  • Selection bias occurs when differences in the data collected from program participants and the comparison group are due to differences between the individuals in each group, not because one group participated in the program and the other did not.



  • Example: Participants in the program group demonstrate greater motivation to adopt safe cooking practices at home because the group is made up of more risk-averse personalities, not because they were exposed to interactive TV ads on safe cooking.



Use of a comparison/control group

One way to protect your evaluation from validity threats is to use a comparison or control group of individuals who do not participate in the program and compare them to participants who do.



The best way to choose a control group and prevent selection bias is to randomly assign who will participate in the program and who will be in the control group and not participate. This is called random assignment, and through this method both groups should, in theory, be alike.
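A minimal sketch of random assignment, assuming you simply have a list of recruited individuals (the names and group sizes are illustrative):

  import random

  recruits = ["participant_" + str(i) for i in range(1, 41)]  # 40 recruited individuals

  random.shuffle(recruits)                       # randomize the order
  midpoint = len(recruits) // 2
  program_group = recruits[:midpoint]            # receives the food safety program
  control_group = recruits[midpoint:]            # does not receive the program

  print(len(program_group), len(control_group))  # 20 and 20

In practice the same thing can be done with a random number table or a coin flip; what matters is that neither staff nor participants choose who ends up in which group.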



If random assignment is not possible, you can collect demographic information about individuals in each group so that when analyzing data you can adjust for differences between the groups [2]. Remember, the longer you wait to collect data after the program or intervention, the more likely it is that both groups will regress toward the mean and show fewer differences with regard to the outcome variable [2]. When possible, it is best to collect your post-test data not long after the program is implemented.



How They Did It

To evaluate the effectiveness of web-based and print materials developed to improve food safety practices and reduce the risk of foodborne illness among older adults, a randomized controlled design was used with a sample of 566 participants. 100 participants were in the website intervention group, 100 in the print materials group, and 100 in the control group. Participants took a web-based survey that was emailed to them before the intervention and about 2 months following the intervention.

To measure food safety behavior, participants were asked to report their behaviors when they last prepared specific types of food. To assess perception of risk of foodborne illness, participants were asked to rate their agreement with the following statement on a 4-point Likert scale: “Because I am 60 years or older, I am at an increased risk of getting food poisoning or foodborne illness.” Participants were also asked how satisfied they were with the educational materials and how informative and useful they found them. Overall, findings showed no significant differences in changes between the intervention and control groups, indicating that the materials did not measurably impact food safety behavior.

Kosa, K. M., Cates, S.C., Godwin, S.L., Ball M., & Harrison R. E. (2011) Effectiveness of educational interventions to improve food safety practices among older adults. Journal of Nutrition in Gerontology and Geriatrics, 30(4)



Now that you have learned about threats to internal validity, you can weigh the benefits and limitations of different evaluation designs given your resources and evaluation needs. Below is a table of common evaluation designs and the benefits and limitations of each option [adapted from 2]. In general, the design options increase in rigor as you go down the table. It is important to note that some of these designs are frequently used to evaluate health programs but are generally weak in terms of being able to tell you whether change in the outcome variable can be attributed to the program.

Evaluation Design

Description/Example

Benefits

Limitations

One group post-test only

-Collect data after implementing the program.

Example: You implement a food safety workshop and then hand out a survey before participants leave.

-Good to use if a pre-test might bias the data collected/findings or when unable to collect pretest/baseline data.

-Generally inexpensive.

-Easy to understand and for staff with little training to implement.

-May be the only design option if you do not plan ahead and decide to evaluate once the program has already begun.

-Weak design because you do not have baseline data to be able to determine change.

-Not very useful in understanding the actual effect of the program.

-Examples of potential validity threats: history and maturation because you only have information regarding a single point in time and don’t know if any other events or internal maturation took place to influence the outcome.

One group, retrospective pre- and post-test

-Collect both pre and post-test data after implementing the program. The pre-test will involve participants thinking back to their experience before the program.

Example: You implement a food safety workshop and then hand out a two part (pre and post) survey before participants leave.

-Good to use when you are unable to collect traditional pretest/baseline data.

-Generally inexpensive.

-May be the only design option if you do not plan ahead and decide to evaluate once the program has already begun.

-May demonstrate more accurately how much participants feel they have benefited from the program [1].

-Without a comparison group this design is not very useful in understanding whether a change in the outcome variable is actually due to the program.

-Examples of potential validity threats: Recall bias and social desirability. Social desirability can be more influential in a retrospective pre-test than a traditional pre-test [1].

Comparison group post-test only

-You have a comparison group of individuals that do not participate in the program. Following the program you collect data from the program participants and the comparison group.

Example: You implement a food safety workshop and then hand out a survey to participants before they leave. You also give the same survey to individuals in the comparison group who have not participated in the workshop. You later compare survey results of program participants and comparison group.

-Good to use if a pre-test might bias the data collected or when unable to collect pretest/baseline data.

-May be the only design option if you do not plan ahead and decide to evaluate once the program has already begun.

-Statistical analysis to compare both groups is fairly simple.

-Comparison group must be available.

-Target audience must be large enough to have both a program group and a comparison group.

-Weak design because you do not have baseline data to be able to determine change and whether differences between groups are actually due to the program.

-Example of potential validity threats: selection bias.

One group pre-post test

-You collect data before and after the program or intervention takes place. Usually data is linked for each single individual to assess amount of change.

Example: You implement a food safety workshop and survey participants before and after they participate in the workshop. A survey knowledge score is calculated for each individual participant to find out if scores improved after participating in the workshop.

-Able to identify change before and after the program.

-Generally easy to understand and calculate.

-Demonstrates greater evaluation rigor and validity when seeking funders or sharing outcome findings with partners than relying on a single post-test or retrospective pre- and post-test [1].

-Must be able to collect pre-test/baseline data.

-Without a comparison group this design is not very useful in understanding whether a change in the outcome variable is actually due to the program.

-Examples of potential validity threats: testing effect and instrumentation.

One group, repeated measures or time series

-You collect data more than once before program implementation and at least two more times following the intervention, over time. The optimal number of times to collect data is five times before and after the program, but this will vary depending on your sample and evaluation needs [3].

Example: You implement a food safety workshop and survey participants a few times before and a few times after they participate in the workshop, over the following months. A survey knowledge score is calculated for each time participants took the survey to find out how scores improved after participating in the workshop and how much information was retained over time.

-Able to identify change before and after the program.

-By tracking change repeatedly over time you have a greater opportunity to observe external factors that might influence findings and address threats to internal validity.

-Generally beneficial for large aggregates like schools or populations.

-Must be able to collect pre-test/baseline data.

-Examples of potential validity threats: history (major threat), maturation, and instrumentation.




Two group pre-post test

-You collect data from program participants and a comparison group before and after the program takes place. Usually data is linked for each single individual to assess amount of change.

Example: You implement a food safety workshop and survey workshop participants and the comparison group before and after the workshop takes place. A survey knowledge score is calculated for each individual participant to find out if or how scores changed and how scores of program participants’ and the comparison group differ.

-Able to identify change before and after the program/intervention.

-Statistical analysis to compare both groups is fairly simple.

-Demonstrates greater evaluation rigor and validity when seeking funders or sharing outcome findings with partners than relying on a single post-test or retrospective pre- and post-test [1].

-Comparison group must be available.

-Target audience must be large enough to have both a program group and a comparison group.

-Must be able to collect pre-test/baseline data.

Two or more group time series

-You collect data from program participants and at least one comparison group more than once before the program and at least two more times after the program, over time. The optimal number of times to collect data is five times before and after the program, but again, this will vary depending on your sample and evaluation needs [3].

Example: You implement a food safety workshop and survey workshop participants and two comparison groups a few times before and a few times after workshop implementation, over the following months. A survey knowledge score is calculated for each time an individual took the survey to find out how scores change over time and how the scores of program participants and the comparison groups differ.

-Able to identify change before and after program/intervention.

-By tracking change repeatedly over time you have a greater opportunity to observe external factors that might influence findings and address threats to internal validity.

-Generally beneficial for large aggregates like schools or populations.

-Comparison group must be available.

-Must be able to collect pre-test/baseline data.

-Target audience must be large enough to have both a program group and at least one comparison group.

-Can require more complex statistical analysis to interpret data.

-Examples of potential validity threats: history (major threat) and selection.

Two group pre-test/post-test, with random assignment

-You randomly assign who will participate in the workshop and who will not (comparison group). You then collect data from program participants and comparison group before and after the program takes place. Usually data is linked for each single individual to assess amount of change.

Example: You randomly choose who will participate in the food safety workshop and who will be in the comparison group. You provide a survey to workshop participants and the comparison group before and after the workshop takes place. A survey knowledge score is calculated for each individual participant to find out if or how scores changed and how scores of program participants’ and the comparison group differ.

-Able to identify change before and after program/intervention.

-Best option for outcome evaluation and for preventing threats to internal validity.

- Demonstrates greater evaluation rigor and validity when seeking funders or sharing outcome findings with partners [1].

-Statistical analysis to compare both groups is fairly simple.

-Greater chance that the comparison group and program participants are equivalent, and reduced risk that differences between the groups might bias findings.


-Random assignment must be possible.

-Comparison group must be available.

-Target audience must be large enough to have both a program group and a control group.

-Must be able to collect pre-test/baseline data.

-When using random assignment you must consider ethical concerns about not including high risk individuals who are more vulnerable to foodborne illness in the program if they wish to participate. A possible way to address this problem is to provide the comparison group with the opportunity to participate in the program once the post-test data is collected or when the evaluation is complete.

-Possible challenge: differences in attrition.



When selecting your evaluation design, consider what is feasible and realistic given resources, time or staff support limitations, and access to your target audience. Your evaluation questions and the outcome variables or indicators you wish to observe should also influence your decision. It is also important to think about the best timing for measuring your indicators, because measuring too early or too late could lead to inaccurate data and conclusions about how effective your program is. A brief sketch of the simple between-group comparison that several of the designs above rely on follows.
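As one hedged illustration of that kind of comparison (the scores below are made up, and the Python scipy package is assumed to be available), an independent two-sample t-test comparing post-test knowledge scores from a program group and a comparison group might look like:

  from scipy import stats

  # Hypothetical post-test knowledge scores (0-10) for each group
  program_scores = [8, 9, 7, 8, 9, 6, 8, 7, 9, 8]
  comparison_scores = [6, 7, 5, 6, 7, 6, 5, 7, 6, 6]

  # Welch's t-test (does not assume equal variances between the groups)
  t_stat, p_value = stats.ttest_ind(program_scores, comparison_scores, equal_var=False)
  print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a small p-value suggests the groups truly differ

A statistician or a basic statistics package can run the same comparison from a spreadsheet; the point is that only two columns of scores are needed.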





How They Did It

An exhibit about food safety and thermometer use was evaluated using a retrospective pre- and post-test evaluation of food safety knowledge and behavior. Data was collected from 75 participants at three different events: a community hospital health fair, a county fair, and a county health fair for employees. Questions asked participants to rate their agreement with statements before and after seeing the exhibit. The evaluation demonstrated an increase in knowledge about thermometer use and in planned behavior changes.

McCurdy, S. M., Johnson, S., Hampton, C., Peutz, J., Sant, L, and Wittman, G. (2010). Ready-to-go exhibits expand consumer food safety knowledge and action. Journal of Extension, 48(5).

Sampling

How you collect your sample, the level of participation in the evaluation, and your sample size can all influence the external validity of your findings. External validity refers to how accurately your evaluation findings can be generalized to the general population or target audience. It is generally better to have a larger sample size, and it is important to make your sample as representative of (similar to) the general population or target audience as possible.



To figure out how to collect your sample it can be helpful to start by identifying who your theoretical population is, who within the theoretical population you have access to, and then how you will create a sampling frame from which you will select your final sample.



Probability and non-probability sampling

There are two different types of sampling methods you could use to select your sample: probability and non-probability. A probability sample, one in which each member of the sampling frame has a known chance of being selected, is usually the best option for a rigorous evaluation and for assessing a causal relationship between the program and the outcome variables. However, probability samples are usually time consuming and expensive. In addition, when working with a small, very specific, or hard to reach target audience, a probability sample may not be possible. In this case, you would choose a non-probability sample, in which members do not have a known chance of being selected. Non-probability samples are generally easier to select but are not as representative of the population, which can make research findings less generalizable.



Below are descriptions of the different types of probability and non-probability sampling methods you could use to collect your sample (a short code sketch illustrating a few of the probability methods follows the probability sampling list):



Probability sampling:

  • Simple random sampling – Each person in the sampling frame has an equal chance of being chosen. For example, you randomly select names from a hat or use a random number table. This method is easy to use when the sampling frame is small, homogeneous, and easily accessible.



  • Systematic random sampling – You create a list of individuals in your sampling frame and then select every kth individual throughout the list. For example, you select every fourth individual on the list.



  • Stratified random sampling – You divide the sample into different subgroups based on factors of interest (factors that you think might influence food handling practices, such as age, gender, or socioeconomic status). Each group will be homogeneous with regard to the factor or characteristic you choose, and you then randomly select individuals from each of the groups. This can help you ensure that each characteristic is represented in your sample.



  • Cluster area sampling – You divide the accessible population into different subgroups or clusters (e.g., based on geography or different schools), then randomly select clusters and include all of their members in your sample.



  • Multi-stage sampling – This is similar to cluster area sampling but with additional stages. You randomly select subgroups or clusters, then within the selected clusters randomly select another round of clusters from which your sample will be chosen. You can add as many stages as needed. This option may be more useful if you have a larger sampling frame. For example, you randomly select counties within a state, then villages within those counties, then neighborhoods, and then households.
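A minimal sketch of the first three probability methods, assuming a simple list of names serves as the sampling frame (the names, sample sizes, and strata labels are illustrative only):

  import random

  frame = [f"person_{i}" for i in range(1, 201)]   # sampling frame of 200 people

  # Simple random sampling: every person has an equal chance of selection
  simple_sample = random.sample(frame, 30)

  # Systematic random sampling: a random starting point, then every kth person
  k = len(frame) // 30
  start = random.randrange(k)
  systematic_sample = frame[start::k][:30]

  # Stratified random sampling: sample separately within each subgroup of interest
  strata = {"18-39": frame[:80], "40-64": frame[80:150], "65+": frame[150:]}
  stratified_sample = [p for group in strata.values() for p in random.sample(group, 10)]

Cluster and multi-stage sampling follow the same pattern, except that whole clusters (for example, schools or neighborhoods) are drawn with random.sample before individuals are selected.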



Non-probability sampling:

  • Convenience – You select participants who are the easiest, or most convenient, to choose.



  • Purposive – You purposively sample participants who are easy to access, with target characteristics in mind, to address your evaluation needs.



    • Modal instance: you sample individuals who you think are “typical” of the target audience. It can sometimes be challenging to define what characteristics make up a typical or average case.



    • Expert: You recruit a team or panel of experts on the topic of interest, such as food safety researchers with expertise in handwashing, to be included in your sample.



    • Quota: You pre-determine the main characteristics of the target audience and then proportionally or non-proportionally (for example, by setting a minimum number for each group) select individuals with those characteristics until the sample quotas are filled.



    • Heterogeneity: The main aim in this approach is to ensure your sample is diverse. You select individuals that represent different views or characteristics (factors that you think might influence food handling practices) without considering whether representation within the sample is proportional to the population.



    • Snowball: You find a few individuals that fulfill your pre-determined criteria to participate in the sample and ask them to suggest potential sample participants who are then contacted and recruited. This approach can be beneficial when the target audience is hard to access or reach.



    • Respondent driven: You find a few individuals that fulfill your pre-determined criteria to participate in the sample and then ask these members to recruit additional sample participants. This can be beneficial when the target audience is hard to access or reach.



Sample Size

One way to determine your sample size is to find the minimum size needed to detect change with a certain degree of confidence by conducting a statistical power analysis. You can make the calculation using programs such as G*Power or SAS, or work with a statistician to conduct the power analysis. If you do not have sufficient funds to pay a statistician, consider still reaching out, explaining your situation and the purpose of your evaluation, and asking if they might be willing to volunteer their time for this task. You can reach out to statistics professors, teachers, or even graduate students in the area. You can also ask program stakeholders or partners to find out if they or someone they know have any expertise or experience conducting a power analysis.
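As a hedged sketch of what such a calculation can look like in code (assuming the Python statsmodels package and a medium expected effect size, which you would replace with an estimate appropriate to your own program):

  from statsmodels.stats.power import TTestIndPower

  analysis = TTestIndPower()
  # effect_size: expected standardized difference between groups (0.5 is a conventional "medium" effect)
  # alpha: 5% chance of a false positive; power: 80% chance of detecting a true effect
  n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
  print(round(n_per_group))  # roughly 64 participants per group under these assumptions

Smaller expected effects or a desire for higher power will push the required sample size up, which is why the effect size estimate matters.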



If you are not able to conduct a power analysis you can also use the table below to determine your sample size based on the size of the population or target audience to which you are generalizing. The table is based on a 5% error rate.



Sample Size [Source]
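If a published table is not at hand, one commonly used approximation for a 5% margin of error is Yamane's simplified formula, n = N / (1 + N * e^2); it is offered here as an illustrative substitute and is not necessarily the formula behind the table above:

  def yamane_sample_size(population_size: int, margin_of_error: float = 0.05) -> int:
      """Yamane's simplified formula: n = N / (1 + N * e^2)."""
      n = population_size / (1 + population_size * margin_of_error ** 2)
      return round(n)

  # For example, a target audience of 1,000 people at a 5% error rate:
  print(yamane_sample_size(1000))  # about 286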



Nonresponse

Response levels can also influence the external validity of your findings. One way to address nonresponse issues is to select a larger sample size than the minimum required. This can help you make up for nonresponse issues such as death, loss to follow-up, or dropouts, which all contribute to attrition. In addition, it is important to remember that not all members of the target audience will be eligible to participate in the program or evaluation, and not all of those eligible will be willing to participate. Keep this in mind as you recruit individuals for your sample and aim for a larger sample size to help alleviate these challenges.
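A minimal arithmetic sketch of oversampling for expected nonresponse (the minimum sample size and response rate below are illustrative):

  minimum_n = 286                 # from your power analysis or a sample size table
  expected_response_rate = 0.70   # assume roughly 70% of those invited will complete

  recruit_n = round(minimum_n / expected_response_rate)
  print(recruit_n)  # invite about 409 people to end up near 286 completed responses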



Steps for collecting the sample:

  1. Determine your theoretical population or your target audience.

  2. Figure out who from your theoretical population is accessible and develop a sampling frame.

  3. Select a sampling method based on your resources, limitations, target audience, and evaluation needs.

  4. Determine your target sample size.

  5. Begin recruitment and implement the sampling method.

  6. Collect data.

  7. Consider threats to internal or external validity or biases.

  8. Generalize findings to theoretical population/target audience.



How They Did It

To evaluate the impact of a food safety curriculum, Hands on: Real-World Lessons for Middle School Classrooms, researchers wanted to find out (1) to what extent the curriculum impacted students’ food safety self-efficacy and (2) to what extent a relationship existed between changes in self-efficacy and changes in food safety behavior. When selecting participants, special attention was paid to ensuring the sample was diverse in terms of race and ethnicity, in order to promote external validity. A total sample of 1,743 students and 48 teachers participated in the evaluation. Participation was voluntary, dependent on parental informed consent, and did not include any incentives. A previously validated assessment was used in a pre-test administered a week prior to implementing the program, again a week after program implementation, and in a follow-up test 6-8 weeks after the program.

Beavers, A. S., Murphy, L., & Richards, J. K. (2015). Investigating change in adolescent self-efficacy of food safety through educational interventions. Journal of Food Science Education, 14(2).



References

  1. Betz, D. L., & Hill, L. G. (2006). Real world evaluation. Journal of Extension, 44(2).

  2. Issel L. (2004). Health Program Planning and Evaluation: A Practical, Systematic Approach for Community Health. London: Jones and Bartlett Publishers.

  3. Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley.











Chapter 5: Data Collection



How to collect data

In addition to planning and selecting your evaluation design, you also need to figure out how to collect evaluation data. There are two main types of data: quantitative and qualitative.



Quantitative data is quantifiable and numerical, and it is particularly useful when trying to establish causality between an independent variable (the program or activity) and a dependent variable (food safety knowledge, attitudes, behavior, etc.), or if you want to obtain some sort of score or rating on a topic, such as a knowledge score. It is best to use quantitative methods when the topic is well researched and when you have a valid and reliable data collection tool. This method is usually considered to be more objective and less biased than qualitative methods. In addition, it can be easier to demonstrate the validity and reliability of quantitative data than of qualitative data.



Qualitative data is generally non-numerical and more exploratory in nature. It is used to identify important themes related to a particular topic and to gather detailed insight into more complex issues [11]. Qualitative data collection methods can provide valuable information about personal thoughts, experiences, feelings, and interpretations that can often be overlooked when using quantitative methods. It is a particularly good approach to use when little is known about the research topic.



Below are different data collection methods you could use for your evaluation. Consider the benefits and limitations of each option in relation to your resources, the purpose of your evaluation, and your target audience.



Collection Method

Description

Benefits

Limitations

Survey/Questionnaire

Self-reported.

-Usually a series of questions that can be provided online or on paper to collect numerical data.

-It is important to use a survey that has been pre-tested and proven to be reliable and valid.

-Good for collecting quantitative data that can be statistically analyzed.

-Good for assessing food safety knowledge.

-Inexpensive and requires less time, staff training, and support.

-Can be a good method to use to establish causality between an independent variable (program or activity) and dependent variable (food safety knowledge, attitudes, behavior etc.).


-May overlook deeper personal meanings, attitudes, or perceptions related to food handling and why people think or behave the way they do.






Focus groups

Self-reported/descriptive.

-Usually consists of a group of 8 to 12 individuals that come together to answer questions and have a discussion on pre-determined topics as a collective. A facilitator is usually present to facilitate dialogue and guide the discussion. Descriptive data is collected and later analyzed.

-It can be helpful to keep some questions open ended in order to gather information that is relevant and important but that may not have been considered when focus group questions were developed.

-Focus group discussions are often recorded and later transcribed. It can also be beneficial for the facilitator to take notes on relevant nonverbal expressions to supplement recording transcriptions.

-Most common method for collecting qualitative food safety information [25].

-Good for collecting qualitative data.

-Can provide rich and valuable information about thoughts, attitudes, perceptions, experiences, values, personal interpretations, and meanings that can often be overlooked when using quantitative methods.

-Useful to explore topics on which little is known.

-Interactions between group members may provide valuable insight on the topic which can be overlooked when focusing only on individuals.


-Can be a less expensive and less time consuming way to collect qualitative data.

-Need a skillful facilitator to encourage a productive discussion.

-Can sometimes face scheduling difficulties with finding a time suitable for all participants to meet.

-Analysis can be time consuming.

-May be difficult to find participants that are willing to openly share their personal thoughts and feelings in a group setting.

-One or a few individuals may dominate the discussion so it is important for the facilitator to encourage equal participation.

One on one interview

Self-reported.

-Interviewer usually meets one on one with the interviewee either in person or via phone to ask pre-determined questions. Usually takes longer than when providing a written questionnaire.

-Can be good to develop an interview training and script for all interviewers for consistency.

-May be helpful to keep some questions open ended in order to gather information that may be relevant and important but that may not have been considered when the interview questions were developed.

-Structured interviews can be good for collecting quantitative data with additional insight into why participants respond the way they do.

-Good for collecting qualitative data.

-Useful method to explore topics on which little is known.

-Can provide rich and valuable information about thoughts, attitudes, perceptions, experiences, values, personal interpretations, and meanings that can often be overlooked when using quantitative methods.

-May be difficult to find participants that are willing to openly share when one on one.

-Can require a skillful interviewer to encourage a productive discussion or response to questions, particularly with more sensitive topics.


Household audit

Observed behavior.

-An audit tool is generally used to visually examine and score households based on factors related to safe food handling practices. For example an audit can examine resources needed for proper cleaning, cleanliness of the kitchen, or storage of foods [25].

-It is important to use an audit tool that has been pre-tested and proven to be reliable and valid.

-Observing food safety behaviors might provide more accurate and objective information than when relying on self-reported information [25].

-Can be a good complement to other forms of data collected, such as self-reported data.

-Participants may not be willing or feel comfortable allowing auditors to come into their homes.

-Participants may prep their home before the audit, making the household environment less realistic (social desirability).

-An audit score may not provide the entire picture for why participants scored the way they did.

-Can be subject to rater bias if some auditors score more leniently or harshly than others.

Observations in model or consumer home or kitchen

Observed behavior.

-Participants are observed practicing a behavior or carrying out a specified task in a model or consumer home or kitchen.

-Observing food handling behaviors might provide more accurate and objective information than relying on self-reported information [25].

-Can be a good complement to other forms of data collected, such as self-reported data.

-Can be difficult to implement, time consuming, and expensive.

-May be difficult to find participants willing to be observed when demonstrating food safety behaviors.

Collect microbial data in homes or kitchens

-Microbial samples are collected in the participant’s homes or kitchens and then taken to a lab and analyzed.

-Provides quantifiable data and information that cannot be collected via other quantitative and qualitative methods.

-Can be a more objective method of collecting food safety information.

-Can be a good complement to other forms of data collected, such as self-reported data.

-Can provide insight on the presence and persistence of pathogens in domestic kitchens that can be valuable in developing recommendations for safe food handling practices at home [25].

-Does not directly provide information on food safety behavior or KASA.

-Participants may not be willing or feel comfortable allowing data collectors to come into their homes.



Mixed methods

Consider a mixed methods approach, using a combination of data collection methods to evaluate your program and gather insight into food safety knowledge, attitudes, and behavior. For example, collecting data via a questionnaire to assess food safety behavior and digging deeper into the topic via focus groups can provide a more well-rounded picture of outcome changes than relying solely on the questionnaire. Using qualitative methods to supplement quantitative methods can provide more background information on the topic and a greater understanding of why individuals responded the way they did quantitatively. Using mixed methods can also help you identify inconsistencies or inaccuracies when you have to rely on self-reported data.

Self-reported data

When collecting self-reported data on food safety behaviors it is important to minimize potential threats to validity, such as recall and social desirability bias, to ensure that the data collected is reliable, consistent, and true. If possible, consider also using an observational method to collect the same behavior data in order to compare and analyze potential discrepancies between self-reported and observed behaviors.

Below are examples of how studies have used mixed methods to gather food safety data:

  • To identify sanitation and food handling of ‘Chicken and Salad’ in Puerto Rican households, food and kitchen surface microbial samples were collected at different stages of food preparation. In addition, household observations were collected to observe storing, thawing, handling, and cooking practices. Observations and microbiological results were then compared to understand the impact of different food handling practices and risk of microbial contamination [13].

  • To learn about how consumers prepare and cook ground beef for hamburgers, video footage of 199 volunteers in Northern California was analyzed for compliance with recommendations from the U.S. Department of Agriculture and the FDA’s Food Code 2009. Following the filming of each session, questionnaires about food safety attitudes and knowledge were provided to each volunteer. When describing findings from the video observations, researchers provided further insight into why participants engaged in specific practices in the videos by providing personal statements [21].

  • To explore home food safety knowledge, practices, and risk perception among Mexican-Americans, ten focus groups with 78 participants were conducted in New York and Texas. Focus group findings were then used to inform a probability based survey that was administered to 468 Mexican-Americans who cook for their families. Findings from the focus groups and online surveys consistently identified several food safety concerns such as low use of thermometers, knowledge gaps about cross-contamination, and unsafe thawing practices [20].



Data collection tools

When thinking about what data collection tool to use it is important to do some research to find out if any tools have already been developed, validated, and used to address a topic similar to yours, with a similar population. Instead of starting from scratch consider using existing tools or adapting them to suit your needs.

Below are examples of existing validated and reliable food safety tools:

  • Audit tool for domestic kitchens [3,6]

  • Food safety psychosocial questionnaire for young adults [7]

  • Stages of change questions to assess consumer readiness to use a food thermometer when cooking small cuts of meat [22]

  • Food safety knowledge and attitude scales for consumer food safety education [19]

  • Checklist for observing food safety behavior for sample young adults [5]

  • Consumer food behavior questionnaire [18]





Instrument/survey development

If you are developing your own instrument and questions for the evaluation, there are many things you should keep in mind, such as including demographic questions, the length of your survey, health literacy and cultural sensitivity, and more. Below is a list of helpful tips to guide you throughout the instrument development process:



  • Include demographic questions in your survey. Responses can later be analyzed to find out how different characteristics influence food safety knowledge, attitudes, behaviors, etc. Gathering demographic information can also help you figure out who your program works best for. Alternatively, you might find out that your program doesn’t work well for a group of individuals and that you need to adjust the program slightly for that group. You could also find out that you need to provide different versions of your activities or materials for different groups. For example, you may find that some messages are resonating well with females but not as well with males and that you need to review and adjust materials given out to male participants. Collecting demographic information can also be helpful when analyzing data because you can adjust for certain characteristics to reduce threats to validity.



  • Demographic information you may want to ask for includes: age, gender, ethnicity, education level, number of individuals or children in the household, and income. Some questions might be more sensitive in nature, so it may be helpful to restate that responses are confidential and anonymous (if that is the case) before asking sensitive questions such as income. It may also be beneficial to leave sensitive demographic questions toward the end of the assessment to allow participants to warm up and feel more comfortable before having to respond to more personal questions.



  • When using quantitative data collection methods, make sure you use measures that are sensitive to change and can provide you with sufficient and useful information on the topic. For example, a 5-point Likert scale can usually provide more valuable information on a topic than dichotomous response options such as yes or no.

  • Provide the option “I don’t know” so that participants are not forced to pick a response that might not be true for them.



Sample questions using a Likert scale and an “I don’t know” response option:



  • Consider the length of the survey and the time it takes a person to complete it. Keep interviewer-administered surveys to 15-20 minutes and self-administered surveys to 5-10 minutes [9].

    • If you are not able to use a previously validated survey or questionnaire, consider using previously validated survey items and scales when possible.



  • When assessing food safety knowledge use learning objectives to develop questions. For example, if the learning objective of a workshop or lesson curriculum is “participants will understand severity and susceptibility of foodborne illness” then start there to identify corresponding survey questions such as “what are some of the consequences of foodborne illness?” or “what populations are most vulnerable to foodborne illness?”

  • Remember health literacy and cultural sensitivity. Make sure questions are clear, direct, and easy to understand, and that you take into account reading levels of participants. When thinking about cultural sensitivity consider data collection methods that are culturally appropriate, applicable to the target audience, and sensitive to cultural norms. For example, some populations might be more receptive to female interviewers or feel more comfortable being interviewed by individuals from their own community or that speak their native language. Take time to learn and understand what interview strategies will work best for your target audience.

  • Pilot test your survey instrument and adjust and refine your tool based on feedback and reactions. Consider using cognitive interviews and the think aloud method when pilot testing. Cognitive interviewing is a technique that allows individuals to verbalize their feelings and thought processes [2]. Find out whether or not and how participants comprehend the questions, are able to retrieve information for their answer, judge whether or not their information is an accurate or relevant answer, and respond to the question [15,10,12,17,26]. Incorporate open ended probes such as “what thoughts are going through your mind right now?” or “what could we do to improve this question?” to gather feedback and reactions to the questions.

  • Test for reliability to ensure your tool will provide consistent responses and results when used repeatedly. For example, assess test-retest reliability by administering the same test to the same individuals at two points in time and checking that the results are consistent (a brief correlation sketch follows this list).

  • Test for validity to ensure that your tool actually measures what it is supposed to measure. There are four types of validity usually assessed when creating an instrument [14]:

Face validity: the degree to which an instrument appears to measure the concepts or constructs you wish to measure. This can be the weakest form of validity because it is subjective and not evidence based. You could assess face validity with a group of stakeholders or representatives of the target audience by asking if the group finds that the questions are relevant and address the constructs or topics you are interested in.

Content validity: the degree to which the measures in the tool contain a reasonable sample of the constructs of a concept. This can be assessed by having a panel of qualified judges identify the behaviors or attributes of a concept or construct and rate how representative the measure is of the larger concept, such as by using a 1-9 scale with 1 being extremely inappropriate and 9 being extremely appropriate.

Criterion validity: the degree to which a measure can accurately predict or correspond to an outcome (criterion) of interest. It can be assessed by comparing the measure with other established measures to find out how strongly they correlate. A measure has criterion validity if a high correlation is found with one or more criterion measures.

Construct validity: refers to how well an instrument is able to assess the theoretical construct, such as self-efficacy, it is meant to measure. Construct validity is present when a specific measure of a concept or construct is associated with one or more other measures in a way that is consistent with the theoretical hypotheses [8].

  • Provide training for inexperienced interviewers and an opportunity for them to practice and role play with representatives from the target audience. Prepare a data collection manual that describes procedures and information such as project background, recruitment methods, and data collection schedules, procedures, materials, and submission requirements [9]. Provide each interviewer with their own manual and go over its contents during training. More details on what to include in a data collection manual can be found on page 45 of the USDA’s Addressing the Challenges of Conducting Effective Supplemental Nutrition Assistance Program Education (SNAP-Ed) Evaluations: A Step-by-Step Guide [9].

  • Include an interview script to introduce and conclude the survey. Also include instructions and explanations to guide participants through the questions.

  • When introducing the survey, provide an explanation on why it is important and how the participants’ feedback can help their community.

Sample: “Thank you for participating in this interview. Your feedback will help us learn more about food safety education and how we can improve our program to best serve your community and reduce foodborne illness. The purpose of this interview is to find out what you know about food safety, how you feel about food handling practices, and how you store and prepare foods at home. Remember, there are no right or wrong answers so please feel free to say anything that comes to mind.”

  • Make sure questions are relevant and address your evaluation objectives.

  • Think about whether you are assessing inputs, outputs, and outcomes when developing evaluation questions. Below are examples of input, output, and outcome questions to evaluate a social media campaign on Twitter [4]:

  • Input: How many pilot tested Twitter posts have been developed?

  • Output: How many messages were posted throughout the campaign (Oct-December)? How many Tweets were retweeted? How many Tweets were clicked as a favorite? How many new followers were gained?

  • Outcome: How many teenagers in the county learned about the new Germ Wars campaign?
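Referring back to the reliability and criterion validity tips above, both come down to computing a correlation between two sets of scores. A minimal sketch with made-up scores, assuming the Python numpy package is available:

  import numpy as np

  # Hypothetical knowledge scores from the same 8 participants, two weeks apart
  test_scores   = [7, 9, 5, 8, 6, 9, 7, 8]
  retest_scores = [8, 9, 6, 8, 5, 9, 7, 7]

  # Pearson correlation: values near 1 indicate consistent (reliable) results
  r = np.corrcoef(test_scores, retest_scores)[0, 1]
  print(f"test-retest correlation: {r:.2f}")

The same calculation, applied to your measure and an established criterion measure, gives a rough check of criterion validity.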



How They Did It

To evaluate a health education initiative in Georgia elementary schools, pre- and post-test questionnaires were developed to evaluate the program’s effectiveness in increasing knowledge about proper handwashing. Keeping the target audience in mind, test questions were developed to use a format similar to those commonly used in elementary schools. Questionnaires were even distributed in the test packets students were accustomed to using when taking standardized tests. Extension food safety educators and child development specialists examined the questions for content validity and readability and to ensure they were appropriate for the age and grade level of the sample.

Harrison, J. (2012). Teaching children to wash their hands – wash your paws, Georgia! Handwashing education initiative. Food Protection Trends, 32(3).



Ways to administer a questionnaire

There are many ways to administer a questionnaire. When deciding which option is best you should think about your audience and which methods they will be more receptive to and comfortable with. Also consider what method will best address your evaluation needs and is feasible given time, resources, and staff support. Below are some options you could choose from and the benefits and limitations of each:



Mailed

Benefits:

  • Participants might find it more convenient to have the ability to take the survey at their own pace and time and in their own home.

  • Reduces the need for participants to travel and can avoid transportation challenges.

  • Does not require interviewers, which can save time and staff support.



Limitations:

  • Need a list of mailing addresses.

  • Printing and mailing may be costly.

  • Unable to track or confirm whether surveys were actually received.

  • Participants might ignore the mailed questionnaire and not respond.

  • Cannot track how long it takes participants to complete the survey.

  • Can miss visual cues and reactions that might be valuable and informative.

  • Participants might go online or ask someone for assistance with answering questions and there is no way to track this.

  • Participants might not return and mail back surveys in a timely fashion.



Email or web based

Benefits:

  • Can be convenient and a good option for a tech-savvy audience.

  • Participants might find it more convenient to have the ability to take the survey at their own pace and time and in their own home.

  • Reduces the need for participants to travel and can avoid transportation challenges.

  • Low cost to set up and maintain.

  • Easy to administer and requires minimal on the ground staff support.

  • Ability to track percentage of emails that are viewed and monitor status of the survey using programs such as Qualtrics or Survey Monkey.

  • Can reduce time needed to input survey responses into a spreadsheet as survey programs will often provide that service.

Limitations:

  • Need to have a list of emails or be able to contact participants to send the survey or a link to the survey.

  • Need to ensure participants are comfortable taking online surveys (may not be best for older populations).

  • Need to ensure participants have reliable access to internet and a computer or smart phone.

  • Participants might go online or ask someone for assistance with answering questions and there is no way to track this.

  • Can miss visual cues and reactions that might be valuable and informative.



In person group-administered

Benefits:

  • Participants might feel more comfortable taking the survey in a group setting than one on one.

  • Can take less time and require fewer interviewers because the survey is administered to everyone at once.

  • Can be easy to implement following a program activity such as a workshop, training, or event, when all participants are in the same location at the same time.

  • Participants can share concerns or ask questions in real time.

  • Allows for a more personal experience and interviewer can document visual cues and reactions.

Limitations:

  • Need to be able to find a time and location where participants are all together – may be difficult if the program does not already provide opportunities for this to take place.

  • Can require more resources (staff, time, transportation).



In person one-on-one

Benefits:

  • The target audience may prefer more personal face to face interactions.

  • Allows for a more personal experience and interviewer can document visual cues and reactions.

  • Participants can share concerns or ask questions in real time.

Limitations:

  • If going door to door – participants may not feel comfortable allowing a stranger into their home or opening their door to someone they don’t know.

  • Can require more resources (staff, time, transportation).

  • Can face scheduling difficulties.



Phone

Benefits:

  • Participants might find it more convenient to be interviewed in their own home or any location of their choice.

  • Can be easier to schedule.

  • Reduces the need for participants to travel and can avoid transportation challenges.

  • Participants can share concerns or ask questions in real time.

Limitations:

  • Need a list of phone numbers.

  • Must ensure participants have access to phones.

  • Participants may not feel comfortable answering an unfamiliar number.

  • Can miss visual cues and reactions that might be valuable and informative.



Recruitment and Retention

Another factor that contributes to the success of an evaluation is participation. Below are some tips to help you recruit and retain participants:



  • Incentives Incentives Incentives! Provide incentives to participants who complete the evaluation. Do this each time you administer an assessment (at the pre-test and the post-test). Emphasize the incentive opportunity as you recruit and advertise. Involve members from the target audience when selecting the incentive (gift card, discount, coupon, freebies) to make sure it is something that people actually want and will motivate them to participate.

  • Use key informants or individuals from the target audience to help with the recruitment process.

  • Be flexible when scheduling and plan around the participants’ availability.

  • If possible provide services such as transportation or day care.

  • Be open and honest when explaining the purpose of the interview. Keep it short and let people know the process won’t take more than X amount of their time.

  • Always be respectful, friendly, and maintain a good reputation in the community or with your target audience. Listen and be attentive to questions or concerns. Creating and maintaining a good reputation can help ensure that people are willing to work with you for future opportunities as well.

  • Say thank you! Always thank participants for their time and valuable feedback.

  • Don’t lose touch with participants if you need to do a follow up test. Ask for the best way to reach them and get in touch in advance to inform them of the next survey time.

  • Follow through with any promises or commitments you make to participants. For example, if you advertise a specific incentive as a thank you gift, make sure the same incentive is provided to participants. Or, if you tell participants they will be contacted within a week with an answer to their question or to follow up on the evaluation process, make sure it is done so within the promised time frame.

  • Keep participants in the loop. Let them know if or when you will be sharing evaluation findings. Consider providing an open presentation of the final data for any interested participants and allow them to invite friends or family. This can help the target audience feel more actively involved in the process and can demonstrate how important and valuable their feedback is for the program.

Ethics

Throughout the evaluation, and even the needs assessment, it is important that you think about protecting the rights of the participants. This is particularly significant when interacting with vulnerable populations, which include children under 18, prisoners, pregnant women, and anyone who is at risk of being coerced. Make sure that you are sensitive to the culture, needs, and rights of the target audience throughout the program implementation and evaluation, instead of focusing solely on evaluation or research goals.



Consider using a community based participatory research approach. This approach involves forming collaborative partnerships between researchers and community members to ensure equitable involvement throughout the process. This can be a great way to empower your target audience to be active participants throughout the needs assessment, implementation of the program, and the evaluation. A community based participatory research approach can also help you be aware of any ethical concerns throughout the program and evaluation and learn about how to best address any potential challenges. Working closely with your target audience as an equal partner can also build valuable trusting relationships that are beneficial to both parties and can foster co-learning [24].



IRB

The U.S. Department of Health and Human Services defines research as “a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge” [23]. If your evaluation fits this definition and you plan on sharing or publishing evaluation findings as generalizable knowledge, then you will need to apply for IRB (Institutional Review Board) approval. You will also need approval if you are receiving any kind of federal funding [26]. The purpose of an IRB review is to ensure the protection of the rights of human subjects and participants in research. To obtain IRB approval you will need to submit an IRB application to a local or private IRB committee and demonstrate that you will be following federal guidelines, such as those related to research ethics and informed consent.



The length of time it takes to obtain IRB approval generally depends on factors such as the sensitivity of the topic, the target audience, and the level of risk involved with the research. There are three types of IRB review your study may be eligible for:

  1. Exempt – No risk or less than minimal risk to participants

  2. Expedited – Minimal risk to participants

  3. Full review – More than minimal risk to participants



Informed consent

Whether or not you need to obtain informed consent depends on how you plan to use the evaluation data and the requirements of your organization and/or the program or evaluation funders [16]. If you are required to obtain informed consent, here’s what you need to include in the informed consent form:





Consider the following best practices when obtaining informed consent to make sure participants fully understand their rights and the information they are consenting to [1]:

  • Recognize the importance of time – do not make the informed consent process too long.

  • Train staff on the importance of informed consent.

  • View and treat participants as part of the decision making process.

  • Consider your audience: tailor the informed consent process to address cultural differences, health literacy levels, language needs, and demographic factors.

  • Use plain and simple language and provide information at an 8th grade reading level or below.

  • Think about using alternative methods to convey information such as video, visual handouts, or PowerPoint.

  • Assess and confirm comprehension by using the teach back or teach to goal method. This involves participants saying back to you the information you shared with them until they demonstrate that they fully understand it.





HIPAA

You must also ensure that you do not violate HIPAA, the Health Insurance Portability and Accountability Act, when conducting any research or your evaluation. HIPAA protects an individual’s right to keep information about the healthcare they receive private. Depending on the information collected in the evaluation, you may need participants to sign a form providing permission for you to share their medical information. HIPAA regulations generally apply to healthcare organizations that provide medical services, which might not always be applicable to consumer food safety education programs. However, it is important to keep HIPAA in mind if you plan to ask questions related to the health status of participants or the kinds of health services they have received.



References

  1. Aldoory, L., Ryan, K.B., & Rouhani, A. (2014). Best practices and new models of health literacy for informed consent: review of the impact of informed consent regulations on health literate communications. Institute of Medicine.

  2. Beatty, P. C., & Willis, G. B. (2007). Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly, 71(2), 287–311.

  3. Borrusso P, Quinlan JJ. Development and Piloting of a Food Safety Audit Tool for the Domestic Environment. Foods. 2013; 2(4):572-584.

  4. Brodalski, D., Brink, H, Curtis, J., Dia, S., Schindelar, J., Shannon, C., & Wolfson, C. (2011). The health communicators social media toolkit. Electronic Media Branch, Division of News and Electronic Media, Office of the Associate Director of Communication at the Centers for Disease Control and Prevention (CDC).

  5. Byrd-Bredbenner, C., Maurer, J. Wheatley, V., Cottone, E., & Clancy, M. (2007). Observed food safety behaviors of young adults. British Food Journal, 109(7):519-530.

  6. Byrd-Bredbenner, C., Schaffner, D. W, & Abbot, J. M. (2010). How food safe is your home kitchen? A self-directed home kitchen audit. Journal of Nutrition Education and Behavior. 42:286-289.

  7. Byrd-Bredbenner, C., Wheatley, V., Schaffer, D., Bruhn, C., Blalock, L., & Maurer, J. (2007). Development of Food Safety Psychosocial Questionnaires for Young Adults. Journal of Food Science Education; 6(2):30 – 37.

  8. Carmines, E. G., & Zeller, R. A. (1979). Reliability and Validity Assessment. Thousand Oaks, CA: Sage Publications.

  9. Cates, S., Blitstein, J., Hersey, J., Kosa, K., Flicker, L., Morgan, K., & Bell, L. (2014). Addressing the challenges of conducting effective supplemental nutrition assistance program education (SNAP-Ed) evaluations: a step-by-step guide. Prepared by Altarum Institute and RTI International for the U.S. Department of Agriculture, Food and Nutrition Service.

  10. Collins, D. (2003). Pretesting survey instruments: An overview of cognitive methods. Quality of Life Research, 12, 229-238.

  11. Creswell, J. W. (2007). Chapter 3: Designing a Qualitative Study. Qualitative Inquiry and Research Design: Choosing among Five Approaches, 35–41.

  12. Daugherty, S. D., Harris-Kojetin, L., Squire, C., & Jael, E. (2001, August 5-9). Maximizing the quality of cognitive interviewing data: An exploration of three approaches and their informational contributions. Proceedings of the Annual Meeting of the American Statistical Association.

  13. Dharod, J. M., Perez-Escamilla, R., Paciello, S., Venkitanarayanan, K., Bermudez-Millan, A., & Damio, G. (2007). Critical control points for home prepared 'Chicken and Salad' in Puerto Rican households. Food Protection Trends, 27(7).

  14. Grembowski, D. (2001). The practice of health program evaluation. London, U.K.: Sage Publications.

  15. Haeger, H., Lambert, A. D., Kinzie, J., & Gieser, J. (2012). Using cognitive interviews to improve survey instruments. Indiana University Center for Postsecondary Research - Paper presented at the annual forum of the Association for Institutional Research.

  16. Issel, L. M. (2014). Health program planning and evaluation: A practical and systematic approach for community health. Sudbury, Mass.: Jones and Bartlett Publishers.

  17. Jobe, J. B. (2003). Cognitive psychology and self-reports: Models and methods. Quality of Life Research, 12, 219-227.

  18. Kendall, P. A., Elsbernd, A., Sinclair, K., Schroeder, M., Chen, G., Bergmann, V., & Medeiros, L. C. (2004). Observation versus self-report: Validation of a consumer food behavior questionnaire. Journal of Food Protection, 67(11), 2578–86.

  19. Medeiros, L. C., Hillers, V. N., Chen, G., Bergmann, V., Kendall, P., & Schroeder, M. (2004). Design and development of food safety knowledge and attitude scales for consumer food safety education. Journal of the American Dietetic Association, 104(11), 1671–1677.

  20. Parra, P. A., Kim, H., Shapiro, M. A., & Gravani, R. (2014). Home food safety knowledge, risk perception, and practices among Mexican-Americans. Food Control, 37(1).

  21. Phang, H. S., & Bruhn, C. M. (2011). Burger preparation: what consumers say and do in the home. Journal of Food Protection, 74(10).

  22. Takeuchi, M. T., Edlefsen, M., McCurdy, S. M., & Hillers, V. N. (2006). Development and validation of stages-of-change questions to assess consumers’ readiness to use a food thermometer when cooking small cuts of meat. Journal of the American Dietetic Association, 106(2), 262–6.

  23. U.S. Department of Health & Human Services. (2009). Code of federal regulations, title 45, public welfare, part 46 protection of human subjects. Retrieved from http://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/index.html

  24. Wallerstein, N. B., & Duran, B. (2006). Using community-based participatory research to address health disparities. Health Promotion Practice, 7(3), 312–23.

  25. White Paper on Consumer Research and Food Safety Education. (DRAFT).

  26. Willis, G. B. (1999). Cognitive interviewing: A “How To” guide. 1999 Meeting of the American Statistical Association. Research Triangle Park, NC: Research Triangle Institute.

Chapter 6: Data Analysis



Analyzing quantitative data

There are several steps to analyzing quantitative data such as cleaning the data set, coding or re-naming variables, gathering descriptive data, and analyzing correlations. Below are things to consider including in your analysis process which can differ based on your evaluation design, the data you collected, and what you want to find out from the evaluation. Remember that statistical analyses can sometimes be complicated and challenging if you have not had previous experience analyzing quantitative data. You can always reach out to a statistician or someone with more experience for assistance with this part of your evaluation. You could also consider taking a statistics course at a local college or online to learn how to analyze data using a basic spreadsheet and your own calculations, or by using a statistical software program.



Input data

  • Before inputting your data, determine who will have access to the data files, when the data will be entered after it has been collected, and where files will be stored. Consider creating a data entry schedule if the data collection process is ongoing or occurs at more than one point in time.



  • Type data into a spreadsheet such as Excel or into a statistical software program like SAS, SPSS, or Epi Info, a free program from the Centers for Disease Control and Prevention (CDC) for public health professionals.



  • Go through and clean your data. Check for any errors (typing, spelling, upper or lower case inconsistencies, etc.), duplicates, missing values, and inconsistent or invalid responses. Excel even provides tools and instructions to help you clean your data. (A brief data-cleaning sketch appears at the end of this list.)



  • If participation in your assessment is anonymous, make sure the data set does not include any identifiable information. You can create an identification number for each response to keep track and refer back to them when needed.



  • Protect or restrict access to the data set to ensure that the information cannot be retrieved by someone who does not have permission or authority to access it. Take special precautions if the evaluation is confidential.



  • Back up your data by saving a duplicate version of the data set.
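
For those comfortable with a little programming, below is a minimal data-cleaning sketch in Python using the pandas library. The file name and the checks shown are illustrative assumptions, not a required procedure; the same steps can be done directly in Excel or your statistical software.

# A minimal data-cleaning sketch using Python and pandas.
# Assumes survey responses were exported to a CSV file; the file name
# below is a hypothetical placeholder.
import pandas as pd

df = pd.read_csv("survey_responses.csv")

# Standardize text responses so "Yes", "yes ", and "YES" are counted together.
text_columns = df.select_dtypes(include="object").columns
for col in text_columns:
    df[col] = df[col].str.strip().str.lower()

# Flag exact duplicate rows and count missing values per question.
print("Duplicate rows:", df.duplicated().sum())
print("Missing values per column:")
print(df.isna().sum())

# Drop duplicates and save a cleaned copy for analysis.
df = df.drop_duplicates()
df.to_csv("survey_responses_clean.csv", index=False)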



Clarify your objectives and approach

  • Think about what you want to find out from the data based on your evaluation objectives and the questions you asked evaluation participants. You may want to meet with stakeholders or partners before analyzing the data to ensure everyone is on the same page.

  • Look at program activities, your outcomes and indicators, the evaluation design, and data collection tools to help you identify what you need to analyze and how. You can use a table such as the one below to help you plan.



Program Activity | Outcome Objectives | Outcome Indicators | Evaluation Design | Data Collection Tool | Data Analysis Method

(Fill in one row for each program activity.)

  • Decide on a consistent way to code and organize the data set. Re-name variables or categorize questions and responses based on your evaluation needs.



  • Determine whether or not you want to group or dichotomize certain variables. For example, you may want to group open-ended responses to “How many individuals live in your household?” into two categories: [1-3 individuals] and [4 or more individuals].



  • Decide if you want to include outliers, or unusual and often extreme values that significantly differ from the rest of the data points, for each variable. Outliers can sometimes significantly impact your findings so it is helpful to think about why outliers may have occurred, whether they are accidental, or if they are valid data points that should be included in the analysis.

Gather descriptive data

  • Explore demographic characteristics of evaluation participants. Start by identifying what you want to find out, such as: is the sample diverse in age, gender, and socioeconomic status? Does the sample consist of mostly one gender or individuals from one geographic location?



  • If your evaluation design included a control group you can compare demographic data between groups to find out if they are equivalent in terms of the characteristics you collected data on.



  • When applicable, look at the evaluation participants’ pre and post scores. Start by clarifying what you want to find out. For example you may want to ask: how were the scores distributed? Did participants perform better in the post test? How many participants reached the target score? What percentage of participants performed below the target score?



  • Look for data on different measures such as the mean, median, mode, percentage, distribution, and dispersion. Here’s what each of the measures can tell you:

    • Mean: Provides the average value. This is the sum of all the values divided by the number of values.

    • Median: Tells you the middle value in the whole set of values.

    • Mode: This is the value that occurs most frequently.

    • Percentage: Can tell you what percentage of participants reached a target value, or scored above or below a certain value.

    • Distribution: Can tell you the frequency or range of values for a certain variable. For example, you can divide food safety knowledge scores into categories (0-25%, 25-50%, 50-75%, 75-100%) to find out what percentage of participants scored within each range.

    • Dispersion: Tells you how the values are spread around the mean. There are three ways to look at dispersion: the range (the highest value minus the lowest value), the variance, and the standard deviation, which take into account how outliers affect the spread of values. To calculate the variance: 1. Find the mean. 2. Subtract the mean from each value and square the result. 3. Find the sum of all the squared differences. 4. Divide the sum by the number of values. The standard deviation is the square root of the variance. (A short calculation sketch appears at the end of this section.)



  • Find out if the data has a normal distribution. In a normal distribution the mean, median, and mode are all equal, the distribution is symmetric around the mean, and 50% of the values fall below the mean and 50% fall above it. (In a standardized, or standard normal, distribution the mean is 0 and the standard deviation is 1.)



  • If your data is normally distributed, calculate confidence intervals to estimate the range within which the true population value is likely to fall. For example, a 95% confidence interval tells you that you can be 95% confident that the true population mean lies within the calculated range around your sample mean.

  • Link and compare data. Examples of ways you could link different data sets include:

    • Linking control and comparison group data to find any differences in the outcome variable.

    • Linking factors such as program participation or exposure to food safety messages with data on the outcome variable to find out how different factors influence the results.

    • Comparing microbial samples to self-reported data on food handling to explore validity of data collected.
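
If you would like to check these calculations yourself, below is a minimal sketch using Python’s built-in statistics module. The scores are made-up example values, and the 95% confidence interval uses a simple normal approximation; your spreadsheet or statistical software will produce the same measures.

# A minimal sketch of the descriptive measures discussed above, using
# Python's built-in statistics module. The scores below are made-up
# example values, not real evaluation data.
import statistics

scores = [60, 70, 70, 75, 80, 85, 90, 95]

mean = statistics.mean(scores)            # average value
median = statistics.median(scores)        # middle value
mode = statistics.mode(scores)            # most frequent value
value_range = max(scores) - min(scores)   # highest value minus lowest value
variance = statistics.pvariance(scores)   # average squared distance from the mean
std_dev = statistics.pstdev(scores)       # square root of the variance

# Share of participants who reached a target score of 75 or higher.
target = 75
pct_at_target = 100 * sum(s >= target for s in scores) / len(scores)

# Approximate 95% confidence interval for the mean (normal approximation).
sample_sd = statistics.stdev(scores)
margin = 1.96 * sample_sd / len(scores) ** 0.5
ci_low, ci_high = mean - margin, mean + margin

print(f"mean={mean}, median={median}, mode={mode}, range={value_range}")
print(f"variance={variance:.1f}, std dev={std_dev:.1f}")
print(f"{pct_at_target:.0f}% of participants scored {target} or higher")
print(f"95% CI for the mean: {ci_low:.1f} to {ci_high:.1f}")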



Examine change and association

  • Measure differences between pre and post-test results and find out whether the outcome variable increased, decreased, or stayed the same. Keep in mind that if your outcome variable was already high at baseline or pre-test, it will be difficult to measure or notice any change.

The table below explains how to calculate a change in score (e.g. knowledge test or household audit score) for different evaluation designs [3].





Evaluation Design | Measure of Change in Score

One group, pre-post test design: Sum of (each posttest score – each pretest score) ÷ number of paired scores = amount of change

Nonequivalent group, post-test only design: Mean participants’ (intervention group) posttest score – mean nonparticipants’ (comparison group) posttest score = amount of change

Two group, pre-post test design: (Mean participants’ posttest score – mean participants’ pretest score) – (mean nonparticipants’ posttest score – mean nonparticipants’ pretest score) = amount of change
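
Below is a minimal sketch, using made-up example scores in Python, of the one-group and two-group change calculations from the table above; a simple spreadsheet formula works just as well.

# A minimal sketch of the change-in-score calculations in the table above,
# using made-up example scores rather than real evaluation data.
from statistics import mean

# One group, pre-post test design: average of each participant's change.
pre  = [55, 60, 70, 65]
post = [75, 80, 85, 70]
one_group_change = mean(p2 - p1 for p1, p2 in zip(pre, post))

# Two group, pre-post test design: participants' change minus nonparticipants' change.
control_pre  = [58, 62, 68, 66]
control_post = [60, 63, 70, 66]
two_group_change = (mean(post) - mean(pre)) - (mean(control_post) - mean(control_pre))

print(f"One-group change: {one_group_change:.1f} points")
print(f"Two-group change: {two_group_change:.1f} points")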



  • Look at associations or correlations. Remember that you can only assess a causal relationship if you used an experimental design [3]. Find out whether there is a positive or negative association or correlation between your program and the outcome variable, or whether there is a null outcome, which occurs when your program shows no effect on a particular variable.

  • Find out how different characteristics or demographic factors influence the outcome variable.



  • When making comparisons and examining associations, different statistical tests are needed depending on whether you are working with parametric (interval) data, such as refrigerator temperature, or nonparametric (categorical) data, which can be nominal (e.g., gender) or ordinal (e.g., a workshop rating of 1 = poor, 2 = satisfactory, 3 = good, 4 = excellent).



The table below shows commonly used parametric and nonparametric statistical tests for comparison or association tests [3]:



Type of data: Parametric (interval data with a normal distribution)

Comparison tests: Difference scores; t-tests of difference of means; analyses of variance (ANOVA, ANCOVA)

Association tests: Correlation; hierarchical analyses



Type of data: Nonparametric (nominal or ordinal data)

Comparison tests: Chi-square tests based on contingency tables

Association tests: Chi-square tests based on contingency tables; odds ratio; relative risk; other (sign test, Wilcoxon, Kruskal-Wallis)



  • Explore the statistical significance of your findings to find out the probability of an observed result occurring by chance. Statistical significance is usually described as a p value of <0.05 or <0.01, meaning there is less than a 5% or 1% probability that the observed finding occurred by chance. (A short sketch of two commonly used tests appears at the end of this section.)



  • Explore what moderating or mediating variables may have influenced the outcome data when examining associations. A moderating variable is a variable that influences the strength or direction of the relationship between the independent and outcome variable. For example, a past experience of food poisoning can be a moderating variable if its occurrence results in a stronger association between participating in a food safety workshop and reports of safe food handling practices.



A mediating variable is a variable that is part of the causal pathway between the independent and outcome variable. For example, possessing a thermometer at home can be a mediating variable between learning about safe cooking of meats and using a thermometer to check the internal temperature of cooked meat, poultry, and egg dishes. Owning a thermometer is a mediator in this case because a person cannot use a thermometer without owning one.
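
As an illustration only, below is a minimal sketch of two of the tests named above using Python’s scipy.stats library and made-up data. Statistical software such as SAS, SPSS, or Epi Info provides the same tests, and a statistician can help you choose the test that fits your design.

# A minimal sketch of two commonly used tests, using scipy.stats
# (an optional, widely used library). All numbers are made-up examples.
from scipy import stats

# Paired t-test: compare the same participants' pre- and post-test scores
# (interval data, roughly normal distribution).
pre  = [55, 60, 70, 65, 72, 68]
post = [75, 80, 85, 70, 78, 74]
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"Paired t-test: t={t_stat:.2f}, p={p_value:.3f}")

# Chi-square test on a contingency table: e.g., reported thermometer use
# (yes/no) among program participants vs. non-participants (categorical data).
table = [[30, 20],   # participants: uses thermometer, does not
         [15, 35]]   # non-participants: uses thermometer, does not
chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(f"Chi-square: chi2={chi2:.2f}, p={p_value:.3f}")

# A p value below 0.05 suggests the observed difference is unlikely
# to have occurred by chance alone.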



Explore other factors

  • Analyze attrition to find out how loss to follow-up might have affected or biased your findings. An attrition estimate can be calculated as the share of baseline participants who were lost by follow-up: (# of evaluation participants at baseline – # of evaluation participants at follow up) ÷ # of evaluation participants at baseline. This should be less than 10%; if it is greater than 10%, you can run a logistic regression model to examine whether attrition is related to any demographic variables [1]. (A sketch of this calculation follows the table below.)



  • Explore change related to a target outcome (effectiveness) or related to intervention effort (efficiency).



You can test the effectiveness and efficiency of the program using the calculations below [3]:



Measure | Calculation

Effectiveness ratio: (Posttest score – pretest score) ÷ (target score – pretest score) = effectiveness ratio

Intervention efficiency: (Mean participants’ posttest score – mean nonparticipants’ posttest score) ÷ (amount of intervention received by the participant group – amount of intervention received by the nonparticipant or control group) = intervention efficiency
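
Below is a minimal sketch of the attrition and effectiveness calculations, using made-up example numbers in Python; the same arithmetic can be done by hand or in a spreadsheet.

# A minimal sketch of the attrition and effectiveness calculations above,
# using made-up example numbers.
baseline_n = 120   # participants who completed the pre-test
followup_n = 104   # participants who also completed the post-test

attrition_rate = (baseline_n - followup_n) / baseline_n
print(f"Attrition: {attrition_rate:.0%}")   # flag for further review if above 10%

# Effectiveness ratio: how much of the intended improvement was achieved.
pretest_mean = 55
posttest_mean = 72
target_score = 80
effectiveness = (posttest_mean - pretest_mean) / (target_score - pretest_mean)
print(f"Effectiveness ratio: {effectiveness:.2f}")  # 1.0 means the target was fully reached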





How They Did It

An evaluation of the Fight BAC! food safety campaign in an urban Latino population in Connecticut examined campaign coverage, consumer satisfaction, and influence on food safety knowledge, attitudes, and behaviors. A cross-sectional pre and post-test survey was administered to 500 Latino consumers. Analysis of the data was conducted using SPSS, and categorical responses with more than two categories were recoded into two categories. For example, when coding data, the response options “hardly ever” and “sometimes” were combined into one category and “frequently” and “always” into a second category.

Analyses showed that recognition of the Fight BAC! logo increased from 10% to 42% (P<.001). Participants exposed to campaign messages were more likely to have an “adequate” food safety knowledge score and were more likely to self-report proper procedures for defrosting meat (14% vs 7%; P = .01). In addition, a dose-response association was found between exposure to the campaign and awareness of the term “cross-contamination.” Out of four media sources (radio, television, newspaper/magazine, and poster) television and radio had the highest levels of exposure.

Dharod, J. M., Perez-Escamilla, R., Bermudez-Millan, A., Segura-Perez, S., & Damio, G. (2004). Influence of the Fight BAC! food safety campaign on an urban Latino population in Connecticut. Journal of Nutrition Education and Behavior, 36.





Analyzing qualitative data

There are several steps to analyzing qualitative data which can differ depending on the data collected and your evaluation needs. Below are steps often included in a qualitative analysis:



  • Transcribe audio recorded or videotaped evaluation data, when applicable. You can transcribe responses into a Word document, spreadsheet, or into a software program such as Atlas.ti or Ethnograph.



  • If participation in the evaluation is anonymous, do not include any identifiable information in the transcription. You can create an identification number for each evaluation response instead of using names of the participants.



  • Decide if you want to have a starting list of relevant and important themes to look for in the data transcripts. For example, you may have conducted some background research and found that financial constraints can significantly influence food safety practices and you might want to include financial constraints as a theme to look for in the data.



  • With qualitative data, reliability is often confirmed through multiple analyses of interview transcripts by more than one researcher [2]. Consider finding another researcher or staff member to independently review the transcripts and identify themes.



  • Read through the entire transcript and identify important recurring patterns and themes.



  • Following the initial review of transcripts and identification of themes, researchers should meet with each other to review findings and come to an agreement on themes that are meaningful and important.



  • Categorize identified themes and patterns. Share the identified themes with stakeholders and partners to identify priority themes and to refine and confirm final themes that will be used in the analysis. You may want to include subthemes within each theme.



  • Consider re-organizing data under final themes and subthemes. You may want to go through the transcript to mark and code the data based on each theme or subtheme.



  • Develop a narrative of the research findings and create a report that discusses final themes. Consider including direct quote examples and descriptions of the importance and implications of each theme.

  • Interpret identified themes and subthemes to form conclusions about your program.



Share your findings

  • Use color coded graphs and tables to share findings, especially for quantitative data. Excel and other data analysis programs usually offer tools to help you translate data into tables, charts, and graphs. (A simple charting sketch appears at the end of this section.)



  • Determine what parts of your analysis are most relevant or important to share and include in a final report. You may want to create multiple reports for different stakeholders. For example, you may want to have slightly different versions for funders, partners, and community members depending on what is most relevant to them and what they want to know.



  • Strengthen your evaluation report by using mixed methods to share findings. Provide quantitative findings that are supplemented by qualitative personal narratives offering further insight into participants’ perceptions, behaviors, and responses. Use direct quotes to tell a story about how your program impacted participants.



  • Facilitate a discussion about your findings with other staff, partners, and stakeholders. Reflect on what parts of the program worked, what didn’t, and how evaluation data can be used to improve the program. Use evaluation findings to answer questions such as:

    • Can any changes be made to better address the needs of the target audience and any barriers they are facing related to food safety?

    • Do any education strategies and methods need to be modified for the program to be more effective in influencing food safety knowledge, attitudes, skills, aspirations (KASA), and behavior?

    • Do program staff or volunteers require additional training or support in a particular area?

    • Are there any additional resources or partnerships we could use to maximize our impact?

    • Does additional evaluation or research need to be gathered to formulate better conclusions about the program?



  • Publish your findings and share what you have learned with other educators and decision makers, when applicable. There are gaps in consumer food safety education research, making it important that new insights and lessons learned are shared and published, such as through peer-reviewed journals [4]. Examples of publications you could submit articles to include:



  • Remember that it is not only important to share what strategies work, but also what strategies don’t work. By sharing failures and challenges in addition to strengths and successes, other educators can learn about best practices and what methods to avoid. It is also valuable to share any validated instruments or resources you have developed for your program so that other educators and researchers do not have to reinvent the wheel when useful and applicable tools already exist.



  • Be creative when it comes to sharing your findings and what you learned from the evaluation. In addition to submitting an article to a peer reviewed journal you can:

  • Post evaluation data on your website or blog and link to it through social media channels such as Twitter or Facebook.

  • Create a PowerPoint presentation of findings to share with local educators at a workshop event.

  • Host and plan a webinar to share your findings with individuals who may not be able to attend an in person event.

  • Submit conference abstracts to present your research at food safety or education conferences such as the Consumer Food Safety Education Conference, the International Association for Food Protection’s annual meeting, or the Society for Public Health Educations’ annual meeting.
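
If you prefer scripting your charts, below is a minimal sketch that draws a simple pre/post bar chart with Python’s matplotlib library. The practices and percentages shown are made-up placeholders; Excel’s chart tools can produce an equivalent graph.

# A minimal sketch of a simple chart for sharing findings, using matplotlib.
# The percentages are made-up example values, not real evaluation data.
import matplotlib.pyplot as plt

practices = ["Wash hands", "Use thermometer", "Check fridge temp"]
pre_pct  = [45, 20, 30]   # % reporting the practice before the program
post_pct = [70, 48, 55]   # % reporting the practice after the program

x = range(len(practices))
width = 0.35
plt.bar([i - width / 2 for i in x], pre_pct, width, label="Pre-test", color="#c0c0c0")
plt.bar([i + width / 2 for i in x], post_pct, width, label="Post-test", color="#2a7ab9")
plt.xticks(list(x), practices)
plt.ylabel("Percent of participants")
plt.title("Reported food safety practices before and after the program")
plt.legend()
plt.savefig("findings_chart.png", dpi=150)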

References

  1. Cates, S., Blitstein, J., Hersey, J., Kosa, K., Flicker, L., Morgan, K., & Bell, L. (2014). Addressing the challenges of conducting effective supplemental nutrition assistance program education (SNAP-Ed) evaluations: a step-by-step guide. Prepared by Altarum Institute and RTI International for the U.S. Department of Agriculture, Food and Nutrition Service.

  2. Creswell, J. W. (2007). Qualitative inquiry and research design (2nd Ed.) Thousand Oaks, CA: Sage Publications.

  3. Issel L. (2004). Health Program Planning and Evaluation: A Practical, Systematic Approach for Community Health. London: Jones and Bartlett Publishers.

  4. White Paper on Consumer Research and Food Safety Education. (DRAFT).

Chapter 7: Tips and Tools



You may not always have the time or resources to plan and implement a rigorous program evaluation using experimental designs and comparison groups. However, it is always better to do what you can and collect some evaluation data than to collect nothing at all. Below are tips and tools you can use to evaluate activities in a time crunch and with limited resources. You can adapt each of the tools to suit your own evaluation needs and priorities.



The resources below include:

  • A tip sheet for evaluating educational presentations of consumer food safety information (e.g. workshop, class, training, or webinar)

  • A tip sheet for sharing and evaluating educational information online (e.g. brochures, fact sheets, infographic)

  • A logic model template for planning and identifying program outcomes and indicators [adapted 3,4]

  • A participant evaluation form you can use to evaluate educational presentations (e.g. workshop, class, training, or webinar) [adapted 2]

  • An activity tracker form for process evaluation and to track inputs and outputs [Question 18 - 1]

  • A budget tracker form that you can use to plan your program budget or to track spending and expenses [adapted 5]

  • Table templates to track web and social media metrics

  • An Educational Material Feedback Form to gather usability data and feedback about your materials



References

  1. Little, D., & Newman, M. (2003). Food stamp nutrition education within the cooperative extension/ land-grant university system national report – FY 2002. Prepared for United States Department of Agriculture Cooperative State Research, Education, and Extension Service Families, 4-H, and Nutrition Unit. Washington, D.C.

  2. Prince George's County Health Department's Health Enterprise Zone evaluation sheet of Health Literacy Workshops. (2015).

  3. United States Department of Agriculture (USDA) and National Institute of Food and Agriculture (NIFA). (n.d). Community nutrition education (CNE) – logic model detail.

  4. United States Department of Agriculture (USDA) and National Institute of Food and Agriculture (NIFA). (n.d). Community nutrition education (CNE) – logic model overview.

  5. W. K. Kellogg Foundation. (2004). Evaluation handbook. MI.

Quick Tips and Tools for evaluating an educational presentation of consumer food safety information (e.g. workshop, class, training, or webinar) – when time and resources are limited

*Underlined terms will be hyperlinks to the actual tool/template



Before Activity/Program

  • Clarify the purpose of the evaluation and what you want to find out. Refer back to your program objectives or create a logic model like the one below to identify education goals and outcome indicators.



  • Decide on how you want to collect evaluation data such as via in person interviews, questionnaire, online surveys, or focus groups. The format or setting used for your program activities can help you determine the best data collection method to use. For example, if the activity is an in person workshop, handing out a written questionnaire immediately following the workshop might be your best option. For an online activity such as a webinar, you may find it best to create an online survey using free programs such as Qualtrics or Survey Monkey.

  • Select your evaluation instrument and think about what questions you want to ask. When possible, use pre-validated instruments such as these. You can also use the Participant Evaluation Form template and adapt it to fit your needs.

  • Remember health literacy and cultural sensitivity. Make sure the questions you want to ask are clear, direct, and easy to understand, and that you take into account reading levels of participants. When in doubt, write at a 7th or 8th grade reading level or below.

  • Think about the length of time it takes to complete the survey and keep it brief. Keep interviewer administered surveys (e.g. in person interviews) to 15-20 minutes and self-administered surveys (e.g. questionnaires) to 5-10 minutes.

  • Pilot test your survey instrument and make sure your test is valid and reliable. Adjust and refine your questions based on the feedback you receive in the pilot test.

  • Decide when and how often to collect evaluation data. When possible, conduct a pre and post-test to collect data before and after the activity. You can also use a multiple time series approach to collect data at least twice before and twice after the activity, to examine trends over time. If only a one time post test is feasible, consider selecting a few priority questions to ask evaluation participants before the activity as a partial pre-test.

  • Create a brief interview script or outline talking points to explain to participants why the evaluation is important and how their feedback will be used to improve the program.

  • When applicable, apply for IRB approval and create an informed consent form for participants. Make sure the informed consent process is easy for participants to understand and consider using the teach back or teach to goal method to assess and confirm comprehension.

During Activity/Program

  • Provide a sign in sheet for participants. Consider adding a column to ask participants if they would like to be contacted with any additional information about consumer food safety. You can use this list for the promotion of additional food safety education materials or future activities. If participants show hesitation in providing their personal information in the sign in sheet, make it a voluntary request.

  • Conduct a process evaluation. You can use tools such as the Activity Tracker Form or the Budget Tracker Form.















After Activity/Program

  • Thank participants for their time and feedback. Consider providing a small thank you gift such as a gift card, coupon, or small health or food safety related freebies like a thermometer or toothbrush. If financial constraints are a barrier, reach out to local businesses to request donations for the incentives.

  • Let participants know if they will be receiving any follow up information with the evaluation findings and follow through with any promises or commitments made to participants.

  • Input evaluation data or responses into a spreadsheet such as Excel or into a statistical software program like SAS, SPSS, or Epi Info to analyze the data.

  • Use color coded graphs and tables, such as the one below created in Excel, to share your findings.

  • Share what you learned with stakeholders, staff, program participants, or other consumer food safety educators. Reflect on the data to explore how your program might be improved. Discuss questions such as:

    • Can any changes be made to better address the needs of the participants?

    • Do any education strategies and methods need to be modified for the program to be more effective in reaching education objectives?

    • Do program staff or volunteers require additional training, resources, or support in a particular area?

















Quick Tips and Tools for sharing and evaluating educational information online – when time and resources are limited

*Underlined terms will be hyperlinks to the actual tool/template

Plan and Prepare

  • Create a promotion plan and timeline that includes the launch date of the new educational resource, a description of how materials will be promoted, and a list of potential partners that can help with promotion.

  • Utilize social media to promote your online resource. Designate hashtags related to the resource topic and draft sample Tweets and posts for Twitter and Facebook that link to and promote your materials.

  • Partner with organizations or businesses in your area that have a similar target audience to support the promotion of your new materials through their own social media channels and connections. Potential partners include teachers, grocery stores, food banks, or community centers. You can also reach out to local journalists or bloggers (such as food or mommy bloggers) to let them know about your latest efforts and ask them to help spread the word about your new resources.

Evaluate

  • Use these tables to track web and social media metrics. You can modify the tables by removing or adding new columns. You can also use the tables to compare data, such as the monthly average of unique visits, in the months before and after the release of your new materials or activity. For example, you might want to compare web page views 3 months before and 3 months after your launch. (A short calculation sketch appears at the end of this list.)

  • Conduct key informant interviews with other food safety educators or partners you are working with to supplement the web analytics you collect. You can also create a brief online survey using the Educational Material Feedback Form template to gather feedback from individuals who have used your materials.

  • Analyze and share evaluation findings with staff, stakeholders, and program participants. Highlight challenges and successes and reflect on the data to determine how materials or promotion strategies might be improved.
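
Below is a minimal sketch of the before-and-after comparison described above, using made-up monthly page view numbers in Python; the same comparison can be done in a spreadsheet alongside your web analytics export.

# A minimal sketch comparing average monthly page views before and after a
# launch date. The numbers are made-up placeholders, not real analytics data.
views_before = [1200, 1350, 1100]   # monthly page views, 3 months before launch
views_after  = [2100, 2400, 2250]   # monthly page views, 3 months after launch

avg_before = sum(views_before) / len(views_before)
avg_after = sum(views_after) / len(views_after)
pct_change = 100 * (avg_after - avg_before) / avg_before

print(f"Average monthly views before launch: {avg_before:.0f}")
print(f"Average monthly views after launch: {avg_after:.0f}")
print(f"Change: {pct_change:+.0f}%")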







[Insert name of organization/activity and logo]

Participant Evaluation Form

Your feedback is important and will help us to improve the [INSERT program/activity]. Please take a few minutes to fill out this evaluation form.


How much do you agree or disagree with the items below:


Strongly Agree          Agree          Neutral          Disagree          Strongly Disagree

1. The [INSERT program/activity] lived up to my expectations.

2. The [INSERT program/activity] taught me about food safety and [INSERT program/activity topic].

3. The information I learned in the [INSERT program/activity] was useful and relevant to me.

4. I feel confident that I can apply what I learned when [INSERT behavior - e.g. cooking or grocery shopping].

5. I plan to apply what I learned when [INSERT behavior - e.g. cooking or grocery shopping]

6. The presenter was knowledgeable and engaging.

7. I plan to share what I learned with friends and family.


8. What part of the [INSERT program/activity] was most interesting or useful to you?



9. How would you rate the [INSERT program/activity] overall?

Excellent Good Average Poor Very poor


10. How would you improve the [INSERT program/activity]?



11. Are you the main food preparer in your household? Yes No


If yes, how many people live in your household? #_____



12. Any additional comments?


Thank you!

Activity Tracker Form

Conduct a process evaluation and track program inputs and outputs by providing this form to staff or volunteers to complete for each activity. You can adapt this form to suit your program’s needs. Once collected, input responses into a spreadsheet to keep track of program activities. Don’t forget to reflect on the information gathered to see how activities can be improved.

  1. Name:

  2. Date activity took place:

  3. Describe the type of activity implemented (e.g. workshop/brochure development and distribution/webinar).





  4. Describe the main objectives of the project and the food safety topic addressed.







  5. List all materials and resources used for this activity.

  6. Provide the names of staff or volunteers that worked on this project and # of hours worked.

Name: Hours:







  7. List any equipment, printed materials, or tools acquired and used for the activity:





  8. Other resources used:





  9. Cost breakdown:

$ for

$ for

$ for

$ for

$ TOTAL

  10. How was the activity advertised (include duration of promotion and where it was advertised)?







  11. What was the target participation goal for this activity? ____________

How many individuals actually participated in the activity? ____________



  12. How many educational handouts or materials were distributed?







  13. How many participants filled out the sign in sheet and checked that they would like to continue to receive follow up information? ______________



  14. How many evaluation forms were filled out and collected?









  15. Based on the evaluation forms, what was the average overall rating of the activity?



  16. Describe participant reactions to the activity and information shared.







  17. Do you think the activity was implemented as planned or intended? Why or why not?







  18. Do you feel the program has enough resources to provide people with the food safety information they need?







  19. What challenges did you face with planning and implementing the activity?







  20. What were the strengths of the activity? Please provide specifics about what worked well in the planning and implementation of the activity.







  21. Do you think any aspect of this activity could be improved? How?







  22. Please provide demographic information on participants or contacts.





Budget Tracker

Expense Category | Month/Year | Month/Year | Month/Year | Month/Year | Month/Year

Incentives | $ | $ | $ | $ | $

Supplies and Equipment | $ | $ | $ | $ | $

Printing Materials | $ | $ | $ | $ | $

Communication | $ | $ | $ | $ | $

Travel | $ | $ | $ | $ | $

Consultants | $ | $ | $ | $ | $

Evaluation Staff | $ | $ | $ | $ | $

TOTAL | $ | $ | $ | $ | $









[Insert name of organization/activity and logo]


[Insert Name of Material] Feedback Form


Your feedback is important and will help us improve our food safety education materials. Please take a few minutes to fill out this evaluation form.


  1. Please share how useful the information provided in the [Insert Name of Material] is to you.


Extremely Useful Very Useful Somewhat Useful Not Very Useful Not At All Useful


2. Do you intend to use the information you learned in the [Insert Name of Material] when handling foods at home?


Yes No Maybe


3. What food safety topics do you want to learn more about and would you like to see included in our food safety materials?





4. How can the [Insert Name of Material] be improved?





5. If you plan to or have already shared and distributed the [Insert Name of Material] please write how many have or will be distributed. Also, describe how and to whom.





6. Would you like to receive email updates about our latest food safety activities and information? Yes No


If yes, please provide your email address: _________________________________


7. Any additional comments?




Thank you!


