Evaluation and Measurement Strategy Report Form

Environmental Public Health Tracking Network (Tracking Network)

Att5e_Evaluation&PMStrategyReport_Form

Evaluation and Performance Measurement Strategy Report

OMB: 0920-1175




Form Approved

OMB No. 0920-xxxx

Exp. Date xx/xx/201x







Environmental Health Tracking Evaluation Guide

CDC Division of Environmental Health Hazards and Health Effects















Developing an Evaluation Plan
























CDC estimates the average public reporting burden for this collection of information as 20 hours per response, including the time for reviewing instructions, searching existing data/information sources, gathering and maintaining the data/information needed, and completing and reviewing the collection of information. An agency may not conduct or sponsor, and a person is not required to respond to a collection of information unless it displays a currently valid OMB control number. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden to CDC/ATSDR Information Collection Review Office, 1600 Clifton Road NE, MS D-74, Atlanta, Georgia 30333; ATTN: PRA (0920-xxxx).








Department of Health and Human Services

Centers for Disease Control and Prevention

National Center for Environmental Health

Introduction


The Environmental Health Tracking Branch (EHTB) Evaluation Guide is an evaluation technical assistance tool developed by the Centers for Disease Control and Prevention (CDC) to assist in the evaluation of grantee environmental health tracking programs. 1


Program evaluation is a tool that can be used to document activities, determine how well those activities are being completed, and show how those activities contribute to the program’s mission. This guide is intended to offer program evaluation guidance and aid skill building on performance and outcome measurement. It has been developed with the assumption that grantees have a diverse range of experiences with program evaluation, as well as varied resources allocated to program evaluation. This guide will clarify the approaches and methods for evaluation, in addition to providing an example specific to the scope and purpose of grantee environmental health tracking programs.


Preparing for Successful Evaluation


The CDC Evaluation Framework provides basic guidance to develop evaluation strategies appropriate to public health programs. This guide utilizes the CDC Evaluation Framework as an organizing principle; the framework is introduced below in Figure 1.1 and Tables 1.1 and 1.2.


Figure 1.1 CDC Framework for Program Evaluation


Table 1.1 Six Steps in the CDC Framework for Evaluating Public Health Programs

Step

Description

Step 1

Engage Stakeholders

Evaluation stakeholders are people/organizations that are invested in the tracking program, are interested in the evaluation results, and/or have a stake in what will be done with evaluation results. Representing their needs throughout the process is critical to program evaluation.

Step 2

Describe the Program

A thorough program description clarifies the need for the program, the activities undertaken to address this need, and the program’s intended outcomes. This focuses the evaluation on a limited set of objectives of central importance. Note that in this step the program is described and not the evaluation. Various tools (e.g., logic and impact models) will be introduced to help depict the program and the anticipated outcomes. Such models help stakeholders reach a shared understanding of the program.

Step 3

Focus the Evaluation Design

Focusing the evaluation involves determining the most important evaluation objectives, and the most appropriate design for an evaluation, given time and resource constraints. An entire program does not need to be evaluated all at once. Rather, the “right” focus for an evaluation will depend on what questions are being asked, who is asking them, and what will be done with the resulting information.

Step 4

Gather Credible Evidence

Once the program has been described and the evaluation focused, the next task is to gather data to answer the evaluation questions. Evidence gathering should include consideration of each of the following: indicators, sources of evidence/methods of data collection, quality, quantity, and logistics.

Step 5

Justify Conclusions

When agencies, communities, and other stakeholders agree that evaluation findings are justified, they will be more inclined to take action on the evaluation results. As stated in the CDC Framework, “Conclusions become justified when analyzed and synthesized evidence is interpreted through the ‘prism’ of values that stakeholders bring, and then judged accordingly.” This step includes analyzing the data you have collected, making observations and/or recommendations about the program based on the analysis, and justifying the evaluation findings by comparing the evidence against stakeholder values that have been identified in advance.

Step 6

Ensure Use and Share Lessons Learned

The purpose(s) identified early in the evaluation process should guide the use of evaluation results (e.g., demonstrating effectiveness of the program, modifying program planning, accountability). To help ensure that evaluation results are used by key stakeholders, it is important to consider the timing, format, and key audiences for sharing information about the evaluation process and findings.


Table 1.2 Standards Included in the CDC Framework for Evaluating Public Health Programs 2

Standard

Description

Utility

Who needs the evaluation results? For what purpose do they need the evaluation results and/or why are they interested in the evaluation? Will the evaluation provide relevant information in a timely manner for them?

Feasibility

Are the planned evaluation activities realistic given the time, resources, and expertise at hand? How can planned evaluation activities be implemented with minimal program disruption?

Propriety

Does the evaluation protect the rights of individuals and protect the welfare of those involved? Does it engage those most directly affected by the program and changes in the program, such as participants or the surrounding community?

Accuracy

Will the evaluation produce findings that are valid and reliable, given the needs of those who will use the results?



Developing a Strategic Evaluation Plan


A strategic evaluation plan can be considered a program’s evaluation portfolio. It lays out the rationale, general content, scope, and sequence of evaluations you plan to conduct during the cooperative agreement funding cycle. The strategic evaluation plan serves as a roadmap for evaluation activities. The plan should be a fluid document that will change based on budget, resources, work plan objectives, accomplishments, and expectations. It should be developed to include both process evaluation and outcome evaluation. Process evaluation focuses on quality and implementation of activities, while outcome evaluation assesses the achievement of expected outcomes of program activities. Outcome evaluation builds on process evaluation. Over time, the evaluations the program conducts will demonstrate how well the program is working and what changes are necessary to ensure the program works better. To provide a sense of the program’s overall workings, the strategic plan should address all major program components.


A strategic evaluation plan differs from an individual evaluation plan in that it is a proposal for how multiple evaluations will be conducted and coordinated over the funding period. The strategic evaluation planning process requires developing high-level details about what each individual evaluation may look like, including aspects such as objectives, potential evaluation questions, and data collection methods, as well as a way to estimate the scope, timing, and resources involved in each evaluation.

In contrast, an individual evaluation plan focuses on just one of the multiple evaluations proposed as part of the strategic evaluation plan, and provides specifics regarding how the evaluation will be implemented.

The benefit of a strategic evaluation plan is that, by systematically planning the overall evaluation framework, the resources invested in the process can be used to provide information that supports program planning and improvement, as well as preliminary content for each of the individual evaluation plans to be developed. The strategic evaluation plan development process also allows programs to anticipate the data and resources necessary for the overall evaluation process, allowing evaluation capacity to be built over a longer time period. See Figure 2.1 for a flowchart on developing a strategic evaluation plan.


Figure 2.1 Strategic Evaluation Planning Process and Product

Forming an Evaluation Team


The strategic evaluation planning process should begin by forming a small evaluation planning team (4-6 people) who will develop the strategic evaluation plan document. This team should be involved with the process on an ongoing basis by reviewing and updating the plan, and should monitor progress in plan implementation. Team members should include stakeholders knowledgeable about the program, its goals and objectives, and program improvement and evaluation.


Developing a Program Description


In developing a description of the program for the strategic evaluation plan, preliminary activities should be undertaken such as reviewing program documents, sharing findings with the evaluation planning team, and working with the team to finalize a description of the key program activities. Documents that can be useful in summarizing planned activities and potential program outcomes include performance measurement reports, funding applications, and associated work plans. A logic model, which graphically displays how the program is expected to work, can be useful in developing the program description.


Determining Evaluation Objectives


Following the program description, the evaluation team should systematically prioritize which programmatic issues are to be evaluated. One potential prioritization method is the nominal group technique, in which members of the evaluation group individually prioritize program elements and then vote. This allows decisions to be reached quickly while allowing the evaluation group to participate fully. As program elements are being prioritized, determine SMART objectives to identify the results to be achieved. These objectives should describe what the program expects to accomplish regarding the specific element being considered.


SMART objectives are:

Specific: concrete, identifying what should change for whom

Measurable: able to quantify or otherwise measure activities or results

Attainable: reasonable based on available resources

Relevant: related to the overall program goals

Time-bound: achievable within a specific time period


Creating SMART objectives allows the identification of evaluation questions that are targeted and relevant to the program's evaluation needs.


Identifying Evaluation Questions


Determining evaluation questions is the next step after prioritizing program elements to evaluate. This stage requires determining a broad evaluation strategy and assessing the resources required. For each priority evaluation objective, evaluation questions should be created, a potential evaluation design should be identified, and the required resources and feasibility of the evaluation should be assessed. When determining evaluation questions, it is important to consider how the question helps the program, how important the question is to program staff or stakeholders, and whether the question leads to program improvement. Evaluation questions should span the entire continuum of the logic model and include both process and outcome questions. Process questions determine whether the activity was implemented as intended, how the implementation differed from the original plan, barriers to implementation, how implementation could be improved, and whether adequate resources were in place for the activity. Outcome questions determine to what extent the activity led to successful achievement of program goals, the types of outcomes that have been achieved, the long-term outcomes attributable to the activity, and the cost of the activity in comparison to its benefit.



Defining Evaluation Design

After determining evaluation questions, the next step in developing an evaluation strategy is to sketch out possible methods to answer the potential evaluation questions identified. This includes determining evaluation designs, data collection methods, and timelines.

Evaluation designs range from experimental designs, such as randomized controlled trials, to quasi-experimental designs, such as a pre-test/post-test with a comparison group, to non-experimental designs, such as descriptive designs, case studies, and post-test-only designs. As part of the evaluation design, data collection methods should be considered, such as the use of existing data collected by the tracking program or other agencies, data from available documents, and new data collection through surveys, interviews, or focus groups. The evaluation team should focus on what data will be considered credible evidence. For example, depending on the program element being evaluated, quantitative data such as performance measurement metrics may be more credible than qualitative data such as stakeholder surveys. Mixed-method designs that combine quantitative and qualitative data collection methods should also be considered.

Timelines for data collection should be considered as well. The optimal time for data collection will depend on factors such as when the information is needed and when programmatic decisions that the evaluation could inform are pending. The resource requirements and feasibility of data collection should also be assessed. Table 2.1 can help the evaluation planning team determine designs, timelines, and resources.

Completing one such table per major program component (Science Development, IT, and Communications) may be beneficial when developing individual evaluation plans.

Table 2.1 Example Evaluation Design and Data Collection Summary (example partially completed)

Objective: By August 2016, ensure all required NCDMs are submitted, and develop two more non-NCDMs than in the previous year to include on the tracking website.

Science

Question: How many NCDMs have been submitted to CDC? How many non-NCDMs are displayed on the program tracking website?

Possible Evaluation Design(s): Descriptive

Potential Data Collection Methods: Document review; website review

Possible Data Sources: Surveillance work plans; program epidemiologists; public health tracking website

Data Collection Begins: Ongoing

Final Results Due: Year 3

Resources Required: Low

IT

Question: What measures have been taken to identify gaps in data call completion?

Possible Evaluation Design(s): Descriptive

Potential Data Collection Methods: Document review (meeting logs, agendas); email records

Possible Data Sources: Staff calendars; meeting notes; CDC data records

Data Collection Begins: Ongoing

Final Results Due: Year 3

Resources Required: Modest

Communications

(Left blank in this partially completed example.)


Developing a Cross-Evaluation Strategy

By this point, the evaluation team should have identified and prioritized program elements to evaluate and, for each, identified potential evaluation questions, designs, data collection methods, resource needs, and the feasibility of the strategy. Developing a cross-evaluation strategy involves bringing this information together into a cohesive package. Elements of this process include ensuring that a mix of process and outcome evaluation questions has been developed, that priority evaluation candidates have been compared to identify data collection efficiencies, and that a timeline has been developed for carrying out the proposed evaluations and related data collection activities within the overall grant cycle. Table 2.2 summarizes the issues to consider when reviewing the proposed evaluations as a whole for cohesion.

Table 2.2 Issues to Consider When Looking Across Proposed Evaluation Strategies

Evaluation Design

Definition: What evaluation designs are proposed?

Issues to consider:
Will a proposed evaluation design be suitable for answering multiple evaluation questions?

Data Collection: Target Audience

Definition: From whom is information being collected?

Issues to consider:
If several data collection strategies have the same target audience, can you collect information for more than one purpose using a single data collection tool?
Are data collection activities concentrated too heavily on one target audience?
Can burden be shared more equitably?

Data Collection: Timeline

Definition: When is information being collected?

Issues to consider:
How can evaluation data collection needs be integrated into the program timeline? For example, if baseline data need to be collected, program activities may need to be delayed.
If information on different evaluation activities needs to be collected at the same time, do you have the resources to conduct multiple evaluation activities simultaneously?

Data Collection: Source

Definition: From where is information being collected?

Issues to consider:
Can the same data source be used for multiple evaluation activities?
Can a single source be modified or enhanced to support your strategies for the future?

Who

Definition: Who will conduct the evaluation activity?

Issues to consider:
Do you have the personnel and resources to conduct the evaluation strategies you prioritized?
Do they have the necessary skills and expertise, or how could they obtain these skills?
Can you leverage additional evaluation assistance from partners?

Analysis

Definition: How will the information from the evaluation be analyzed?

Issues to consider:
Who will do the analysis?
Do they have the necessary skills and expertise, or how could they obtain these skills?
Can you leverage additional analytic capability from partners?

Use

Definition: How will the information from the evaluation likely be used?

Issues to consider:
Will the information be provided in time to inform decisions?
Who will use the information provided?
Are there capacity-building activities that need to be conducted with intended users to increase the likelihood the results will be utilized?





Strategic Evaluation and Communication

The strategic evaluation plan provides the opportunity to design and conduct evaluations that will be most beneficial to the tracking program. The results of the evaluations should be used to support program improvements; communication is essential in this regard. Considering how progress on strategic evaluation plan activities will be communicated to key audiences is important for ensuring that evaluation findings are used over the grant cycle. Developing a communication plan can be useful for disseminating evaluation findings effectively. This plan can be linked to the strategic evaluation plan activities and can allow audiences to learn about the program's progress in conducting evaluations, as well as how findings are used once evaluations are complete.


Communication Strategy


Audiences for program evaluations include the CDC Environmental Public Health Tracking Program, the local evaluation planning team, other state and local tracking programs, comparable programs within state health departments, and state health department leadership. An overarching communications strategy should be developed that focuses on high-level information about the strategic evaluation plan itself, as well as on findings from the individual evaluations conducted.


A draft of the strategic evaluation plan should be shared with the CDC Project Officer and subject matter experts (SMEs) prior to wider distribution. Evaluation is a dynamic process; the strategic evaluation plan is therefore a living document subject to new information and unanticipated events. It is crucial to review the strategic evaluation plan regularly and revise it to reflect prevailing conditions. The same should be done for the individual evaluation plans.



Individual Evaluation Plan

After developing a strategic evaluation plan, the next step is to develop details for the individual evaluations to be conducted as part of that plan. The individual evaluation plan should be a detailed plan that clarifies the objective of the specific evaluation to be performed and the evaluation questions that address that objective, forming a comprehensive map for those working on the evaluation and ensuring a shared understanding of the evaluation's purpose, questions, design, data analysis, and dissemination of findings. Plans can be developed according to the timeline for the evaluation sequence in the strategic evaluation plan and can build on existing individual evaluation plans. Table 3.1 provides an example of a consolidated evaluation plan format for an individual evaluation plan.


Table 3.1: Consolidated Evaluation Format


Objective:


Evaluation Questions: What the program wants to know

Indicators: What type of data the program will need

Data Sources: Where the program will get the data

Data Collection: How the program will get the data

Timeframe: When the data will be collected

Data Analysis: What the program will do with the data

Communication Plan: When and how the program will share results

Staff Responsibilities: Program staff who will ensure this is completed


Developing an Individual Evaluation Plan

An individual evaluation plan is developed similarly to the strategic evaluation plan. A small evaluation planning team, which should include stakeholders with an interest in the specific evaluation being planned, should work to create each individual evaluation plan. Multiple categories of stakeholders should be considered for inclusion in the individual evaluation planning process: primary stakeholders, who are involved in program operations and would use the evaluation findings to make changes to the program; secondary stakeholders, who use the program's products and may be affected by changes based on evaluation findings; and tertiary stakeholders, such as other tracking programs, who may not be directly affected by program changes but have a general interest in the results.


Program Description


After engaging stakeholders, a clear description should be developed of what will be evaluated in the individual evaluation plan. This includes developing a logic model for the particular evaluation, just as a logic model was developed for the overall strategic evaluation plan.


It is also useful to develop a textual synopsis of the logic model, which concisely explains how the element being evaluated is expected to achieve its intended outcomes, along with important features of the evaluation.

After describing the individual evaluation, the team should focus the evaluation design, refining the general ideas considered in the strategic evaluation plan. When focusing the evaluation design, it is important to determine who is most likely to use the information and what type of information, such as quantitative measures, will be most useful to them.


Data Sources


Gathering credible evidence is the next step after the evaluation design has been focused. The evaluation team should work with stakeholders to identify data collection methods and sources that will answer the evaluation questions. Indicators to judge success should also be determined, such as the number of non-NCDMs present on the tracking website or the percentage of data calls successfully met.


Data Analysis


While in the individual evaluation planning phase, the evaluation team should also plan for data analysis and interpretation. This involves determining the methods that will be used to analyze the data collected and who will interpret the results. A component of planning for data analysis involves establishing standards of performance, or performance benchmarks, against which to compare the identified indicators. Benchmarks could include the number of non-NCDMs that should appear on the website, or the percentage of data call deadlines that should be met, for the program element to be considered successful.

In addition to planning for data analysis, the team should also plan for developing conclusions at the completion of the individual evaluation. Issues to consider include how well the evaluated element performs compared to the standards established in the individual evaluation plan and what changes should be made as a result of the evaluation findings.
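
The benchmark comparison itself can be kept very simple. The sketch below is illustrative only; the indicator names and benchmark values are hypothetical and are not prescribed by this guide. It shows one way a program might tabulate, in Python, whether each identified indicator met the performance standard set by the planning team.

```python
# Hypothetical sketch: comparing evaluation indicators against performance benchmarks.
# Indicator names and benchmark values are illustrative, not prescribed by this guide.

indicators = {
    "non_ncdms_on_website": 5,            # count observed during the evaluation period
    "data_call_deadlines_met_pct": 92.0,  # percent of data call deadlines met
}

benchmarks = {
    "non_ncdms_on_website": 4,            # standard of performance set by the planning team
    "data_call_deadlines_met_pct": 90.0,
}

def compare_to_benchmarks(observed, standards):
    """Report whether each indicator meets or exceeds its benchmark."""
    results = {}
    for name, standard in standards.items():
        value = observed.get(name)
        results[name] = {
            "observed": value,
            "benchmark": standard,
            "met": value is not None and value >= standard,
        }
    return results

for name, result in compare_to_benchmarks(indicators, benchmarks).items():
    status = "met" if result["met"] else "not met"
    print(f"{name}: {result['observed']} vs. benchmark {result['benchmark']} ({status})")
```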


Dissemination of Results


As with the strategic evaluation plan, determining how the results of the individual evaluation will be disseminated is important. Issues to consider for this component include what information should be communicated, who should be included in the communication, and the method by which communication will take place.



1 Parts of the Environmental Health Tracking Evaluation Guide have been adapted from the CDC State Asthma Program Evaluation Guide, the CDC Rape Prevention and Education Evaluation Guide, and the National Institute of Environmental Health Sciences Evaluation Metrics Manual.


2 These standards were originally developed by the Joint Committee on Standards for Educational Evaluation.


