
OMB Control No. 0551-New



Monitoring and Evaluation Policy

Food Assistance Division, Office of Capacity Building and Development



3/31/2016





Background


USDA is committed to ensuring a strong culture of evaluation and learning from experience. The policy described in this document sets forth an ambitious agenda for monitoring and evaluation in the Foreign Agricultural Service (FAS) and demonstrates the Agency’s resolve to achieve results that make positive changes for people living in poverty. The Agency places a high level of importance on managing for results, and to this end, the Office of Capacity Building and Development (OCBD) adheres to a Results Oriented Management (ROM) approach that supports the Agency’s capacity to manage public resources thoughtfully, to ensure accountability and transparency, and to help ensure that programming is driven by evidence rather than anecdote.


The purpose of this monitoring and evaluation policy is to institutionalize results-oriented management in the programs administered by OCBD, in particular the McGovern-Dole International Food for Education and Child Nutrition (McGovern-Dole), Food for Progress, and Local and Regional Procurement programs managed by the Food Assistance Division (FAD). This policy will guide the integration and implementation of monitoring and evaluation systems and processes in FAD programs and will inform Agency staff and stakeholders of the Agency’s expectations regarding program monitoring and evaluation. The policy outlines the purpose of monitoring and evaluation; the range of methods used to monitor and evaluate programs; the roles and responsibilities of Agency staff, program participants, and other key stakeholders; and the ways in which monitoring and evaluation information will be used and disseminated to inform decisions regarding program management and implementation.


This policy also seeks to address the findings of external reviews of USDA food assistance programs. In 2007, and again in 2011, GAO assessed the effectiveness and efficiency of U.S. Government food assistance programs.1 These reports noted the need for improvements in the monitoring and evaluation of USDA’s food assistance programs. In response to these reports and to earlier reports by GAO and the USDA Office of the Inspector General (OIG), FAS established a Monitoring and Evaluation Unit within OCBD in FY 2007. The Monitoring and Evaluation Unit (M&ES) is responsible for managing and providing technical assistance in the performance management and evaluation of OCBD programs, including food assistance programs.


OCBD’s monitoring and evaluation policy, as described in this document, is based on various laws and policies that guide performance management and the review of food assistance programs. The Government Performance and Results Act (GPRA) of 1993 and the subsequent GPRA Modernization Act, enacted in January 2011, require agencies to develop and regularly report on Agency goals and objectives, including outcome-oriented goals, performance indicators, targets, and their links to U.S. Government priorities.2


Furthermore, USDA adheres to the Paris Declaration Principles on Aid Effectiveness,3 as well as the Accra Agenda for Action, which reconfirmed and amplified the principles of ownership, mutual accountability, and managing for results. The Agency’s evaluation policy also draws significantly on guidance from the American Evaluation Association4 on a more effective government and from the Organization for Economic Cooperation and Development’s (OECD) Development Assistance Committee (DAC). The OECD/DAC Evaluation Network aims to increase the effectiveness of international development programs by supporting robust, informed, and independent evaluation through improving evaluation policy, sharing good practice, and supporting the development of operational and policy lessons.5


The monitoring and evaluation policy is also guided by food assistance program legislation and regulations. The regulations governing Food for Progress, McGovern-Dole, and Local and Regional Procurement (see 7 CFR 1499.13, 7 CFR 1599.13, and 7 CFR 1590.13) require, unless otherwise specified in an agreement, independent, third-party midterm and final evaluations.6 The monitoring and evaluation requirements established for these programs are further defined in this policy.


Beginning in 2009, the Food Assistance Division (FAD) of USDA/FAS undertook a strategic course of action to develop and institute a comprehensive Results Oriented Management (ROM) System to support the achievement of Division and Agency-wide program goals. ROM focuses on higher-level program results, such as outcomes and impact, while also monitoring program activities, inputs, and outputs. It promotes management decision-making at a more strategic level than can be achieved by tracking activities, collecting anecdotes, and documenting individual success stories. ROM can help improve internal and external program coordination and ensure that funds are allocated to programs that achieve results and have the greatest impact. To this end, FAD’s ROM System is integrated into key management structures and processes within the Division, including strategic planning, performance and accountability reporting, policy formulation, project management, financial and budget management, and human resource management.


This policy is effective as of the date above and applies to all food assistance programs. Projects funded in FY 2010 and FY 2011 will use the policy as a guiding principle in fulfilling the established requirements of their current agreements, while projects funded in FY 2012 and beyond must comply with all requirements specified in the policy.


Definitions and Purpose - Monitoring and Evaluation


All food assistance projects will support this monitoring and evaluation policy and the relevant ROM program frameworks by developing and implementing a range of monitoring processes and structures, including: a results framework outlining the project’s causal logic and the critical assumptions underpinning the project’s strategy; a performance monitoring plan that includes performance indicators and a data collection plan; and an evaluation plan. This approach is complementary to FAD’s operational guidelines for project design and implementation, including, inter alia, the use of project audits, work plans, and financial plans.


Monitoring involves connecting relevant information to strategic decisions. Program management and key stakeholders use monitoring to assess performance and the use of program resources. It assists in the oversight and continuous review of program implementation and in assessing progress toward program objectives and results. Monitoring should be based on the systematic collection of data on established performance indicators, including process, output, and outcome indicators. Performance monitoring is necessary for project management, but it is only one part of a ROM system.


Monitoring is complementary to evaluation and both processes support FAD’s ROM system. As such, monitoring and evaluation plans should be developed in coordination with one another to ensure the most efficient and effective use of resources and information.


Evaluation is the systematic and objective assessment of both on-going and completed projects with regard to a project’s design, implementation and results. Evaluations are used to deepen the Agency’s understanding about how and why things work or do not work, to provide evidence of success, and to strengthen future programming and strategic planning. Specifically, evaluations aim to assess the relevance, effectiveness, efficiency, sustainability and impact of a project or program.



Accountability: Obligation to demonstrate that work has been conducted in compliance with agreed rules and standards or to report fairly and accurately on performance results vis-à-vis mandated roles and/or plans. This may require a careful, even legally defensible, demonstration that the work is consistent with the contract terms.

--OECD/DAC

Evaluation is viewed as a tool for learning and accountability. Accountability is understood as involving two duties: the responsibility to undertake certain actions and the responsibility to provide an account of those actions. The four primary forms of accountability are donor accountability, which emphasizes financial accounting and the attainment of results; beneficiary accountability, which involves project implementation, practice, policies, and outcomes; internal accountability, which pertains to organizational mission, values, members, supporters, and staff; and horizontal accountability, which comprises peer agencies and institutions of practice.


As a steward of public resources, USDA is accountable to the American people and to program beneficiaries and stakeholders. Of primary concern is that resources reach the target beneficiaries and that they actually produce the intended changes: reduced food insecurity, improved literacy, increased agricultural productivity, and expanded trade. When rigorous and carefully designed evaluations are transparent and made publicly available, they help ensure that public resources are used as effectively and efficiently as possible.


To be accountable also implies the need to learn from programmatic successes and failures. Organizational learning is a key focus of evaluations in FAS, with the primary audience including USDA, program participants, other key stakeholders, and the national and local governments where programs are implemented. Important in the learning process is the translation of evaluation findings and recommendations into changes in the design and implementation of programs and in program planning and management. USDA will also share lessons learned with the broader group of stakeholders through the publication of evaluations.


USDA strives to maintain an integrated system for reporting and following up on evaluation findings and recommendations. The system will seek to enhance learning within USDA and among and across regions, programs, and sectors, and to ensure that, where applicable, lessons learned about programs in Latin America, for example, are shared with Agency staff and with organizations managing and implementing programs in Africa and Asia.


Guiding Principles for Monitoring and Evaluation


This monitoring and evaluation policy adheres to a number of guiding principles. Taken together, the principles are mutually reinforcing and complementary, ensuring that the monitoring and evaluation policy and its supporting processes and systems meet the desired purposes of learning and accountability. The monitoring and evaluation processes and systems underpinning this policy will become an integral component of project design and management.



Evaluations will be consistent with the following criteria:

  • Independence

  • Utility

  • Transparency

  • Relevance

  • Partnerships

  • Credibility

  • Rigor

  • Timeliness

Monitoring will be conducted throughout the duration of each project. The monitoring data and information will inform the performance monitoring reports and support management decisions and ongoing organizational learning. Project management, including program participants, USDA program staff, and other key stakeholders, will be responsible for the continuous use of monitoring and evaluation information in the implementation of projects. Such information will assist project management in identifying opportunities and challenges and in determining whether mid-course project alterations need to be made, what changes should be made, and how such changes should be implemented.


Regular monitoring and evaluation information will also be used by FAS to meet its regular reporting and accountability requirements. This includes the Department’s annual Performance and Accountability Report7, annual budget requests, interagency reports, and Congressional, OIG and GAO reviews as well as public requests.


To the extent possible and feasible, evaluations will be timed to inform project funding decisions. This will help ensure that management decisions regarding future project funding are evidence-based and will strengthen the link between results and resource allocation. An expanded body of knowledge about effective interventions and the conditions necessary for project success and sustainability will also improve future project design and strategy.


Reflective of USDA’s commitment to ownership and mutual accountability, monitoring and evaluation efforts will, to the extent possible, seek to build and enhance partnerships, build the capacity of organizations to conduct rigorous monitoring and evaluation, and increase the knowledge base on lessons learned and good practices in international efforts to address food insecurity.


Evaluation efforts managed by FAS that focus on strategic areas of interest, special studies, and impact evaluations will be undertaken in partnership with other USG agencies, other donor governments, and foreign governments to the extent possible and feasible. Such an approach supports the Paris Declaration principles on harmonization and partnership and U.S. Government efforts to ensure a whole-of-government approach.


USDA will support the use of multiple evaluation designs depending on the purpose of the evaluation. As a general principle, evaluations should be designed using the most rigorous evaluation methodology appropriate and feasible, with due consideration of available resources. The selection of evaluation methods should depend on the purpose of the evaluation, the questions being asked, the level of rigor and evidence required, and the project design.


Counterfactual: The situation or condition which hypothetically may prevail for individuals, organizations or groups were there no development intervention.

--OECD/DAC


Impact evaluations, using quasi-experimental and experimental designs, including randomized evaluations, will be supported by USDA as appropriate. Impact evaluations aim to assess changes in program participants’ behaviors or wellbeing and seek to establish a cause and effect relationship. Direct and indirect impacts will be assessed as well as intended and unintended impacts.


Impact evaluations implemented by USDA and program participants must include a well-defined counterfactual or control group and seek to assess whether, for example, a school feeding program led to the observed changes in learning and school performance or whether those changes resulted from other changes in the implementing environment. Impact evaluations should aim to attribute the observed outcomes to the program interventions.
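
To make this counterfactual logic concrete, the sketch below shows a minimal two-period difference-in-differences calculation in Python. It is illustrative only: the scores and group sizes are hypothetical, and difference-in-differences is just one of several quasi-experimental designs a project might propose.

```python
# A minimal, illustrative difference-in-differences (DiD) sketch.
# All data are hypothetical; this is not USDA's prescribed method.

def difference_in_differences(treat_before, treat_after, control_before, control_after):
    """Estimate the program effect as the change in the treatment group
    minus the change in the comparison (counterfactual) group."""
    mean = lambda xs: sum(xs) / len(xs)
    treatment_change = mean(treat_after) - mean(treat_before)
    control_change = mean(control_after) - mean(control_before)
    return treatment_change - control_change

# Hypothetical literacy test scores (0-100) from baseline and final surveys.
treat_before   = [42, 38, 45, 40, 44]   # schools receiving the feeding program
treat_after    = [55, 50, 58, 52, 57]
control_before = [41, 39, 43, 40, 42]   # comparison schools (the counterfactual)
control_after  = [46, 44, 48, 45, 47]

effect = difference_in_differences(treat_before, treat_after,
                                   control_before, control_after)
print(f"Estimated program effect: {effect:+.1f} points")  # +7.6
# The comparison group's change (+5.0) approximates what would have happened
# without the program; only the excess change is attributed to the intervention.
```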



As specified in regulations (see 7 CFR 1499.13, 7 CFR 1599.13, and 7 CFR 1590.13), evaluations will be independent and conducted by a third party. Specifically, the regulations require that the third party conducting the evaluation:

  • Is financially and legally separate from the participant's organization;

  • Has staff with demonstrated methodological and analytical capability, cultural and language skills, and specialized experience in conducting evaluations of international development programs involving agriculture, trade, education, and nutrition;

  • Uses acceptable analytical frameworks such as comparison with non-project areas, surveys, involvement of stakeholders in the evaluation, and statistical analyses;

  • Uses local consultants, as appropriate, to conduct portions of the evaluation; and,

  • Provides a detailed outline of the evaluation, major tasks, and specific schedules prior to initiating the evaluation.


Independence of the evaluation function from program design and management is a core principle of USDA evaluation. Independence helps to ensure both credible and objective evaluations. USDA supported evaluations should be conducted by people who are not involved in the design and implementation of the project and the evaluation process must be free from political influence and organizational pressure.


USDA supports projects that incorporate and support rigorous and robust monitoring and evaluation systems from the design or proposal stage, throughout the project duration, and, to the extent possible, post-project implementation.


Project Design/Proposal Development


Results Frameworks

At the design or proposal development stage (pre-award), organizations will be responsible for clearly identifying and articulating how the proposed project will contribute to the USDA food assistance program results frameworks. USDA Food Assistance Results Frameworks can be found on the Food Assistance Division’s website.8 Proposals should clearly identify the project strategy and the result(s) the project expects to achieve. Proposals, therefore, must include a project-specific results framework that a) identifies the project’s logic and expected results at various levels and b) clearly links to the USDA program results frameworks.


The proposed project strategy and expected results should be clearly grounded in the country context and in knowledge of existing relevant national and local programs. For example, a proposal submitted in support of USDA’s McGovern-Dole program, focused on improving the literacy of school-age children, may focus on the intermediate results for improving the quality of literacy instruction and improving attentiveness, and exclude project activities focused on improving student attendance, if the proposal can clearly justify that school attendance is not a hindering factor in improving literacy. Countries with high rates of school enrollment and attendance and good access to schools, for example, may not warrant project activities focused on this intermediate result.


The project-level results framework will be used to guide project monitoring and evaluation.


Performance Monitoring Plans

In addition to submitting a project-level results framework, the proposal must include a draft plan for monitoring project performance, unless otherwise specified in the program solicitation. The performance monitoring plan (PMP) should identify indicators for monitoring progress in achieving results and present a strategy for collecting performance data.9 The plan must include the FAD standard indicators and should include custom (project-specific) indicators where applicable. FAD standard indicators are identified in the Policy and Operational Guidance Manual.10 Standard indicators are used by USDA to measure progress in achieving USDA’s program results; they allow USDA to report progress across all of its projects by results area (i.e., literacy, good health and dietary practices, agricultural productivity, and trade) and by country-specific achievements. Projects are required to report on both Feed the Future and Program standard indicators where relevant to the project’s strategy.


In addition, proposals may include custom indicators that the proposing organization deems key to monitoring program performance and accountability. As a good practice, though not required, these custom (project-specific) indicators should be developed through a participatory approach involving key stakeholders and based on broad stakeholder input. The proposing organization may wish to hold a stakeholders meeting to develop the project’s proposed results framework, performance monitoring plan, and performance indicators. A participatory approach helps ensure that all stakeholders’ requirements and needs are met; that the plan reflects comprehensive knowledge of the implementing environment, country needs, and existing data collection tools and activities; that the results framework and project strategy are institutionalized and owned; and that roles and responsibilities are clearly articulated.


In the development of standard and custom indicators, USDA believes indicators should meet the following criteria:


Direct – the indicator should, as closely as possible, measure exactly the relevant result.

Objective – the indicator should be precise and unambiguous about what is being measured and how. There should be no doubt about how to measure or interpret the indicator.

Adequate – the indicator(s) should sufficiently capture all of the elements of a result.

Practical – the data needed to inform the indicator can be obtained in a timely and efficient manner, and the data are of high quality.


The full set of indicators selected to monitor project performance should be kept to the minimum necessary to inform project management and oversight. They should also be realistic in terms of project resources allocated to performance management including data collection, analysis and reporting.
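
To illustrate how these elements come together, the sketch below models a single PMP row as a Python data structure, with fields mirroring the sample plan in Annex A. The indicator shown is hypothetical and is not drawn from the FAD standard indicator list.

```python
# A sketch of one PMP row as a data structure. Field names mirror the
# Annex A columns; the example indicator is hypothetical.
from dataclasses import dataclass

@dataclass
class PMPIndicator:
    name: str                # performance indicator
    definition: str          # indicator definition and unit of measurement
    data_source: str
    method: str              # method/approach of data collection or calculation
    collection_when: str     # data collection: when
    collection_who: str      # data collection: who
    reporting_why: str       # analysis, use & reporting: why
    reporting_who: str       # analysis, use & reporting: who

literacy = PMPIndicator(
    name="Percent of students reading at grade level",
    definition="Students scoring at or above the grade-level benchmark; unit: %",
    data_source="Early-grade reading assessment in target schools",
    method="Sample-based assessment administered at the end of the school year",
    collection_when="Annually",
    collection_who="Trained local enumerators",
    reporting_why="Semi-annual performance reports; periodic management reviews",
    reporting_who="Project M&E officer with the Project Director",
)
print(literacy.name, "->", literacy.collection_when)
```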


Evaluation Plans

Finally, proposals should include a preliminary evaluation plan with a description of required evaluation activities, including the proposed design, methodology, timeframe, and management of evaluation activities. Proposals should include a detailed description of the evaluation management function and of the budget allocation for monitoring and evaluation.


USDA recognizes the range of project sizes, scopes, and durations across the Division’s programs. As described above, USDA will support the use of multiple evaluation designs depending on the project characteristics and the purpose of the evaluation. In support of USDA’s general principles for evaluation, evaluations should be designed using the most rigorous methodology appropriate and feasible, taking into account available resources, project strategy, current knowledge and evaluation practices, and the implementing environment. Proposals should aim to include strong evaluation designs, including impact evaluations that seek to advance the knowledge base and lessons learned in food assistance.


Organizations submitting proposals under any of the food aid solicitations may propose to engage organizations with strong expertise in evaluation to assist in evaluation design, implementation, data collection, and analysis. Proposing organizations should also consider the appropriate costs for the management and implementation of monitoring and evaluation activities. Proposals may include monitoring and evaluation key personnel, and proposing organizations must allocate, at a minimum, three percent (3%) of the project budget to monitoring and evaluation. The minimum three percent is exclusive of the applicant’s M&E employee staff costs. For evaluation plans that include impact evaluations, USDA expects M&E costs to range between five and ten percent (5-10%) of the project budget.
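
For illustration, the arithmetic of the budget floor described above is sketched below; the $10 million project budget is hypothetical, and the three percent minimum excludes the applicant’s M&E employee staff costs.

```python
# A worked example of the M&E budget minimums described above.
# The project budget is hypothetical.

budget = 10_000_000  # total project budget in dollars

minimum_me = 0.03 * budget  # 3% floor, exclusive of M&E staff costs
impact_low, impact_high = 0.05 * budget, 0.10 * budget  # expected range with an impact evaluation

print(f"Minimum M&E allocation:    ${minimum_me:,.0f}")                         # $300,000
print(f"With an impact evaluation: ${impact_low:,.0f} to ${impact_high:,.0f}")  # $500,000 to $1,000,000
```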


Project Implementation



During project implementation the project will be responsible for:

  • Submitting a revised performance monitoring plan three (3) months after project award

  • Submitting a revised evaluation plan three (3) months after project award

  • Establishing annual indicator baselines and targets in project reports

  • Reporting performance on indicators and targets semi-annually in project reports

After project award, the project’s monitoring and evaluation plans will be finalized in coordination and cooperation with FAD program staff and M&ES within three (3) months of agreement signature. FAD program staff and M&ES will work with the program participant to finalize the performance monitoring and evaluation plans. This may include refinements to ensure that the definitions of the USDA standard indicators are clearly articulated, evaluation methodologies are clearly defined, the indicators selected are appropriate and consistent with USDA expectations, and the plans for performance measurement, evaluation, and reporting meet USDA requirements.


Projects will be responsible for establishing indicator baselines and targets against which the project will regularly measure performance. Baseline information for indicators must be measured and established prior to the start of program activities. Annual targets for select indicators may be established in project agreements. Annual targets for other indicators identified in the PMP should also be established whenever possible and appropriate. Established targets should be realistic yet ambitious.


Targets and baseline information must be submitted to USDA within six months of the project award.11 Projects are required to report progress and achievements against the targets established for each standard and custom indicator in the semi-annual performance reports. Such information will help project management, FAD staff, and key stakeholders determine whether the project is on track to achieve its intended results. Discussion of the performance indicators must include a narrative description, as outlined in the PMP, of how the project used the information for project management.
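
For illustration, the target-versus-actual comparison that feeds a semi-annual report might be computed as in the following sketch; the indicator, baseline, and figures are hypothetical.

```python
# A minimal sketch of reporting progress against an annual target.
# All figures are hypothetical.

def percent_of_target(actual: int, target: int) -> float:
    """Achievement against an annual target, expressed as a percentage."""
    return 100.0 * actual / target

baseline = 1_200  # students reading at grade level at baseline
target   = 2_000  # annual target established in the PMP
actual   = 1_750  # value measured this reporting period

print(f"Target achievement:   {percent_of_target(actual, target):.1f}%")  # 87.5%
print(f"Change from baseline: {actual - baseline:+,} students")           # +550
```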


Baseline data and information for all future evaluation activities must also be collected and reported to USDA within six months of the project award. Baseline methodologies including sampling and questionnaire design, sources of data, data limitations and challenges, and an analysis of the evaluation baseline data should be included in the report. Analyses of the intervention and control group populations should also be included in the baseline report if an impact evaluation design is utilized.


Following submission of the project reports, FAD and M&ES staff will review the reports and provide, in writing, any follow-up observations or questions for the project team. FAD may request, for example, additional information or clarifications regarding the performance indicator data submitted, or may seek to discuss challenges or opportunities that arose during the reporting period. FAD may request a conference call with, or a written response from, the project team to discuss the project reports.


The project reports and monitoring data and information will help inform the interim and final project evaluations.


Interim Evaluations

The purpose of interim evaluations may vary across projects and will depend on the evaluation design outlined in the evaluation plan. In general, however, interim evaluations should be used to assess progress in implementation; assess the relevance of the interventions; provide an early signal of the effectiveness of interventions; document lessons learned; assess sustainability efforts to date; and discuss and recommend mid-course corrections, if necessary. A variety of methodologies may be used to carry out interim evaluations, including external reviews, implementation or process evaluations, evaluability assessments, or other special studies.


All food assistance projects are required to carry out an interim evaluation. The purpose of the evaluation is to critically and objectively review and take stock of the project’s implementing experience and implementing environment; assess whether targeted beneficiaries are receiving services as expected; assess whether the project is on track to meet its stated goals and objectives; review the project-level results framework and assumptions; document initial lessons learned; and discuss any modifications or mid-course corrections that may be necessary to effectively and efficiently meet the stated goals and objectives. The interim evaluation must address the standard evaluation criteria of relevance, effectiveness, efficiency, sustainability, and impact (see the definitions provided under “Final Evaluations”).



Interim project evaluation process and timing:

  • Prepare for the interim project evaluation (approximately 4 months after implementation of key project activities begins):

    • Identify the internal project evaluation team

    • Develop the project evaluation TOR, including methodology

    • Submit the TOR to USDA for review and comment

    • Identify the external consultant

  • Conduct the assessment and collect stakeholder input

  • Submit the final interim evaluation report to USDA (within 60 days following evaluation fieldwork and no more than 15 days after evaluation report completion)

  • Discuss actions to address findings and recommendations with the USDA project manager (following evaluation fieldwork and no later than 30 days following submission of the final interim evaluation report)

  • Report on implementation of follow-up actions (ongoing, in future project reports as appropriate)


The project will be responsible for managing, conducting, and allocating sufficient funds for the interim evaluation. The interim evaluation must be conducted by an independent, third party. According to the food assistance program regulations, the independent, third party conducting the evaluation must be financially and legally separate from the organization.12 The purpose of contracting with an independent consultant is to bring an independent and unbiased perspective to the evaluation process and, where necessary, to bring specialized skills or experience to the project evaluation process.


If the organization maintains an evaluation unit, USDA requires that the evaluation be managed by that unit. If the organization does not have a dedicated evaluation unit, the evaluation should be managed by a project or organizational staff person with significant knowledge and expertise in evaluation. Ideally, the organization would maintain an evaluation unit that is separate from the staff or line management function of the project being evaluated. Such a structure helps ensure the independence and impartiality of the evaluation process and of the reported findings, conclusions, and recommendations.


When conducting the interim evaluation, the project must consider participatory approaches to involving key stakeholders including implementing partners or sub-contractors, local and national government partners, project beneficiaries and other donor partners. The project shall also invite USDA to participate in the evaluation, particularly during the review of the evaluation terms of reference, discussions related to mid-course corrections or changes in strategy, results frameworks, and critical assumptions.


The evaluation may occur precisely at the mid-point of project implementation (e.g., for a 30-month project, the midterm review may occur during month 15) or earlier, depending on the project work plan and implementation timeline. The project may determine the most strategic timing of the evaluation; however, the timing should allow sufficient time for the implementation of project activities. The project should allow at least four months of implementation of key project activities before developing the terms of reference (TOR) for the interim evaluation. The project is required to keep USDA up to date on the scheduling of the interim evaluation through the submission of project reports.


The program participant’s evaluation unit should develop the TOR for the interim evaluation, which includes the purpose and scope of the evaluation; specific issues or questions to be addressed; the prospective approach and methodology; the timing and work plan of the evaluation; ethical considerations; and evaluation management and selection of the evaluation team. The evaluation TOR must be submitted to FAD for input and comment prior to the selection of the evaluation team and implementation of the evaluation. As a general practice, the draft evaluation TOR should be submitted to USDA no later than three (3) months prior to the start of the evaluation activities. The final TOR for the evaluation must be submitted to USDA at least one (1) month prior to the start of the evaluation activities.


Unless identified in the project proposal, the independent evaluation consultant(s) should be selected through a competitive procurement process. The selection of the evaluation contractor or consultant(s) must be based on professional competency, experience in relation to the evaluation tasks, independence from the program participant, avoidance of conflict of interest, and experience and knowledge of the country in which the evaluation will be conducted. The program participant must also provide a written certification to FAD that there is no real or apparent conflict of interest on the part of any recipient staff member or third party entity designated or hired to play a substantive role in the evaluation of activities under the agreement. If a conflict of interest does exist, the program participant must provide a corrective action plan, including consideration of hiring an alternative evaluator or evaluation team.


As the final output of the evaluation, the project is required to submit a detailed report outlining the purpose of the evaluation, methodology, primary questions, findings, lessons learned to date, and recommendations. The final interim evaluation report should include proposed actions the project deems appropriate to address the findings and recommendations. The project is required to submit the interim evaluation report to USDA project managers. The final report must be submitted to USDA within 60 days following the evaluation fieldwork and within 15 days of the finalization of the interim evaluation report.


Within 30 days of receipt of the final interim evaluation report, USDA will engage collaboratively with the project staff to discuss the proposed actions to be taken to address the findings and recommendations. The participating organization must include information on the progress of implementing the agreed-upon actions in future semi-annual performance reports.


Final Evaluations


Each project is required to undergo a comprehensive, independent final evaluation. The purpose of the final evaluation is to assess whether the project has achieved the expected results as outlined in the results framework. The final evaluation should assess project design, implementation, management, lessons learned, and replicability. It should seek to provide lessons learned and recommendations for USDA, program participants, and other key stakeholders for future food assistance and capacity building programs. The evaluation will likely use mixed-methods approaches as outlined in the agreed-upon evaluation plan. In general, the final evaluation should assess:


Relevance: The extent to which the project interventions met the needs of the project beneficiaries and were aligned with the country’s agriculture and/or development investment strategy and with USDA’s and the U.S. Government’s development goals, objectives, and strategies. Relevance should also address the extent to which the project was designed taking into account the economic, cultural, and political context and existing relevant program activities.


Effectiveness: The extent to which the project has achieved its objectives. Effectiveness should also assess the extent to which the interventions contributed to the expected results or objectives.


Efficiency: The extent to which the project resources (inputs) have led to the achieved results. An assessment of efficiency should also consider whether the same results could have been achieved with fewer resources or whether alternative approaches could have been adopted to achieve the same results.


Impact: Assessment of the medium- and long-term effects, both intended and unintended, of a project intervention. Effects can be direct or indirect and positive or negative. To the extent possible, the evaluation should assess the extent to which the effects are due to the project intervention and not to other factors.


Sustainability: Assessment of the likelihood that the benefits of the project will endure over time after the completion of the project. Sustainability should also assess the extent to which the project has planned for the continuation of project activities, developed local ownership of the project, and developed sustainable partnerships.


In addition to relevance, effectiveness, efficiency, sustainability, and impact as described above, the evaluation may focus on other areas of particular interest to USDA, project staff, or key stakeholders. Input on the scope and purpose of the evaluation must therefore be solicited from key stakeholders during the planning stages of the evaluation.


The organization will be responsible for allocating sufficient funds for, managing, and contracting with an independent consultant(s) to conduct the final evaluation. As with interim evaluations, if the organization maintains an evaluation unit, USDA requires that the evaluation be managed by that unit. If the organization does not have a dedicated evaluation unit, the evaluation should be managed by a project or organizational staff person with significant knowledge and expertise in evaluation. Ideally, the organization would maintain an evaluation unit that is separate from the staff or line management function of the project being evaluated. Such a structure helps ensure the independence and impartiality of the evaluation process and of the reported findings, conclusions, and recommendations. In cases where the organization has neither a dedicated evaluation unit nor staff with significant expertise in evaluation, USDA may decide to manage the project-level evaluation through its own monitoring and evaluation unit.


The timing of the final evaluation should be established at the start of the project, included in the project work plan, and updated as appropriate. In general, the evaluation should be timed to inform new programming decisions and strategies. Final project evaluations should be planned at least six (6) months prior to the completion of a project.


USDA supports a participatory evaluation process. This helps ensure the quality, validity, utility, and mutual ownership of the evaluation findings and recommendations. Accordingly, USDA staff, as well as relevant program participant staff and key stakeholders, should be involved cooperatively in the design and implementation of the evaluation to the extent possible and appropriate, including but not limited to evaluation preparation and planning; serving as key informants and key stakeholders; reviewing findings, conclusions, and recommendations to ensure the factual accuracy of the evaluation report; and discussing and addressing evaluation recommendations.


The organization’s evaluation unit must develop a TOR for the evaluation, which includes the purpose and scope of the evaluation; specific issues or questions to be addressed; the prospective approach and methodology; the timing and work plan of the evaluation; ethical considerations; and evaluation management and selection of the evaluation team. The evaluation TOR must be submitted to USDA for input and comment prior to the selection of the evaluation team and implementation of the evaluation. As a general practice, the draft evaluation TOR should be submitted to USDA no later than three (3) months prior to the start of the evaluation activities. The final TOR for the evaluation must be submitted to USDA at least one (1) month prior to the start of the evaluation activities.


Unless identified in the project proposal, the independent evaluation consultant(s) should be selected through a competitive procurement process. The selection of the evaluation contractor or consultant(s) must be based on professional competency, experience in relation to the evaluation tasks, independence from the program participant, avoidance of conflict of interest, and experience and knowledge of the country in which the evaluation will be conducted. The program participant must also provide a written certification to FAD that there is no real or apparent conflict of interest on the part of any recipient staff member or third party entity designated or hired to play a substantive role in the evaluation of activities under the agreement.


The final evaluation report must be submitted to USDA within three months following the evaluation fieldwork and before the project end date. The final project evaluation will be made public as described below.


Other Evaluation Activities


FAD, in cooperation with M&ES, may identify additional evaluation activities of strategic interest to the Agency. This may include higher-level country-based or thematic evaluations. FAD may focus specific evaluation activities, for example, on understanding the impact of microfinance activities or agricultural extension programs on agricultural productivity.


USDA managed evaluations may also include impact evaluation activities as defined above. Such activities require collaboration with the program participants and therefore will be defined in more detail in the solicitation process.


When selecting projects to undergo impact evaluation, OCBD/FAD will consider:

  • Projects that have the potential or expectation to scale-up or receive future funding;

  • Projects that propose new interventions, where little evidence on their effectiveness exists;

  • Projects that are considered “pilot” projects; and

  • Projects or interventions receiving a significant amount of USDA funds.


USDA may also decide to conduct an evaluation after project completion. Such an evaluation may seek to assess the long-term effects and sustainability of a project.


To ensure that adequate data and information are available to support a post-project evaluation, USDA may require a project to submit any quantitative data collected by the project, in particular data collected for evaluation purposes. The data must be submitted as individual-level records, not aggregates, in a readable, usable format with accompanying data documentation, so that the data can be used by USDA or its evaluation contractor.
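
One way a project might package such a submission is sketched below: individual-level records in a CSV file plus a plain-text data dictionary. The file names and fields are hypothetical; this policy does not prescribe a specific format.

```python
# A sketch of packaging individual-level records with documentation.
# File names and fields are hypothetical.
import csv

records = [  # one row per individual respondent, not aggregated
    {"respondent_id": "HH-0001", "district": "North", "score": 54},
    {"respondent_id": "HH-0002", "district": "South", "score": 47},
]

with open("survey_records.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["respondent_id", "district", "score"])
    writer.writeheader()
    writer.writerows(records)

# Accompanying data dictionary so the records are usable by USDA
# or its evaluation contractor.
with open("data_dictionary.txt", "w") as f:
    f.write("respondent_id: anonymized household identifier (string)\n")
    f.write("district: administrative district of residence (string)\n")
    f.write("score: literacy assessment score, 0-100 (integer)\n")
```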


Data Quality Standards and Assessment


USDA and program participants use monitoring and evaluation data to inform current and future funding activities, assess the performance of programs, and report on program results to external stakeholders, including Congress, other USG partner agencies, OMB, GAO, partner countries, and the public. Therefore, USDA places a strong emphasis on ensuring a high level of data quality for its performance measures.


The following criteria should be considered when assessing data quality13:


Accuracy – Data are correct. Deviations in data can be explained or are predictable. Measurement error is kept to a minimum and within acceptable margins.

Validity – Data measure the result or outcome they are intended to measure.

Reliability – Data collected over time are comparable. Trends are meaningful and allow for measurements of progress over time. Data collection methods and analyses are consistent over time.

Timeliness – Data are collected in a timely manner to inform management decision-making and strategic planning. The expectation is that data are reported semi-annually.

Integrity – Data quality is routinely monitored. Data quality assessments are integrated into data collection processes and procedures to ensure data are not erroneously reported or intentionally altered.


All final project PMPs should include a discussion of how the project will ensure and maintain the quality of monitoring and evaluation data at all levels of data collection, from data collected by field staff and monitors to the analysis and reporting of performance data in project reports. Projects should develop tools and guidelines for project staff and implementing partners to ensure that all relevant partners understand the definitions of the performance measures, data collection methods, and reporting processes and procedures. Projects are required to develop a process for verifying and validating data to ensure that the data submitted in project reports meet the criteria above. This process should be outlined in the PMP. USDA may request to review data quality assessments or may conduct a data quality assessment in cooperation with the project during a project site visit.
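
As an illustration, the kind of automated checks a project might build into its verification process is sketched below; the record fields, valid ranges, and issue categories are hypothetical and map only loosely to the criteria above.

```python
# A sketch of simple data quality checks run before reporting.
# Fields and thresholds are hypothetical.

def validate_records(records):
    """Return a list of data quality issues found in individual-level records."""
    issues = []
    seen_ids = set()
    for r in records:
        if r["respondent_id"] in seen_ids:        # integrity: duplicate records
            issues.append(f"duplicate id {r['respondent_id']}")
        seen_ids.add(r["respondent_id"])
        if not 0 <= r["score"] <= 100:            # accuracy: value out of range
            issues.append(f"{r['respondent_id']}: score {r['score']} out of range")
        if r.get("collection_date") is None:      # timeliness: undated record
            issues.append(f"{r['respondent_id']}: missing collection date")
    return issues

sample = [
    {"respondent_id": "HH-0001", "score": 54, "collection_date": "2016-03-01"},
    {"respondent_id": "HH-0001", "score": 147, "collection_date": None},
]
for issue in validate_records(sample):
    print("DQ issue:", issue)
```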


If, after conducting a data quality assessment, the project identifies weaknesses or concerns with the accuracy or quality of the data, the project should provide this information to USDA in the semi-annual performance reports. The project may request to revise or correct previously submitted data and should provide such information in subsequent semi-annual performance reports. The project should include a narrative noting the data quality issues experienced and describing the corrective actions taken to ensure that such reporting errors do not affect future semi-annual performance reports.


Facilitating the Exchange of Information and Enhancing Learning


In support of the USDA open government initiative14 and to increase transparency and learning, all USDA final evaluation reports will be made publicly available on the FAS website. In addition, USDA will regularly publish information on project- and program-level results and accomplishments. This will ensure that the widest possible audience is reached and that other organizations can learn from FAS’s experiences. Principled exceptions may be made where classified, personal, or proprietary information is concerned.


USDA hopes that the facilitation and exchange of lessons learned and good practices will lead to improved program design and effectiveness of its current and future efforts in food assistance and capacity building. USDA also supports and encourages its partner organizations in efforts to increase transparency and learning.


Annex A. Sample Performance Monitoring Plan

The sample PMP entry below is organized by the following fields: Performance Indicator; Indicator Definition and Unit of Measurement; Data Source; Method/Approach of Data Collection or Calculation; Data Collection (When, Who); and Analysis, Use & Reporting (Why, Who).

Immediate Objective 1: Increased business sector activity in target areas

“Areas” may include regions, communities, groups, administrative units, associations, organizations, enterprises, countries, or special populations.

  1. Performance Indicator: Number and percent of existing firms that expanded businesses over the past year

Indicator Definition and Unit of Measurement: Firms included are those receiving training and/or seed funds directly under LED or QZ programs and those vendors/suppliers who are indirectly involved in LED, LMAC/RR, or EC. Business expansion is self-reported using a survey that asks (Y/N) whether expansion has occurred. Disaggregated by LED, LMAC/RR, and EC, based on direct and indirect involvement. Unit: number of assisted firms that report business expansion; among firms assisted, the number of firms that report expansion as a percent of total firms assisted.

Data Source: Project survey.

Method/Approach of Data Collection or Calculation: Data will be collected for each firm one year after seed funds are received; one year is counted from the last disbursement of funds. Data will be collected from all qualifying firms (i.e., not a sample survey). The survey will include questions about net revenues; these data may ultimately be used for this indicator in lieu of the expansion questions.

Data Collection (When): Quarterly, to capture all results from firms whose one-year post-service delivery period terminates in that quarter.

Data Collection (Who): Local specialists administer the survey, to be reviewed by regional coordinators.

Analysis, Use & Reporting (Why): Periodic management reviews (semi-annual); technical reports (semi-annual).

Analysis, Use & Reporting (Who): Regional coordinator in conjunction with the Project Director.


5 For more information see: www.oecd.org/dac/evaluationnetwork.

7 For more information see: http://www.ocfo.usda.gov/usdarpt/usdarpt.htm

8 Please see the Food Assistance Division website at: http://www.fas.usda.gov/food-aid.asp.

9 For a sample PMP and key components of a PMP, please see Annex A.

10 Please see the Food Assistance Division website at: http://www.fas.usda.gov/food-aid.asp.

11 For example, if the performance plan or PMP is finalized in December 2011, the targets and baseline information must be submitted in the May 2012 project report.

12 OECD/DAC Working Party on Aid Evaluation defines a review as “an assessment of the performance of an intervention, periodically or on an ad hoc basis” and notes that the use of the term “evaluation” tends to refer to a more comprehensive or in-depth assessment than a “review”. Reviews tend to emphasize operational or implementation aspects of a project. FAD subscribes to this definition and the focus on implementation issues and considers a project review to satisfy midterm evaluation requirements. For more information on the evaluation definitions please see: http://www.oecd.org/dataoecd/29/21/2754804.pdf.

13 Definitions have been drawn from USAID ADS Chapter 203, Assessing Learning, Revision Date 04/02/2010 and MCC Policy and Evaluation of Compacts and Threshold Programs Version DCI-2007-55.2, dated May 12, 2009.

14 For more information and the USDA Open Government Initiative please see: http://www.usda.gov/open.
