Form 1 Core Curriculum Pre and Post Tests

Generic Clearance for the Collection of Qualitative Feedback on Agency Service Delivery

Core Curriculum Pre/Post Questions

Pre- and Post-test for Knowledge Management System

OMB: 3045-0137


Core Curriculum Pre/Post Questions:

NEW COURSE QUESTIONS

Laying the Groundwork:

  1. Which of the following provides the most plausible order of the steps along the evidence continuum of a program growing from evidence-informed to evidence-based?

    a. Assess program outcomes; identify a strong program design; obtain evidence of positive program outcomes; ensure effective implementation; attain causal evidence of positive program outcomes

    b. Identify a strong program design; ensure effective implementation; assess program outcomes; obtain evidence of positive program outcomes; attain causal evidence of positive program outcomes

    c. Assess program outcomes; obtain evidence of positive program outcomes; attain causal evidence of positive program outcomes; identify a strong program design; ensure effective implementation

  2. Implementing your program may lead to changes in your logic model and/or how your program operates.

    a. True

    b. False

  3. When refining your logic model, which of the following is NOT an important step?

    a. Achieve shared agreement about what the model is and is not

    b. Include only ideal and standardized components

    c. Ensure all pieces of the model are plausibly connected

    d. Document any variation in implementation

    e. Drop parts of your model that are not plausible or outcomes that may not be realistic

  4. Which of the following is a key component of building a high-quality data collection system?

    a. Select data collection instruments

    b. Build data collection system

    c. Build data management system

    d. Obtain access to administrative data (if applicable)

    e. Ensure data quality

    f. All of the above

  5. Performance measures should be

    a. Used for program improvement

    b. Used to understand how your program is working

    c. Used to set targets that are ambitious yet achievable

    d. Used only to report to CNCS for compliance

    e. a, b, and c

  6. Performance measurement and evaluation are two separate and distinct activities for measuring my program.

    a. True

    b. False

  7. When considering research questions for evaluation, good resources include

    a. Logic model

    b. Performance measurement data

    c. Both

    d. Neither

  8. Which of the following is NOT an example activity of a learning organization?

    a. Utilizes a logic model that has never been revisited

    b. Uses data to inform decision making

    c. Makes adjustments to programs and processes based on data

    d. Is not afraid to question assumptions

    e. Sets a research agenda for the future

  9. Which of the following best summarizes the utility of the evaluability assessment checklist?

    a. Assess a program’s readiness to participate in a rigorous impact evaluation

    b. Use for assessment, planning, and communication

    c. Both

    d. Neither

  10. By the end of your first grant cycle, you can expect to achieve which of the following milestones?

    a. Refined your program and ensured effective implementation

    b. Built and refined data collection systems

    c. Utilized accurate performance measures

    d. Built staff capacity and defined responsibilities

    e. Prepared a plan for your first evaluation

    f. Become a learning organization

    g. All of the above

Research Questions:

  1. Which of the following is NOT a reason why research questions are important?

    a. Foundation of a successful evaluation

    b. Foundation of a program logic model

    c. Define the topics the evaluation will investigate

    d. Guide the evaluation planning process

    e. Provide structure to evaluation activities

  2. Put the steps for developing research questions into the correct order:

    a. Develop a logic model to clarify program design and theory of change; Define the evaluation’s purpose and scope; Determine the type of evaluation design; Draft and finalize evaluation’s research questions

    b. Develop a logic model to clarify program design and theory of change; Determine the type of evaluation design; Define the evaluation’s purpose and scope; Draft and finalize evaluation’s research questions

    c. Develop a logic model to clarify program design and theory of change; Draft and finalize evaluation’s research questions; Determine the type of evaluation design; Define the evaluation’s purpose and scope

  3. Research questions should test some aspect of the program’s theory of change as depicted in a logic model.

    a. True

    b. False

  4. Why should each evaluation your organization conducts have a primary purpose around which it can be designed and planned?

    a. To drive expectations and set boundaries for what the evaluation can and cannot deliver

    b. To identify why the evaluation is being done and how the information collected and reported by the study will actually be used and by whom

    c. To set parameters around the data you will collect and methods you will use

    d. All of the above

  5. Research questions should be structured to evaluate your whole program at once.

    a. True

    b. False

  6. Research questions and evaluation scope should align with the resources (budget, staff, time) available.

    a. True

    b. False

  7. How does a process evaluation differ from an outcome evaluation?

    a. A process evaluation’s goal is to inform changes or improvements in the program’s operations

    b. A process evaluation documents and explores how consistently the program has been implemented, whereas an outcome evaluation looks at the results of a program’s activities

    c. Both a and b

    d. Neither a nor b

  8. How does an outcome evaluation differ from a process evaluation?

    a. An outcome evaluation’s main goal is to inform changes or improvements in the program’s operations

    b. An outcome evaluation measures program beneficiaries’ changes in knowledge, attitude(s), behavior(s), and/or condition(s) that result from a program

    c. An outcome evaluation documents how consistently the program has been implemented as intended

  9. What is a comparison group?

    a. Program beneficiaries

    b. Study participants that do not receive program services

    c. AmeriCorps members

  10. Which of the following is NOT a basic principle for an outcome evaluation research question?

    a. Examines fidelity of implementation to the logic model

    b. Examines changes, effects, or impacts

    c. Specifies the outcome(s) to be measured


Budgeting for Evaluation

  1. Which of the following is NOT a recommended evaluation budget guideline?

    a. The evaluation's budget should be commensurate with stakeholder expectations and involvement

    b. The evaluation's budget should be appropriate for the research design used and key questions to be answered

    c. The evaluation's budget should be adequate for ensuring quality and rigor

    d. The evaluation's budget should be about 5-10% of your program budget

    e. The evaluation's budget should consider the level of program and organizational resources available

  2. What are some common factors that influence budget estimates? Select all that apply.

    a. Program factors

    b. Evaluation design

    c. Reporting, dissemination, and use

    d. Stakeholder interest and support

    e. a-c

  3. Which of the following is a common activity to account for evaluation staff salary/benefits and consultant time?

    a. Evaluation planning

    b. Instrument selection, development, validation

    c. Institutional Review Board approval

    d. Data collection, processing, analysis, reporting

    e. Project administration

    f. All of the above

  4. Which of the following is NOT a common cost driver for data collection?

    a. Fielding surveys

    b. Conducting interviews

    c. Accessing administrative data

    d. Transcribing data

  5. Other direct costs include expenses such as conference call lines, purchasing datasets and/or survey instruments, or incentives.

    a. True

    b. False

  6. Overhead costs are accounted for differently across evaluation firms.

    a. True

    b. False

  7. If hiring an external evaluator, no additional internal resources will be needed.

    a. True

    b. False

  8. After beginning the evaluation, you should assess the evaluation budget regularly and plan for contingencies.

    a. True

    b. False

  9. Which of the following are some common challenges resulting from underfunded evaluations?

    a. Lack of continuity

    b. Lack of appropriate expertise

    c. Under-powered study

    d. Poor communication

    e. Too many unanswered research questions

    f. All of the above

  10. Which of the following are some common opportunities to lower evaluation costs while retaining quality?

    a. Utilize existing program data and administrative data

    b. Build data collection into routine program operations

    c. Develop internal staff capacity for evaluation work

    d. Engage pro bono experts

    e. Build a long-term research agenda so that each evaluation builds upon previous work

    f. Consider replicating an evidence-based program

    g. All of the above



Managing an External Evaluator

  1. Which of the following is NOT a characteristic of an external evaluator?

    a. Is qualified to carry out aspects of the selected evaluation design

    b. Has no conflicts of interest related to the program or the evaluation

    c. Is objective and impartial to the evaluation results

    d. Previously worked as a non-profit program staff member

  2. Which of the following are useful considerations to determine whether to conduct an internal evaluation, external evaluation, or a hybrid approach?

    a. Expertise

    b. Funder requirements

    c. Cost

    d. All of the above

  3. Which of the following is NOT a consideration in deciding who among program staff will be responsible for managing the evaluation?

    a. Basic knowledge of standard evaluation terms and research practices

    b. Interaction with AmeriCorps members

    c. Strong communication skills

    d. Authority to make decisions about the evaluation

    e. Supervisory skills

    f. Capacity/time to assume additional responsibilities

  4. Which of the following is NOT relevant in defining the evaluation’s purpose, scope, and timing?

    a. The program’s logic model

    b. Funder requirements

    c. Budget and available resources

    d. Performance measurement targets

    e. Determining the qualifications and skills needed from an evaluator to complete the evaluation tasks

    f. Determining the level of effort required from an evaluator

  5. Which of the following is NOT one of the eight elements of a solicitation to hire an external evaluator?

    a. Purpose and scope of the evaluation

    b. Program background: theory of change and supporting research evidence, logic model

    c. Detailed description of the work plan (project tasks, requirements, and deliverables)

    d. Timelines (project’s period of performance, key milestones, and due dates for deliverables)

    e. Minimum eligibility requirements (skills, knowledge, and experience required of the evaluator)

    f. Resources and/or data to be made available to the evaluator

    g. Internal staff capacity to conduct the evaluation

    h. Estimate of the funds available for the work (optional)

    i. Contract vehicle

  6. Which is an appropriate method for attracting responses to your solicitation?

    a. Post the solicitation on your program/organization’s website

    b. Share the solicitation or advertise the work in other evaluation/research outlets

    c. Contact prospective evaluator(s) directly

    d. All of the above

  7. Which of the following is NOT a standard element of an evaluation contract?

    a. Scope of work

    b. Payment/invoicing

    c. Point of contact for both parties

    d. Product ownership and rights

    e. Program background: theory of change and supporting research evidence, logic model

    f. Other special terms or conditions (e.g., modifications or termination of contract)

  8. Regular, ongoing meetings to keep the evaluation moving in a timely and efficient manner are only necessary for evaluations conducted by internal evaluators.

    a. True

    b. False

  9. Which of the following is NOT a common evaluation deliverable?

    a. Performance measurement outputs

    b. Evaluation design plan

    c. Instruments

    d. Monthly or quarterly progress reports

    e. Interim/final reports

    f. Memos or research briefs

  10. The program’s input and feedback are critical to the success of the evaluation.

    a. True

    b. False



Data Collection:

  1. Which of the following is NOT a key question to consider prior to selecting a data collection method for your evaluation?

    a. What is the purpose/objective of the evaluation?

    b. What are the research questions?

    c. What are needed improvements to the program’s theory of change?

    d. What is the type of evaluation design?

    e. What resources are available for the evaluation?

  2. Which of the following is NOT likely to decrease data collection costs?

    a. Utilize existing data already being collected

    b. Hire an external evaluator to conduct all aspects of data collection

    c. Program staff complete some or all data collection

  3. Existing data, or data already being collected for your program, cannot be used for evaluation purposes.

    a. True

    b. False

  4. Qualitative data cannot be used for evaluation purposes.

    a. True

    b. False

  5. Which of the following is NOT a common example of an instrument or method used to collect qualitative data?

    a. Focus group

    b. Interview

    c. Satisfaction survey

    d. Onsite observation

  6. If an outcome you are trying to measure involves a change in knowledge, skill, or performance, which would most likely be the best instrument to use to collect data?

    a. Assessment or test

    b. Focus group

    c. Interview

    d. Participant observation

  7. Which of the following is NOT a typical disadvantage of administering surveys?

    a. Low response rates

    b. Time consuming to transcribe and analyze responses

    c. Potential to misunderstand questions

    d. Response options may not capture nuances

  8. Which of the following is NOT a typical advantage of qualitative interviews?

    a. Ability to explore a range and depth of topics

    b. Yields rich data

    c. Obtains responses from a large number of people

    d. Opportunity for interviewer to explain or clarify questions

  9. Which of the following are additional considerations in choosing a data collection method?

    a. Research ethics

    b. Institutional Review Board (IRB)

    c. Data use agreements

    d. All of the above

  10. The concepts of reliability, validity, sampling, generalizability, statistical power, and covariates are additional considerations unique to which of the following?

    a. Outcome evaluation

    b. Process evaluation

    c. Performance measurement

    d. Logic models



Reporting and Using Evaluation Results

  1. Which of the following is NOT an encouraged purpose and use of evaluation?

    a. Part of the culture of a learning organization

    b. A compliance exercise

    c. An investment in program improvement

    d. A tool for building a program’s evidence base

  2. What is the purpose of reporting?

    a. Completes the evaluation process by documenting work done and lessons learned

    b. Provides an opportunity for reflection and learning, generating ideas for program improvement

    c. Monitors and tracks progress in strengthening the program

    d. Demonstrates accountability to stakeholders

    e. Communicates accomplishments and what the program does

    f. All of the above

  3. What is a dissemination plan?

    a. A written plan for all of the research questions you intend to answer and your planned approach for answering each of them

    b. A written document that outlines the steps you took as a part of the evaluation process, including the formulation of research questions, implementation of the evaluation design, and analysis of data to produce results

    c. An organizational tool that visually lays out information needs, products, timelines, priorities, and roles and responsibilities

  4. When creating product content for different stakeholders, it is important to be

    a. Proactive

    b. Reactive

  5. After conducting an evaluation, only one written product, the evaluation report, should be produced to disseminate the results.

    a. True

    b. False

  6. The evaluation report does not need to describe the program theory of change, because stakeholders already understand your program.

    a. True

    b. False

  7. It is common practice in evaluation reports to highlight and discuss evaluation design constraints, issues in data collection, negative/null findings, and study limitations.

    a. True

    b. False

  8. Which of the following is NOT a step in using evaluation results for action and improvement?

    a. Identify program components that are working well

    b. Identify program components that need to be improved

    c. Include a section on programmatic actions for improvement in the evaluation report

    d. Develop and implement an action plan for improvement

  9. Which of the following changes may NOT be relevant when developing an action plan?

    a. The program design

    b. The evaluation design

    c. How a program is implemented

    d. How services are delivered

    e. The staff

  10. Which of the following logistics should be specified in the action plan?

    a. Who will carry out the improvements

    b. When they will take place, and for how long

    c. What resources (e.g., money, staff) are needed to carry out the changes

    d. Who can be an advocate or partner in the change

    e. All of the above



Long-Term Research Agenda

  1. Which of the following does NOT describe a long-term research agenda?

    a. A series of intentional or planned program evaluations and research tools that build towards addressing a research goal

    b. Generally spans several years

    c. A fixed document that does not need revisions once created

    d. Unique and tailored to each individual program

  2. Why is it important to have a long-term research agenda?

    a. It sets clear goals for what program stakeholders want or need to know about the program years into the future

    b. It defines your destination, then identifies the supporting steps that will get you there

    c. It continues to build evidence of program effectiveness

    d. It demonstrates strategic investment of funds in evaluation activities

    e. All of the above

  3. Which of the following is NOT necessary to consider when developing a long-term research agenda?

    a. Hiring an external evaluator

    b. Program maturity

    c. Existing evidence base

    d. Funder requirements and other stakeholder needs

    e. Long-term program goals

    f. Long-term research goals

    g. Evaluation budget

  4. Which of the following provides the most plausible order of the steps along the evidence continuum of a program growing from evidence-informed to evidence-based?

    a. Assess program outcomes; identify a strong program design; obtain evidence of positive program outcomes; ensure effective implementation; attain causal evidence of positive program outcomes

    b. Identify a strong program design; ensure effective implementation; assess program outcomes; obtain evidence of positive program outcomes; attain causal evidence of positive program outcomes

    c. Assess program outcomes; obtain evidence of positive program outcomes; attain causal evidence of positive program outcomes; identify a strong program design; ensure effective implementation

  5. Collecting and monitoring performance measures is represented in which stage of the evidence continuum?

    a. Stage 1: Identify a strong program design

    b. Stage 2: Ensure effective implementation

    c. Stage 3: Assess program outcomes

    d. Stage 4: Obtain evidence of positive program outcomes

    e. Stage 5: Attain causal evidence of positive program outcomes

  6. Conducting an impact evaluation, such as a quasi-experimental evaluation using a comparison group, is represented in which stage of the evidence continuum?

    a. Stage 1: Identify a strong program design

    b. Stage 2: Ensure effective implementation

    c. Stage 3: Assess program outcomes

    d. Stage 4: Obtain evidence of positive program outcomes

    e. Stage 5: Attain causal evidence of positive program outcomes

  7. Conducting a non-experimental outcome evaluation using a single-group pre-post design is represented in which stage of the evidence continuum?

    a. Stage 1: Identify a strong program design

    b. Stage 2: Ensure effective implementation

    c. Stage 3: Assess program outcomes

    d. Stage 4: Obtain evidence of positive program outcomes

    e. Stage 5: Attain causal evidence of positive program outcomes

  8. The evidence continuum is completed in stages, and a program’s movement along the continuum is always linear in nature.

    a. True

    b. False

  9. There is no value to building evidence at all stages along the evidence continuum, and programs should only focus on attaining causal evidence through QEDs and RCTs.

    a. True

    b. False

  10. A long-term research agenda is unique and should be tailored to fit each individual program.

    a. True

    b. False



EXISTING COURSE QUESTIONS

Basic Steps in Conducting an Evaluation


  1. What is the first phase in conducting an evaluation?

    a. Action and Improvement Phase

    b. Planning Phase

    c. Implementation Phase

    d. Development Phase

  2. What is generally the first step in planning for an evaluation?

    a. Budget for an evaluation

    b. Select an evaluator

    c. Build (or review) your program logic model

    d. Define purpose and scope

  3. In what ways can a logic model help serve as a planning tool for your evaluation?

    a. Identifies research questions about your program

    b. Determines an appropriate evaluation design

    c. Identifies data collection methods

    d. All of the above

  4. Which of the following is a research question for an impact evaluation?

    a. Is the program implemented as intended?

    b. What changes occurred as a result of the program?

    c. Both a and b

    d. Neither a nor b

  5. Which of the following cannot serve as an external evaluator for an impact evaluation?

    a. University professor

    b. Consulting firm

    c. Program staff

    d. Any of the above can serve as the external evaluator.

  6. How does a process evaluation differ from an outcome evaluation?

    a. A process evaluation focuses on the inputs, activities, and/or outputs of the logic model.

    b. A process evaluation assesses the implementation of a program.

    c. Both a and b

    d. Neither a nor b

  7. Which of the following is not a component of your evaluation plan?

    a. Program background

    b. Evaluation design

    c. Analysis plan

    d. Research brief

  8. True or False: All program evaluations require new data collection.

    a. True

    b. False

  9. Which of the following should not be included in your report’s findings?

    a. Positive findings

    b. Negative findings

    c. Inconclusive results

    d. Overstated findings

  10. In what areas can evaluation findings support program decisions?

    a. Program design

    b. Program implementation

    c. Program improvement

    d. All of the above



Overview of Evaluation Designs


  1. True or False: Evaluation design is the structure you will use to generate answers to questions you have about your program.

    a. True

    b. False

  2. What is a key consideration in selecting an evaluation design?

    a. Your program model

    b. The primary purpose of the evaluation

    c. The specific question(s) the evaluation will address

    d. All of the above

  3. Which of the following statements is false?

    a. Available resources need to be considered when selecting an evaluation design.

    b. When conducting an evaluation, it is necessary to evaluate your entire program.

    c. Evaluation can be narrow or broad depending on the questions to be answered.

    d. Evaluation is not a one-time activity, but a series of activities over time that align with the life cycle of your program.

  4. Which of the following characterizes process evaluations?

    a. Requires a comparison group

    b. Documents to what extent the program has been implemented as intended

    c. Typically employs advanced statistical methods

    d. All of the above

  5. Which of the following characterizes outcome evaluations?

    a. May include a comparison group

    b. Documents to what extent the program has been implemented as intended

    c. Solely collects qualitative data

    d. Does not require advanced statistical methods

  6. What is the difference between a comparison group and a control group?

    a. Participants receive alternative services in a comparison group and no services in a control group.

    b. A control group is identified through random assignment techniques.

    c. A control group can be any group that is similar to a program’s beneficiaries.

    d. None of the above.

  7. Which of the following is an example of an outcome evaluation question?

    a. Is the intervention being implemented as designed or planned?

    b. Did the program change beneficiaries’ knowledge, attitude, behavior, or condition?

    c. Both a and b

    d. Neither a nor b

  8. Which of the following is another name for an experimental design study?

    a. Single group post design

    b. Single group pre-post design

    c. Retrospective study design

    d. Randomized controlled trial

  9. Which of the following is an advantage of quasi-experimental designs over experimental designs?

    a. More rigorous design option

    b. Ease in identifying a similar comparison group to program beneficiaries

    c. Often less labor intensive and expensive

    d. Greater need for intensive monitoring

  10. Which of the following types of evaluation designs provides the highest level of evidence regarding program outcomes?

    a. Single group pre-post design

    b. Experimental design

    c. Retrospective study design

    d. Quasi-experimental design



How to Write an Evaluation Plan


  1. True or False: An evaluation plan is a written document that should be continually updated with details on all of the evaluation steps and activities you plan to conduct.

    a. True

    b. False

  2. Which of the following statements is false?

    a. An evaluation plan facilitates a smoother transition if staff turnover occurs.

    b. Multiple stakeholders may contribute to the development of an evaluation plan.

    c. An evaluation plan is only useful for certain types of evaluation designs, such as an outcome evaluation.

    d. An evaluation plan may serve as a written understanding between the grantee and external evaluator.

  3. Which of the following information should be included in an evaluation plan to provide context for the research questions you intend to answer and your planned approach for answering each question?

    a. Theory of change and supporting research

    b. Summary of previous evaluations

    c. Program logic model

    d. All of the above

  4. True or False: Including information on your program model in the evaluation plan is not necessary if you only intend to share your evaluation plan with individuals who are familiar with your program.

    a. True

    b. False

  5. Which of the following does not characterize a good research question?

    a. Uses multiple research methods to answer

    b. Will lead to measurable or observable results

    c. Clearly stated and specific

    d. Aligns with your program’s theory of change and logic model

  6. Which of the following is a type of evaluation design?

    a. Process evaluation

    b. Outcome evaluation

    c. Both a and b

    d. Neither a nor b

  7. Which of the following types of outcome evaluation designs require a description of a comparison or control group in the evaluation plan?

    a. Quasi-experimental evaluation

    b. Experimental evaluation

    c. Non-experimental outcome evaluation

    d. Both a and b

  8. Which of the following is NOT true? Your evaluation plan should describe how you plan to answer each research question by specifying:

    a. What information will be collected

    b. What information will not be collected and why

    c. Who/what will be the source of data

    d. When you will collect the data

  9. True or False: Copies of data collection instruments are not a required element of the evaluation plan that recompete AmeriCorps applicants must submit during the Grantee Application Review Process (GARP) but should be included in their fully developed evaluation plan by the end of the first grant year.

    a. True

    b. False

  10. Which of the following statements is false?

    a. Your evaluation plan should include a timeline of when you expect to carry out each of your key evaluation activities specified in your plan.

    b. Your evaluation plan should describe each of your evaluation findings.

    c. Your evaluation plan should include an estimated budget for your evaluation plan.

    d. Information on obtaining Institutional Review Board (IRB) clearance should be included in your evaluation plan if that is deemed necessary for your evaluation.




How to Develop a Program Logic Model



  1. There are four main steps in developing a program logic model. Which of the following is the first step?

    a. Define the program and context for the program

    b. Define the program elements in a table

    c. Test your draft logic model

    d. Convene a work group and gather documents

  2. What considerations need to be weighed when determining what to evaluate?

    a. Focus

    b. Time frame

    c. Budget

    d. All of the above

  3. Which of the following is not a key component of a logic model?

    a. Inputs

    b. Outputs

    c. Budgeting

    d. Outcomes

  4. A program logic model is a visual representation of your program and its theory of change.

    a. True

    b. False

  5. Which of the following indicators is an outcome?

    a. # students participating in tutoring program

    b. Improved academic achievement

    c. Both a and b

    d. Neither a nor b

  6. Which of the following indicators is an output?

    a. # youths completing training program in environmental conservation

    b. Increased perceived social support

    c. Both a and b

    d. Neither a nor b

  7. What types of changes do outcomes measure?

    a. Knowledge

    b. Behavior

    c. Condition

    d. All of the above

  8. Which of the following statements does not characterize program logic models?

    a. A program logic model describes how a program will create change.

    b. Developing a program logic model is an iterative process.

    c. Once completed, a program’s logic model does not change.

    d. Developing a logic model is a collaborative process.

  9. Underlying a logic model is a series of “if-then” relationships that express the causal connections of your program’s theory of change.

    a. True

    b. False

  10. Why develop a logic model?

    a. Help communicate program goals and progress

    b. Support continuous improvement

    c. Use as foundation for evaluation

    d. All of the above






File Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document
Author: Slater, Bethany
File Created: 2021-01-22
