Personal Responsibility Education Program (PREP) Local Evaluation Support

Formative Data Collections for ACF Program Support

Instrument 1 - PREIS Evaluation Plan Template and Guidance_2-28-22_clean (003)

OMB: 0970-0531




PREIS Evaluation Plan Template












THE PAPERWORK REDUCTION ACT OF 1995

This collection of information is voluntary and will be used to provide the Administration for Children and Families with information on PREP grantees’ implementation and evaluation plans in order to support their adolescent pregnancy prevention work. Public reporting burden for this collection of information is estimated to average 12 hours per response, including the time for reviewing instructions, gathering and maintaining the data needed, and reviewing the collection of information. An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB control number. The OMB number and expiration date for this collection are OMB #: 0970-0531, Exp: XX/XX/XXXX. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Michelle Blocklin at [email protected].





CONTENTS

Overview of Evaluation Plan Requirement
Evaluation Plan Components
  1. Introduction
  2. Impact Evaluation
  3. Process Evaluation (optional)
  4. Other Evaluation Activities (optional)
  5. Approvals and Data Security
  6. Evaluation Roles and Responsibilities
Appendix A. Logic Model Template
Appendix B. Contrast Table

Overview of Evaluation Plan Requirement

The purpose of Personal Responsibility Education Innovative Strategies (PREIS) program evaluations is to determine the effectiveness of innovative interventions and/or approaches in changing behavior. Local impact evaluations should be rigorous, meaning they use a randomized controlled trial (RCT) or a high-quality quasi-experimental design (QED). PREIS grantees and evaluators are required to develop a plan for their local evaluation that answers both primary and secondary grantee-specific research questions.

PREIS programs and their local evaluators are required to develop an evaluation plan in collaboration with the Family and Youth Services Bureau (FYSB). FYSB approval will be required prior to implementation of a proposed evaluation plan.

This template is provided to PREIS grantees to assist in the development of their evaluation plan. It includes the required components of an evaluation plan as delineated in the Notice of Funding Opportunity (NOFO) and provides a logical flow for describing them. The evaluation plan template includes six sections: (1) Introduction, (2) Impact Evaluation, (3) Process Evaluation, (4) Other Evaluation Activities, (5) Approvals and Data Security, and (6) Evaluation Roles and Responsibilities. The two appendices provide optional tools on (A) logic model development and (B) specifying contrasts intended to help evaluators think through and fill out sections of the template. Using the tools in the appendices is not required.

The local evaluation support (LES) team will assist grantees in developing their evaluation plans through individualized support as well as additional resources and webinars. Grantees will be expected to submit draft versions of their evaluation plans, or sections of their plans, according to the review schedule they develop in partnership with their LES liaison. This schedule helps ensure the LES team can provide ongoing feedback during plan development, with the expectation that all or almost all components of the draft evaluation plan will have been reviewed at least once by the LES liaison prior to submission of the complete plan in June 2022.

Once evaluators send completed evaluation plans to the LES team, the LES team will review the plans in coordination with FYSB. LES liaisons will support grantees and evaluators in revising evaluation plans until they meet the standards set forth by FYSB and are approved by FYSB. Evaluation Phase 2 activities should not begin until FYSB has approved the plans. However, grantees/evaluators should alert LES liaisons to imminent evaluation activities to ensure their timely approval by FYSB.

Evaluation Plan Components

The following sections present a template for your evaluation plan. Each section includes a brief description and instructions for completing it.



  1. Introduction

The evaluation plan introduction should summarize the rationale for the local evaluation and explain how the local evaluation will help inform current and future programming and expand the evidence base on adolescent pregnancy prevention.

The summary should include whether the local evaluation focus is on:

  • The entire PREIS intervention being implemented

  • The intervention being implemented in one or more (but not all) implementation sites

  • A component of the PREIS intervention

If the program funded by this grant is embedded as part of a larger set of services offered to youth (e.g., funding a new program offering in an afterschool program), please describe how the funded program fits within the larger set of services.

The introduction may also include a brief (1-2 paragraph) summary of the current knowledge base relevant to the specific intervention being evaluated.

    1.1 Intervention Activities

Briefly describe the intervention’s overall approach and goal:

  • What new activities are you introducing or expanding as a result of PREIS funding and for what purposes? Please provide the name of the intervention.

  • What content does the intervention cover?

  • In what setting(s) will the intervention be delivered (e.g., classrooms, after-school)?

  • What is the duration and dosage of the intervention?

  • Who will deliver the intervention (e.g., teachers, trained health educators) and in what format (e.g., in person or online)? What qualifications/trainings are required of facilitators?

  • How is the intervention delivered (e.g., large group, small group, one-on-one)?

  • What adult preparation subjects will be addressed by the intervention?

    1.2 Logic Model and Theory of Change

Logic Model. A systematic and visual way to present and share your understanding of the relationships among the resources you have to operate your program (inputs), the activities you plan (key components), and the changes or results you hope to achieve (outcomes) (W.K. Kellogg Foundation).

Include a logic model for the intervention. A Logic Model Template is included in Appendix A. The logic model should include key components (or activities) of the intervention, necessary inputs, mediators or intermediate outcomes, outcome domains the intervention is designed to improve, and pathways from key components to outcomes.

Include a narrative describing the intervention’s theory of change. The theory of change should:

  • provide a broad framework for and narrative accompaniment to the logic model; and

  • specify how the intervention services work together to achieve the intended change in outcomes for the targeted set of participants.



    1.3 Target Population for the Intervention

Describe and define the intervention’s target population, including characteristics of youth who are eligible to receive the intervention and the targeted communities (or geographic catchment area), and a brief rationale for the selection of this population. The description should include:

  • A summary of the target population characteristics:

      ◦ Age

      ◦ Grade level

      ◦ Race/ethnicity or tribe

      ◦ Sex/gender

      ◦ Risk characteristics

      ◦ Other characteristics (e.g., adjudicated youth, homeless youth, youth in foster care)

  • Where youth will be recruited

  • Eligibility or screening criteria for communities (or geographic catchment areas) and youth to receive program services

  • An estimate of the size of the total population of youth who would be eligible to receive the intervention in the targeted area

  • Relevant contextual background (i.e., why was this population selected?)



  2. Impact Evaluation

Note that some of the subsections below may not be applicable for your specific study design. Please delete any items that are not relevant and make adaptations to other items, as necessary. Note that this section focuses only on the design of the impact evaluation. You may report additional descriptive or other analyses that are not central to the impact study in Section 4.

If you are using more than one evaluation design to answer your research questions, you will need to fill out this section multiple times. For example, if you are using a school-level QED to answer research questions about the effect of the intervention on middle school students, and an RCT to answer research questions about the effect of the intervention on high school students, you would fill out this section twice: once for the QED and once for the RCT. If you are planning multiple designs, the LES team can provide additional guidance.


    2.1 Impact Evaluation Overview

Briefly describe the overall design you plan to use to answer the research questions; for example, is it a quasi-experimental design (QED), a randomized controlled trial (RCT) in which individuals are randomly assigned, or a cluster RCT in which groups are randomly assigned?

      2.1.1 Research Questions

List your research questions. Each research question should include the following:

  • The name of the intervention or component(s) being tested,

  • The target population (e.g., 9th and 10th grade students),

  • The counterfactual condition (i.e., the business-as-usual or other condition to which the treatment will be compared; the condition/services the control or comparison group will be offered),

  • The outcome domain,

      ◦ An outcome domain is a general, or high-level, category of outcomes that may be affected by the treatment. Each domain may be measured using more than one outcome measure. Sexual risk behavior, knowledge, and intentions are examples of three outcome domains. The domain of intentions might include measures like intention to have oral sex in the next 3 months and intention to use a condom during oral sex in the next 3 months.

  • The length of time of exposure to the intervention condition.

For example, a research question that includes all of this information might read as follows: “What is the effect of the two-week Pregnancy Prevention Intervention on the intentions of 9th and 10th grade students compared to their usual health classes that do not include a sexual health curriculum?”

Tip: All research questions should align with the project’s logic model. At least one research question should address an intermediate outcome as depicted in the logic model (e.g., changes in knowledge, attitudes, or skills). All outcomes indicated in your research questions should be included in your logic model. However, not all outcomes included in the logic model need to be measured in the impact evaluation.

Evaluators should carefully consider the total number of research questions being asked, keeping in mind that research questions may be addressed at multiple follow ups (e.g., short term and long term). Having a large number of research questions exacerbates the problem of multiple comparisons, or the possibility of erroneously finding one or more statistically significant findings simply due to testing a large number of hypotheses. Evaluators will need to adjust for multiple comparisons in their analysis.
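For illustration, the sketch below applies one common adjustment, the Benjamini-Hochberg false discovery rate correction; the p-values are placeholders, and your actual adjustment strategy will be detailed in the Phase 3 analysis plan.

# A minimal sketch of one multiple-comparisons adjustment (Benjamini-Hochberg
# false discovery rate); the p-values below are hypothetical placeholders.
from statsmodels.stats.multitest import multipletests

p_values = [0.004, 0.020, 0.035, 0.410]  # one per primary research question
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
for p_raw, p_adj, significant in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p_raw:.3f}, adjusted p = {p_adj:.3f}, significant: {significant}")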

Each research question should be designated as either primary (i.e., those upon which outcome evaluation conclusions will be drawn, sometimes called “confirmatory”) or secondary (i.e., those that might provide additional suggestive evidence, sometimes called “exploratory”). All research questions and outcomes chosen to address these questions must align with the content delivered to youth. Primary research questions must address outcome domains related to at least one of the following behaviors outlined in the PREP legislation: sexual activity, condom or contraceptive use, and teen pregnancy prevention. Primary outcomes related to sexual activity may include, but are not limited to, abstinence, cessation, and decreases in the frequency of sex.

Secondary research questions are related to factors that may have an indirect effect on sexual risk behaviors. Please see the PREIS NOFO for more information.

In addition to listing the research questions in narrative form, please also fill out the table below with key information corresponding to each research question:

| Research Question | Target population | Treatment (intervention or components being tested) | Comparison (counterfactual) condition | Outcome domain | Primary or Secondary? |
|---|---|---|---|---|---|
| 1 | | | | | |
| 2 | | | | | |
| 3 | | | | | |
| 4 | | | | | |
| 5 | | | | | |
| 6 | | | | | |







      1. Study Registration

Please indicate the public-facing registry where the trial will be registered. (If not already determined, FYSB and OPRE encourage grantees to register their trials at clinicaltrials.gov.)

      1. Conflicts of Interest

Please include a statement indicating whether there are any potential conflicts of interest. If so, please describe the conflicts and what steps will be taken to address them.

  • For example, if the evaluator’s organization is providing advice on how to best implement the program and also evaluating the quality of implementation, there could be a reputational conflict of interest. In this example, the conflict could be mitigated by constructing a firewall between the staff providing advice and conducting the evaluation.

    2.2 Treatment and Comparison Conditions

Use this section to describe the treatment and comparison conditions in enough detail so the reader knows exactly what is being tested.

      2.2.1 Treatment Condition

Describe the “treatment” that will be tested. That is, what intervention or components of the intervention will the “treatment group” portion of the evaluation sample be exposed to? Please be clear about whether the treatment to be tested is the same as the intervention activities described in Section 1.1, or whether it is a subset of those activities.

      2.2.2 Control/Comparison Condition

Provide a description of the control or comparison condition. If there are any confounds that could bias the estimated effect of the intervention, please describe them.

Tip: Even when treatment and comparison groups are relatively similar, other characteristics may fundamentally bias the research. These are called “confounds.” To avoid confounds, no characteristic other than treatment status should distinguish the entire treatment group from the entire comparison group. For example, if all treatment individuals were middle schoolers and all comparison individuals were high schoolers, or if all treatment schools were in urban settings and all comparison schools were in rural settings, it would be impossible to disentangle the effect of the intervention from the effect of age or setting.

The description of the control or comparison condition should include:

  • A comprehensive description of the condition the intervention is being compared to, beyond identifying whether it is an alternative intervention, a “business-as-usual” condition, or a “no-treatment” condition. List any other services related to adolescent sexual health that are available to youth in the communities where your intervention will operate. If this information is not fully known, explain what you do know.

  • How the control/comparison condition was selected.

  • Whether there are any plans to offer the intervention to the control/comparison group at a future time point. If so, provide a timeline for those activities.

    2.3 Sample Identification, Selection, and Retention

In this section, you will describe the impact evaluation sample(s) (i.e., the individuals and/or groups that are contributing data to the evaluation, which may be a subset of the individuals or communities who are eligible for the intervention overall, as described in Section 1.3), and how they will be identified and enrolled into the study. You will also describe any anticipated challenges related to reaching the intended population and strategies for addressing those challenges.

      2.3.1 Anticipated Sample Size

Please complete the table below to describe your expected enrollment into the evaluation sample, by treatment condition.

Row A: Indicate the number of cohorts to be enrolled over the evaluation period. If you have a program that enrolls continuously, enter N/A.

Row B: Indicate the number of groups (clusters) to be participating in each cohort. If an individual RCT, enter N/A.

Row C: Indicate the number of youth to be enrolled in each cluster, for each cohort. (An average estimate is acceptable.) For an individual RCT, this is the total sample size for each treatment condition.

| Unit | Treatment | Control |
|---|---|---|
| A. Number of cohorts to be enrolled | | |
| B. Number of clusters to be enrolled per cohort | | |
| C. Number of youth to be enrolled per cluster per cohort | | |



Note: If there are more than two study groups, please add a column for each additional group and label it accordingly. If there are two or more levels of clustering (e.g., schools and classrooms), please add a row and label it accordingly.



Tip: A cohort is defined as a group of individuals who are simultaneously enrolled in the intervention. Consider the following example: A study randomly assigns 10 schools to treatment and control groups. It then enrolls all 9th grade students in those schools into the study in the fall of 2023 and surveys them 9 months later, in the spring of 2024. In the fall of 2024, the study enrolls all of the new 9th grade students in those same 10 schools and follows up with them 9 months later, in the spring of 2025. In this example, the study has two cohorts, each with 10 clusters (schools).



      2.3.2 Identification and selection of clusters (if applicable)

If there are plans to identify and select groups or clusters such as schools, classrooms, or clinics as part of the sample recruitment, describe how those groups will be identified and recruited. Repeat this discussion for each level of clustering that defines either sample selection or assignment to treatment and control conditions. For example, if schools are recruited to participate in the study and, within schools, classes are assigned to treatment and control conditions, repeat this section for both schools and classes. If schools are recruited to participate and schools are assigned to treatment and control conditions, use this section once to describe identification and assignment of schools.

  • Describe the strategies you will use to identify and recruit clusters. If you have already begun to recruit partners, please describe the status of those partnerships.

  • What are the inclusion and exclusion criteria for clusters?

  • If assignment to treatment vs. control/comparison conditions occurs at the cluster level, provide a detailed description of the assignment procedure.

Matched Design. A type of quasi-experimental design that can be used when random assignment is not feasible. In such a design, a comparison group is chosen from among potential candidates such that the observable characteristics of the treatment and comparison groups are as similar as possible. There are many types of matching, the most popular of which is matching based on the propensity score, or predicted probability of group membership.

Sometimes evaluators randomly assign clusters within blocks to ensure comparisons can be made within the blocks. (For example, in a multi-state evaluation you might randomly assign schools separately within each state to ensure you can estimate impacts separately by state). If you are planning to do this, describe the blocks and procedures.

For matched designs, describe the dimensions on which clusters will be matched (for example, you might match schools with similar percentages of students receiving free and reduced-price lunch) and the procedures for matching (for example, one-to-one matching with or without replacement); a minimal, illustrative matching sketch appears at the end of this subsection.

  • Clearly state whether or not all of the individuals who will be included in the impact analysis were members of the cluster prior to assignment (e.g., were students enrolled in the study schools prior to the random assignment of schools; were youth receiving services at a clinic prior to random assignment of clinics).
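To make the matching procedures concrete, here is a minimal, illustrative sketch of one-to-one nearest-neighbor matching on an estimated propensity score, without replacement. It assumes the comparison pool is at least as large as the treatment group; the function and variable names are hypothetical, not a prescribed method.

# A minimal propensity-score matching sketch: one-to-one nearest-neighbor
# matching without replacement. X_treat and X_pool are hypothetical arrays of
# cluster-level covariates (rows = clusters, columns = matching dimensions).
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_one_to_one(X_treat, X_pool):
    # Estimate the propensity score: predicted probability of treatment
    # membership given the matching covariates.
    X = np.vstack([X_treat, X_pool])
    y = np.array([1] * len(X_treat) + [0] * len(X_pool))
    scores = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    treat_scores = scores[:len(X_treat)]
    pool_scores = scores[len(X_treat):]

    # Match each treatment cluster to the nearest unmatched comparison cluster;
    # assumes the comparison pool is at least as large as the treatment group.
    available = set(range(len(X_pool)))
    matches = {}
    for i, s in enumerate(treat_scores):
        j = min(available, key=lambda k: abs(pool_scores[k] - s))
        matches[i] = j
        available.remove(j)
    return matches  # maps treatment index -> matched comparison index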

      2.3.3 Identification and selection of individuals

Describe how individuals will be identified and recruited into the study (e.g., all students in a school or in relevant grades, or all students in the classes included in the study).

  • What are the inclusion and exclusion criteria for individual youth?

  • Will partners or implementation sites be responsible for recruiting study participants?

      ◦ If so, please describe how you will collaborate with partners to ensure they meet their enrollment targets.

  • Describe any anticipated challenges related to reaching the intended youth population.

  • Describe the strategies you will use to recruit participants into the study and how the strategies will address the recruitment challenges you anticipate.

  • Describe the procedures you will use to collect consent/assent from study participants, or consent from their parent or guardian and assent from the participant (if needed). Describe the timing of consent/assent relative to the timing of random assignment.

  • Describe the timing of identification of ineligible individuals.

      ◦ For RCTs, will all ineligible individuals be identified prior to random assignment? If ineligible individuals can be identified after random assignment, explain the extent to which they can be identified equally well in the treatment and control groups. Describe whether determination of ineligibility is clearly unrelated to treatment assignment.

  • For RCTs, describe whether schools or other organizations will be allowed to exempt individuals from random assignment. If so, describe the criteria that will be used for exempting individuals and the timing of the exemption procedures.

  • For cluster RCT studies, indicate whether the individuals in the analysis sample were members of the randomized units (e.g., schools, classes, teachers) prior to randomization, or whether any individuals in the analysis sample could have joined the randomized clusters after randomization. If so, describe when students join clusters relative to the timing of randomization of the clusters.

  • If assignment to treatment vs. control/comparison conditions occurs at the individual level, provide a detailed description of the assignment procedure, including the random assignment ratio, any software to be used, and other relevant procedures (see the sketch after this list).

      ◦ For example, for RCTs, if individuals are randomized to treatment and control conditions within blocks (e.g., within schools; within grade levels within schools; or within gender, within grade levels, within schools), describe the blocks and procedures.

      ◦ For example, for matched designs, describe the dimensions on which individuals will be matched, and describe the procedures for matching.
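To illustrate the level of procedural detail to document, below is a minimal sketch of blocked random assignment with a 1:1 ratio within hypothetical school-by-grade blocks and a fixed seed for reproducibility; the field names are illustrative assumptions, not a prescribed procedure.

# A minimal sketch of blocked random assignment: a 1:1 ratio within each
# school-by-grade block, with a fixed seed for reproducibility. The keys
# ("id", "school", "grade") are hypothetical field names.
import random

def assign_within_blocks(participants, seed=20230901):
    rng = random.Random(seed)  # record the seed in your documentation
    blocks = {}
    for person in participants:
        blocks.setdefault((person["school"], person["grade"]), []).append(person)
    assignment = {}
    for members in blocks.values():
        rng.shuffle(members)
        cutoff = len(members) // 2  # first half treatment, remainder control
        for i, person in enumerate(members):
            assignment[person["id"]] = "treatment" if i < cutoff else "control"
    return assignment

Recording the seed, software, and version alongside the procedure keeps the assignment auditable and reproducible.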

What are the expected start and end dates for program and evaluation enrollment?


| | Program Enrollment | Evaluation Enrollment |
|---|---|---|
| Expected start date | | |
| Expected end date | | |





      2.3.4 Tracking & Retention of individuals

Describe how you will engage youth and retain them in the program. Describe engagement strategies and any incentives you are offering for completing the program. What is the expected attendance or dosage rate?

What strategies will you use to track and retain youth enrolled in the study? Describe strategies for maintaining contact with youth, including engagement activities and incentives. For individual-level data collection, describe your plan for tracking participants for follow-up data collection. Describe the anticipated response rate (i.e., your best guess) for each round of follow-up data collection. For randomized controlled trials, use the randomly assigned sample as the denominator. For quasi-experimental studies, use the baseline sample as the denominator.

_____ First, what is the number of individuals you expect to randomly assign (RCT) or identify at baseline (QED)?

Now, please provide your best guess as to the response rate for each of the following:

_____ Baseline (Wave 1)

_____ Post-program (Wave 2)

_____ Short-term follow-up (Wave 3)

_____ Long-term follow-up (Wave 4)


    2.4 Data Collection

      2.4.1 Data Collection Plan

For each research question in Section 2.1, describe your data sources and measures, including:

  • Where will you get the data (data sources),

  • For whom will you collect the data (sample)? For example, will you collect data for all treatment and control group members, or only for a subset who engaged in certain services?

  • Who will collect the data (party responsible for data collection),

  • How will it be collected (data collection method), and

  • The timing for baseline and follow-up periods (timing).

Please describe any costs associated with obtaining the measures you will use and whether these are accounted for in your budget.

Provide a description of, and brief rationale for, the timing of data collection: Are you measuring from the time of enrollment (e.g., 6 months after the participant completes the baseline survey), the time of intervention completion (e.g., 6 months after a participant is expected to complete the intervention), or something else? If based on completion of the intervention, how will you know whether/when a participant has completed their engagement with the intervention and how will you determine similar timing for members of the control/comparison group or for participants who drop out? Describe the data collection window for each time point.

For individual-level data collection, describe your plan for tracking participants for follow-up data collection. Please attach to the plan any data collection instruments you have developed, such as participant surveys, interview protocols, or focus group discussion guides.

| Impact research question | Data sources (and measures) | Sample | Party responsible for data collection | Data collection method | Timing |
|---|---|---|---|---|---|
| 1 | | | | | |
| 2 | | | | | |
| 3 | | | | | |


TIP: All outcomes measured in the impact evaluation should be included in the logic model, and at least one intermediate outcome included in the logic model should be measured. However, not all outcomes included in the logic model need to be measured in the impact evaluation.



      2.4.2 Specific Measures, Tools, and Instruments

For each measure listed in the table above, provide a description of the following:

  • Rationale for selection (i.e., why the measure is important or relevant for the intervention)

  • Intended respondents/sample (intended by measure authors)

  • Psychometric information (i.e., reliability and validity), or plans to assess if developing measures

  • Relevant study citations


You may consider using the table below for this information. (The first two columns include examples of outcomes and measures; these examples are neither required nor exhaustive):

| Outcome (domain) | Measure | Rationale for Selection | Intended respondents | Psychometric Information | Citation(s) |
|---|---|---|---|---|---|
| Recent vaginal sex (recent sexual activity) | Vaginal sex in past 3 months | | | | |
| Recent oral sex (recent sexual activity) | Oral sex in past 3 months | | | | |
| Condom use (unprotected sex) | Sex without a condom in past 3 months | | | | |
| Birth control use (unprotected sex) | Sex without birth control in past 3 months | | | | |
| Intentions for sexual activity (intentions) | Intend to have sexual intercourse in next 3 months | | | | |
| Refusal skills (skills) | Average of 3 items (1-5 Likert scale) on perceived ability to say no to sexual activity | | | | |
| Etc. | | | | | |








      2.4.3 Anticipated Challenges

Indicate which data you think will be the most challenging to collect and why. Describe what strategies you will put in place to address those challenges.

      2.4.4 Data Sharing/Data Use Agreements

Describe your plans for obtaining any necessary data sharing and data use agreements. Please attach draft, final, and/or executed agreements, if available.

    2.5 Analysis

      2.5.1 Analysis Plan

Provide a brief (1-2 paragraph) overview of your plans for conducting analysis, including how you will handle outliers and missing data, which models you will run, and what software you will use.

  • For example, will you use a linear regression model? Will that model include baseline covariates?

  • What software will you use to implement the model?

You will be required to submit a more detailed analysis plan during Phase 3, at which time the LES team will provide an OMB-approved analysis plan template. The Phase 3 analysis plan will include descriptions of your strategy for dealing with multiple comparisons; “Greek” model specifications; plans for assessing outliers and baseline equivalence in the analytic sample; plans for calculating attrition; treatment of missing data; and treatment of inconsistent data responses.
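As a minimal illustration of the level of specificity to aim for, the sketch below fits a linear regression of a follow-up outcome on treatment status and baseline covariates with robust standard errors; the file name and column names are hypothetical.

# A minimal impact-analysis sketch: linear regression of a follow-up outcome
# on treatment status and baseline covariates, with robust standard errors.
# The file name and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analytic_sample.csv")
model = smf.ols(
    "outcome_followup ~ treatment + outcome_baseline + age + C(site)",
    data=df,
).fit(cov_type="HC2")  # heteroskedasticity-robust standard errors
print(model.summary())  # the coefficient on treatment estimates the impact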

      2.5.2 Contrasts

Describe the test(s)/contrast(s) that will answer each of your research questions, and note whether the test is primary (i.e., those upon which you will draw outcome evaluation conclusions) or secondary (i.e., those that might provide additional suggestive evidence). To help complete this section, we suggest completing the contrast table in Appendix B.

      2.5.3 Subgroups (optional)

Describe any anticipated subgroups for which you will conduct additional analyses (e.g., subgroups based on age, race/ethnicity, gender).

      2.5.4 Minimum Detectable Effect for Planned Sample

Present a power analysis for two key outcomes showing that the proposed sample size will provide adequate statistical power to detect program impacts/effects. For studies with two or more primary outcomes, the power analysis should focus only on those two chosen key primary outcomes. Along with the Minimum Detectable Impact (MDI) and Minimum Detectable Effect Size (MDES), please make sure to report the following assumptions:

  • The level of significance (usually 0.05, i.e., 5 percent)

  • The number of sides of the test (usually two-tailed)

  • The power (usually 80 percent)

  • The size of the analytic sample (i.e., the number of treatment and control group members at follow up)

  • For binary outcomes, the mean of the outcome

  • For continuous outcomes, the standard deviation of the outcome

  • The R-squared (i.e., proportion of outcome variance explained by covariates)

  • For cluster RCTs, the intraclass correlation coefficient (ICC)

  • For cluster RCTs, the proportion of group-level outcome variance explained by covariates

For additional information on power analysis, see https://opa.hhs.gov/sites/default/files/2020-07/mdi-tabrief.pdf.

Tip: When conducting your power analyses, make sure that you factor in the response rates that you anticipate achieving. For example, if your baseline sample size is 1,000 (500 treatment group members and 500 control group members) and you expect an 80% response rate on the long-term follow-up survey, your power analysis should assume a sample size of 800 (400 treatment and 400 control group members). Your LES Liaison can provide support and additional resources for conducting a power analysis.
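The sketch below works through an MDES calculation for an individual-level RCT using the normal approximation and the hypothetical numbers from the tip above; a cluster RCT would further inflate the MDES to account for the design effect implied by the ICC.

# A minimal MDES sketch for an individual-level RCT using the normal
# approximation; all numbers are hypothetical and mirror the tip above.
from scipy.stats import norm

alpha = 0.05          # two-tailed significance level
power = 0.80
r_squared = 0.20      # outcome variance explained by covariates (assumed)
baseline_n = 1000     # randomly assigned sample
response_rate = 0.80  # anticipated follow-up response rate
n_t = n_c = baseline_n * response_rate / 2  # 400 treatment, 400 control

multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # about 2.80
mdes = multiplier * ((1 - r_squared) * (1 / n_t + 1 / n_c)) ** 0.5
print(f"MDES = {mdes:.3f} standard deviations")  # about 0.18 SD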


    2.6 Timeline

Include a timeline of all your post-pilot impact evaluation activities, such as IRB submission, waves of data collection, analysis, and interim and final report writing/submission. Example activities are included in the table below, but you can add/edit as needed.

| Impact Evaluation Activity | Start Date | End Date |
|---|---|---|
| IRB submission | | |
| Baseline data collection | | |
| Wave (1, 2, 3) follow-up data collection | | |
| Analysis | | |
| Reporting | | |
| Dissemination | | |



  3. Process Evaluation (optional)

Use this section if you wish to include an optional process evaluation design plan. This section of the template is meant to provide guidance and suggestions for what topics to include in such a plan. No specific items are required.

PREIS grantees are not required to design or conduct a process/implementation study of their PREIS-funded interventions, although FYSB expects that the final evaluation report will document and report on implementation. Nonetheless, a process evaluation that provides high-quality implementation and performance feedback may yield important information about when, why, and how interventions work. A process study plan might include topics such as plans for defining and assessing fidelity of implementation, defining and measuring the reach of the intervention, and documenting implementation drivers, barriers to implementation, and any solutions for overcoming those barriers.

    3.1 Research Questions

List each research question to be examined in the process/implementation study. For each question, clearly link the question to the discrete logic model elements to be explored and include the research design that will be implemented to answer the question, using Table 3.1 below.

Table 3.1

| Research Question | Logic Model elements to be measured (e.g., input, activity, output, outcome) | Design |
|---|---|---|
| 1. | | |
| 2. | | |
| 3. | | |
| 4. | | |




If you are using more than one research design to answer process study research questions, you will need to repeat this section multiple times. The LES team can provide additional guidance.



    3.2 Data Collection Plan

Using Table 3.2 below, describe the elements of your data collection plan for each research question.

  • Indicator/Measure: What is the operational definition of each indicator? How will it be systematically measured (e.g., percent of intervention sessions attended)?

  • Data Source: What tool or method will be used to collect the data on a given measure (e.g., attendance logs, client self-report)? Note if the data collected will be primary (collected directly from participants) or secondary data (collected by someone else at an earlier timepoint).

  • Sample: Who are the specific intended respondents? What is the expected sample size?

  • Data collection schedule: What is the schedule and frequency of data collection for each indicator?

  • Person responsible: Who will be responsible for data collection for each indicator?

  • Methodology: How will the data be collected (e.g., interview, program administrative records)?

Table 3.2

| RQ# | Indicator/Measure | Data Source | Sample | Data collection schedule | Person Responsible | Methodology |
|---|---|---|---|---|---|---|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |


    3.3 Measures, Tools, and Instruments

For each measure listed in the data collection plan, provide a description of the following (if applicable):

  • Rationale for selection

  • Intended Respondents/Sample (intended by measure authors)

  • Authors

  • Psychometric information (i.e., reliability and validity), or plans to assess if developing measures

  • Relevant study citations

    3.4 Data Sharing/Data Use Agreements

Describe your plans for obtaining any necessary data sharing and data use agreements.

    3.5 Sampling Plan

Provide a description of your sample or sampling plan, including the following:

  • Describe the sampling method (if you will not collect data from the full population, e.g., all implementation sites)

  • Describe the sample recruitment strategy

  • Describe the inclusion/exclusion criteria for individual study participants as well as implementation sites, communities, or geographic catchment areas.

    3.6 Analytic Approach

Grantees will develop a detailed data analysis plan during Phase 3 and will be provided an analysis plan template for that purpose. However, it is helpful to consider general ideas about the analytic approach early in evaluation planning. In this section, provide your initial thoughts on the following elements:

  • Describe your analysis plan for answering each research question (qualitative or quantitative analysis or both).

  • If using quantitative analysis, provide information on your selected statistical method and how each of the variables in Table 3.2 above will be analyzed to answer your research questions.

  • If using primarily qualitative data analysis, describe the plan for data reduction (i.e., coding, defining themes and patterns), testing validity (i.e., triangulation, validation procedures), and qualitative data analysis software to be utilized.

    3.7 Timeline

Include a timeline for all activities of the process evaluation. Example activities are included in the table below, but you can add/edit as needed.

| Process Evaluation Activity | Start Date | End Date |
|---|---|---|
| IRB submission | | |
| Data collection | | |
| Analysis | | |
| Reporting | | |
| Dissemination | | |



  4. Other Evaluation Activities (optional)

This section is optional. If you choose to document plans for other evaluation activities, please briefly describe any that are not part of the impact or process evaluations (for example, descriptive analyses), even if they were not originally included in the grant proposal. Describe the research questions and your plans to answer them.

Examples might include:

  • A treatment-on-the-treated study, linking dosage to outcomes

  • A mediational analysis examining the link between the intervention and outcomes via intermediate outcomes

  • An ethnographic study of a small subset of treatment group members designed to learn more about their day-to-day needs and experiences with the intervention



  5. Approvals and Data Security

    5.1 Plan for IRB Approval (Phase 2, after design plan approval)

Describe your plan for obtaining IRB approval, including the IRB of record, the timeline for submission and approval, and the anticipated date of approval. Also include the plan for staff training on human subjects protection.

    5.2 Plan for Federalwide Assurance (during Phase 1 or after approval in Phase 2)

Grantees conducting a local evaluation will be required to obtain a Federalwide Assurance (FWA; for more information, see http://www.hhs.gov/ohrp/assurances/assurances/filasurt.html). Describe your plan for obtaining an FWA.

    5.3 Data Security & Privacy

Describe your plan for data security and privacy for your local evaluation, including staff training, and procedures to obtain consent and protect the privacy of participants and the confidentiality of data.

  6. Evaluation Roles and Responsibilities

Grantees’ local evaluations must be conducted by an independent evaluator with demonstrated experience. Please provide the contact information for, and summarize the experience of, your selected local evaluator.

Please complete the table below for your individual evaluation team members. Both grantee and evaluator staff should be involved in developing the evaluation plan. Frequent and timely communication between program staff and evaluators ensures that evaluators clearly understand the project and that its goals are reflected in the evaluation plan.

| Name | Organization | Role in Evaluation |
|---|---|---|
| | | |
| | | |
| | | |

Appendix A. Logic Model Template

Grant: (name) Logic Model (use text boxes: add/change boxes and arrows as needed)

[Logic model diagram template: arrange text boxes for inputs, key components, mediators/intermediate outcomes, and outcomes, connected by arrows.]

Appendix B. Contrast Table

Below we provide a contrast table, including examples for two research questions.

| Research Question: Primary/Secondary | Design | Target Population* | Sample Eligibility Criteria | Treatment Group: Description* | Comparison Group: Condition/Description* | Outcome: Domain* | Outcome: Unit of assignment/observation; Measure [Scale] | Outcome: Timing of measurement | Baseline (if applicable): Unit of assignment/observation; Measure [Scale] | Baseline (if applicable): Timing of measurement |
|---|---|---|---|---|---|---|---|---|---|---|
| RQ 1 | RCT | High school freshmen in low-performing schools | All students enrolled in health classes in participating schools | Innovative Program A | Business as usual | Sexual intercourse | Individual participants: Survey – sexual intercourse in past month | 6 months and 12 months after random assignment | Individual participants: Survey – sexual intercourse in past month | Immediately prior to random assignment |
| RQ 2 | RCT | High school freshmen in low-performing schools | All students enrolled in health classes in participating schools | Innovative Program A | Business as usual | Unprotected sex | Individual participants: Survey – sex without a condom in past month | 6 months and 12 months after random assignment | Individual participants: Survey – sex without a condom in past month | Immediately prior to random assignment |
| | | | | | | | | | | |
| | | | | | | | | | | |

* Indicates one of the four components of your impact evaluation research questions

Example Research Question 1: To what extent did six weeks of Innovative Program A reduce the incidence of sexual intercourse among high school freshmen compared to high school freshmen who did not receive the intervention?

Example Research Question 2: To what extent did six weeks of Innovative Program A reduce the incidence of unprotected sex among high school freshmen compared to high school freshmen who did not receive the intervention?



