THE PAPERWORK REDUCTION ACT OF 1995 This collection of information is voluntary and will be used to provide the Administration for Children and Families with information to help refine and guide SRAE program development. Public reporting burden for the collection of information is estimated to average 32 hours per response, including the time for reviewing instructions, gathering and maintaining the data needed, and reviewing the collection of information. An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB control number. The OMB number and expiration date for this collection are OMB #: XXXX-XXXX, Exp: XX/XX/20XX. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Susan Zief [email protected].
[Feel free to add graphic or logo]
Evaluation
of
[Intervention Name] in
[Geographic area]
Final Impact Report for
[Grantee Organization]
[Date]
Prepared by
[Evaluator Organization Authors]
Recommended Citation: [Authors]. (Year). Evaluation of [Intervention Name(s)] in [Geographic Area]. Evaluator location: evaluator organization.
Acknowledgements:
[In this space, please list the contributors to this evaluation report, including reviewers and editors, whom you would like to acknowledge. Feel free to acknowledge anyone critical to making the evaluation or the program implementation possible.]
Disclosure:
[In this space, please disclose any conflicts of interest, financial or otherwise.]
This publication was prepared under Grant Number [Insert Grant Number] from the Family and Youth Services Bureau within the Administration for Children and Families (ACF), U.S. Department of Health and Human Services (HHS). The views expressed in this report are those of the authors and do not necessarily represent the policies of HHS, ACF, or the Family and Youth Services Bureau.
Evaluation Abstract:
[Copy from your pre-existing abstract as available. Note: The Methods and Findings sections are new and should be completed as part of this report. In the Findings section, please present both implementation and impact findings, and weave those findings together.]
“The Evaluation of [Intervention Name] in [Geographic Area]”
Grantee
Grantee Name:
Project Lead:
Email address:
Evaluator
Evaluator’s Organization:
Evaluator Lead:
Email address:
Intervention Name
[Start writing here.]
Intervention Description
[Start writing here.]
Comparison Condition
[Start writing here.]
Comparison Condition Description
[Start writing here.]
Behavioral Outcome(s)
[Start writing here.]
Non-behavioral Outcomes
[Start writing here.]
Sample and Setting
[Start writing here.]
Research Design
[Start writing here.]
Data Collection
[Start writing here.]
Methods
[Start writing here.]
Findings
[Start writing here.]
The Evaluation of [Intervention Name] in [Geographic Area]
I. Introduction
A. Introduction and study overview
[Copy and paste text/Start writing here.]
B. Primary research question(s)
[Copy and paste text/Start writing here.]
C. Secondary research question(s)
[Copy and paste text/Start writing here.]
II. Programming for treatment and comparison groups
[Provide an overview of the section here.]
A. Description of program as intended
[Copy and paste text/Start writing here.]
B. Description of comparison condition
[Copy and paste text/Start writing here.]
III. Impact evaluation design
[Provide an overview of the section here.]
A. Identification and recruitment of the study participants
[Copy and paste text/Start writing here.]
B. Research design
[Copy and paste text/Start writing here.]
C. Data collection
[Copy and paste text/Start writing here.]
D. Measures
[Copy and paste text/Start writing here.]
[Copy and paste Tables III.1 and III.2 here.]
E. Study sample
[Copy and paste text/Start writing here.]
[Copy and paste Table III.3 (a or b) here.]
F. Baseline equivalence and sample characteristics
[Copy and paste text/Start writing here.]
[Copy and paste Table III.4 here.]
G. Methods
[Copy and paste text/Start writing here.]
IV. Implementation evaluation
[Copy and paste text/Start writing here.]
V. Study findings
A. Implementation study findings
[Copy and paste text/Start writing here.]
B. Impact study findings
[Copy and paste text/Start writing here.]
[Copy and paste Tables V.2 and V.3 here.]
C. Additional analyses (if applicable – if not, delete this section)
[Copy and paste text/Start writing here.]
[Copy and paste any additional analysis tables here.]
VI. Conclusion
A. Summary
[Copy and paste text/Start writing here.]
B. Limitations
[Copy and paste text/Start writing here.]
C. Discussion
[Copy and paste text/Start writing here.]
VII. References
[Copy and paste text/Start writing here.]
Appendix A: Logic model
[Paste logic model here]
Appendix B: Implementation Evaluation
[Copy and paste text/Start writing here.]
[Copy and paste Table B.1.]
Appendix C: Model specification
[Copy and paste text/Start writing here.]
Appendix D: Data cleaning
[Copy and paste text/Start writing here.]
Appendix E: Implementation Evaluation Methods
[Copy and paste text/Start writing here.]
Appendix F: Intent-to-Treat analyses
[Copy and paste text/Start writing here.]
Appendix S: Sensitivity Analyses
[Copy and paste text/Start writing here.]
[Copy and paste Tables S.1 and S.2 here.]
Table III.1. Outcome measures used to answer the primary impact analysis research questions (NOTE: this template includes an example in italics as a SAMPLE to consider for your own report; please remove it before completing the table)
Behavioral outcome measure name | Source item(s) | Constructed measure | Timing of measure
Ever had sexual intercourse | Have you ever had sexual intercourse? | Dichotomous variable coded as 1 if answered yes, 0 if no, and missing otherwise. | 6 months after program ends
 | | |
Table III.2. Outcome measures used to answer the secondary impact analysis research questions
Outcome measure name | Source item(s) | Constructed measure | Timing of measure
Ever had sexual intercourse | Have you ever had sexual intercourse? | Dichotomous variable coded as 1 if answered yes, 0 if no, and missing otherwise. | 12 months after program ends
 | | |
Table III.3a. Cluster and youth sample sizes by intervention status (Only use for studies with cluster-level assignment; if your design uses individual-level assignment, use Table III.3b)
Number of: | Time period | Total | Intervention sample size | Comparison sample size | Total response rate | Intervention response rate | Comparison response rate
Clusters | | | | | | |
Clusters: At beginning of study | | 1c = (1a + 1b) | 1a | 1b | N/A | N/A | N/A
Clusters: At least one youth completed baseline survey | Baseline | 2c = (2a + 2b) | 2a | 2b | = 2c/1c | = 2a/1a | = 2b/1b
Clusters: At least one youth completed follow-up | Immediately post-programming | 3c = (3a + 3b) | 3a | 3b | = 3c/1c | = 3a/1a | = 3b/1b
Clusters: At least one youth completed follow-up | 6 months post-programming | 4c = (4a + 4b) | 4a | 4b | = 4c/1c | = 4a/1a | = 4b/1b
Clusters: At least one youth completed follow-up | 12 months post-programming | 5c = (5a + 5b) | 5a | 5b | = 5c/1c | = 5a/1a | = 5b/1b
Youth | | | | | | |
Youth in non-attriting clusters^a | | | | | | |
Youth: At time that clusters were assigned to condition | | 6c = (6a + 6b) | 6a | 6b | N/A | N/A | N/A
Youth: Who consented^b | | 7c = (7a + 7b) | 7a | 7b | = 7c/6c | = 7a/6a | = 7b/6b
Youth: Completed a baseline survey | Baseline | 8c = (8a + 8b) | 8a | 8b | = 8c/6c | = 8a/6a | = 8b/6b
Youth: Completed a follow-up survey | Immediately post-programming | 9c = (9a + 9b) | 9a | 9b | = 9c/6c | = 9a/6a | = 9b/6b
Youth: Included in the impact analysis sample at follow-up (accounts for item non-response)^c | Immediately post-programming | 10c = (10a + 10b) | 10a | 10b | = 10c/6c | = 10a/6a | = 10b/6b
Youth: Completed a follow-up survey | 6 months post-programming | 11c = (11a + 11b) | 11a | 11b | = 11c/6c | = 11a/6a | = 11b/6b
Youth: Included in the impact analysis sample at follow-up (accounts for item non-response)^c | 6 months post-programming | 12c = (12a + 12b) | 12a | 12b | = 12c/6c | = 12a/6a | = 12b/6b
a For all rows in this section, do not include youth from clusters that dropped (attrited) over the course of the study. For example, if you randomly assigned 10 clusters (5 to each condition), and one intervention group cluster (e.g. school) dropped from the study, you would only include youth in this section from the 9 clusters that did not drop from the study. Because the cluster-level response rate in the above rows already captures that dropped cluster, you do not need to count youth from the lost clusters in your youth-level response rates.
b If consent occurred before assignment, delete this row. Add a note at the bottom of the table indicating that consent occurred before random assignment.
c See guidance in section III.E for defining your analytic sample(s).
Table III.3b. Youth sample sizes by intervention status (Only use for studies with individual-level assignment; if your design uses cluster-level assignment, use Table III.3a instead)
Number of youth | Time period | Total sample size | Intervention sample size | Comparison sample size | Total response rate | Intervention response rate | Comparison response rate
Assigned to condition | | 1c = (1a + 1b) | 1a | 1b | N/A | N/A | N/A
Completed a baseline survey | Baseline | 2c = (2a + 2b) | 2a | 2b | = 2c/1c | = 2a/1a | = 2b/1b
Completed a follow-up survey | Immediately post-programming | 3c = (3a + 3b) | 3a | 3b | = 3c/1c | = 3a/1a | = 3b/1b
Included in the impact analysis sample at follow-up (accounts for item non-response)^a | Immediately post-programming | 4c = (4a + 4b) | 4a | 4b | = 4c/1c | = 4a/1a | = 4b/1b
Completed a follow-up survey | 6 months post-programming | 5c = (5a + 5b) | 5a | 5b | = 5c/1c | = 5a/1a | = 5b/1b
Included in the impact analysis sample at follow-up (accounts for item non-response)^a | 6 months post-programming | 6c = (6a + 6b) | 6a | 6b | = 6c/1c | = 6a/1a | = 6b/1b
a See guidance in section III.E for defining your analytic sample(s).
Table III.4. Summary statistics of key baseline measures for youth completing [Survey follow-up period]
Baseline measure | Intervention proportion or mean (standard deviation) | Comparison proportion or mean (standard deviation) | Intervention versus comparison difference | Intervention versus comparison p-value of difference
Age or grade level | | | |
Gender (female) | | | |
Race/ethnicity | | | |
Hispanic | | | |
Non-Hispanic White | | | |
Non-Hispanic Black | | | |
Non-Hispanic Asian | | | |
Behavioral outcome measure 1 | | | |
Behavioral outcome measure 2 | | | |
Non-behavioral outcome measure 1 | | | |
Non-behavioral outcome measure 2 | | | |
Sample size | | | |
Table V.1. Targets and findings for each measure used to answer implementation evaluation research questions (NOTE: example data included in italics. Please remove before completing the table)
Implementation element | Research question | Measure | Target | Results
Fidelity | Were all intended program components offered and for the expected duration? | | |
Fidelity | What content did the youth receive? | | |
Fidelity | Who delivered services to youth? | | |
Fidelity | What were the unplanned adaptations to key program components? | | |
Dosage | How often did youth participate in the program on average? | | |
Quality | What was the quality of staff–participant interactions? | | |
Engagement | How engaged were youth in the program? | | |
Context | What other pregnancy prevention programming was available to study participants? | | |
Context | What external events affected implementation? | | |
Table V.2. Post-intervention estimated effects using data from [Survey follow-up time period] to address the primary research questions
Outcome measure | Intervention proportion or mean (standard deviation) | Comparison proportion or mean (standard deviation) | Intervention compared with comparison difference (p-value of difference)
Behavioral Outcome 1 | | |
Behavioral Outcome 2 | | |
Behavioral Outcome 3 | | |
Behavioral Outcome 4 | | |
Sample Size | | |
Source: [Name for the Data Collection, Date. For instance, follow-up surveys administered 12 to 14 months after the program.]
Notes: [Anything to note about the analysis. See Table III.1 for a more detailed description of each measure and Chapter III for a description of the impact estimation methods.]
Table V.3. Post-intervention estimated effects using data from [Survey follow-up time period] to address the secondary research questions
Outcome measure | Intervention proportion or mean (standard deviation) | Comparison proportion or mean (standard deviation) | Intervention compared with comparison difference (p-value of difference)
Outcome 1 | | |
Outcome 2 | | |
Outcome 3 | | |
Outcome 4 | | |
Sample Size | | |
Source: [Name for the Data Collection, Date. For instance, follow-up surveys administered 6 to 8 months after the program.]
Notes: [Anything to note about the analysis. See Table III.2 for a more detailed description of each measure and Chapter III for a description of the impact estimation methods.]
Table B.1. Data used to address implementation research questions (NOTE: example data included in italics. Please remove before completing the table)
Implementation element | Research question | Measure | Data collection frequency/sampling | Data collectors
Fidelity | Were all intended program components offered and for the expected duration? | | |
Fidelity | What content did the youth receive? | | |
Fidelity | Who delivered services to youth? | | |
Fidelity | What were the unplanned adaptations to key program components? | | |
Dosage | How often did youth participate in the program on average? | | |
Quality | What was the quality of staff–participant interactions? | | |
Engagement | How engaged were youth in the program? | | |
Context | What other pregnancy prevention programming was available to study participants? | | |
Context | What external events affected implementation? | | |
Table S.1. Sensitivity of impact analyses using data from [Survey follow-up period] to address the primary research questions
Intervention compared with comparison | Benchmark approach difference | Benchmark approach p-value | [Name of sensitivity approach 1] difference | [Name of sensitivity approach 1] p-value | [Name of sensitivity approach 2] difference | [Name of sensitivity approach 2] p-value | [Name of sensitivity approach 3] difference | [Name of sensitivity approach 3] p-value | [Name of sensitivity approach 4] difference | [Name of sensitivity approach 4] p-value
Behavioral Outcome 1 | | | | | | | | | |
Behavioral Outcome 2 | | | | | | | | | |
Behavioral Outcome 3 | | | | | | | | | |
Behavioral Outcome 4 | | | | | | | | | |
Source: [Name for the Data Collection, Date. For instance, Follow-up surveys administered six to eight months after the program.]
Notes: [Anything to note about the analysis. See Table III.1 for a more detailed description of each measure and Chapter III for a description of the impact estimation methods.]
Table S.2. Sensitivity of impact analyses using data from [Survey follow-up period] to address the secondary research questions
Intervention compared with comparison | Benchmark approach difference | Benchmark approach p-value | [Name of sensitivity approach 1] difference | [Name of sensitivity approach 1] p-value | [Name of sensitivity approach 2] difference | [Name of sensitivity approach 2] p-value | [Name of sensitivity approach 3] difference | [Name of sensitivity approach 3] p-value | [Name of sensitivity approach 4] difference | [Name of sensitivity approach 4] p-value
Behavioral Outcome 1 | | | | | | | | | |
Behavioral Outcome 2 | | | | | | | | | |
Non-behavioral Outcome 1 | | | | | | | | | |
Non-behavioral Outcome 2 | | | | | | | | | |
Source: [Name for the Data Collection, Date. For example, follow-up surveys administered six to eight months after the program.]
Notes: [Anything to note about the analysis. See Table III.2 for a more detailed description of each measure and Section III for a description of the impact estimation methods.]
Author: Lauren Murphy
File Created: 2025-06-04