Instrument 8: Report template for impact evaluations

Sexual Risk Avoidance Education (SRAE) National Evaluation Overarching Generic
THE PAPERWORK REDUCTION ACT OF 1995

This collection of information is voluntary and will be used to provide the Administration for Children and Families with information to help refine and guide SRAE program development. Public reporting burden for the collection of information is estimated to average 32 hours per response, including the time for reviewing instructions, gathering and maintaining the data needed, and reviewing the collection of information. An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB control number. The OMB number and expiration date for this collection are OMB #: XXXX-XXXX, Exp: XX/XX/20XX. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Susan Zief [email protected].

Instrument 8: SRAE Impact Report Template





[Feel free to add a graphic or logo]

Evaluation of
[Intervention Name] in
[Geographic area]

Final Impact Report for

[Grantee Organization]

[Date]

Prepared by

[Evaluator Organization Authors]

Recommended Citation: [Authors]. (Year). Evaluation of [Intervention Name(s)] in [Geographic Area]. Evaluator location: evaluator organization.

Acknowledgements:

[In this space, please list contributors to this evaluation report, including any reviewers and editors you would like to acknowledge; feel free to acknowledge anyone critical to making the evaluation or the program implementation possible.]

Disclosure:

[In this space, please disclose any conflict of interest, financial or otherwise]

This publication was prepared under Grant Number [Insert Grant Number] from the Family and Youth Services Bureau within the Administration for Children and Families (ACF), U.S. Department of Health and Human Services (HHS). The views expressed in this report are those of the authors and do not necessarily represent the policies of HHS, ACF, or the Family and Youth Services Bureau.



Evaluation Abstract:

[Copy from your pre-existing abstract as available. Note: The Methods and Findings sections are new and should be completed as part of this report. In the Findings section, please present both implementation and impact findings, and weave those findings together.]

The Evaluation of [Intervention Name] in [Geographic Area]

Grantee

Grantee Name:

Project Lead:

Email address:

Evaluator

Evaluator’s Organization:

Evaluator Lead:

Email address:

Intervention Name

[Start writing here.]

Intervention Description

[Start writing here.]

Comparison Condition

[Start writing here.]

Comparison Condition Description

[Start writing here.]

Behavioral Outcome(s)

[Start writing here.]

Non-behavioral Outcomes

[Start writing here.]

Sample and Setting

[Start writing here.]

Research Design

[Start writing here.]

Data Collection

[Start writing here.]

Methods

[Start writing here.]

Findings

[Start writing here.]

The Evaluation of [Intervention Name] in [Geographic Area]

I. Introduction

A. Introduction and study overview

[Copy and paste text/Start writing here.]

B. Primary research question(s)

[Copy and paste text/Start writing here.]

C. Secondary research question(s)

[Copy and paste text/Start writing here.]

II. Programming for treatment and comparison groups

[Provide an overview of the section here.]

A. Description of program as intended

[Copy and paste text/Start writing here.]

B. Description of comparison condition

[Copy and paste text/Start writing here.]

III. Impact evaluation design

[Provide an overview of the section here.]

A. Identification and recruitment of the study participants

[Copy and paste text/Start writing here.]

B. Research design

[Copy and paste text/Start writing here.]

C. Data collection

[Copy and paste text/Start writing here.]

D. Measures

[Copy and paste text/Start writing here.]

[Copy and paste Tables III.1 and III.2 here.]

E. Study sample

[Copy and paste text/Start writing here.]

[Copy and paste Table III.3 (a or b) here.]

F. Baseline equivalence and sample characteristics

[Copy and paste text/Start writing here.]

[Copy and paste Table III.4 here.]

G. Methods

[Copy and paste text/Start writing here.]

IV. Implementation evaluation

[Copy and paste text/Start writing here.]

V. Study findings

A. Implementation study findings

[Copy and paste text/Start writing here.]

[Copy and paste Table V.1 here.]

B. Impact study findings

[Copy and paste text/Start writing here.]

[Copy and paste Tables V.2 and V.3 here.]

C. Additional analyses (if applicable – if not, delete this section)

[Copy and paste text/Start writing here.]

[Copy and paste any additional tables here.]

VI. Conclusion

A. Summary

[Copy and paste text/Start writing here.]

B. Limitations

[Copy and paste text/Start writing here.]

C. Discussion

[Copy and paste text/Start writing here.]

VII. References

[Copy and paste text/Start writing here.]

Appendix A: Logic model

[Paste logic model here]



Appendix B: Implementation Evaluation

[Copy and paste text/Start writing here.]

[Copy and paste Table B.1.]





Appendix C: Model specification

[Copy and paste text/Start writing here.]
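
[For illustration only: one common benchmark specification for a youth-level outcome with cluster-level assignment could be written as shown below. This is an assumption-laden sketch, not a required model; your actual specification should match your design.]

$$ y_{ij} = \beta_0 + \beta_1 T_j + X_{ij}'\gamma + \varepsilon_{ij} $$

[Here $y_{ij}$ is the outcome for youth $i$ in cluster $j$, $T_j$ indicates assignment to the intervention condition, $X_{ij}$ is a vector of baseline covariates, and $\varepsilon_{ij}$ is an error term adjusted for clustering at the level of assignment; $\beta_1$ is the estimated impact.]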







Appendix D: Data cleaning

[Copy and paste text/Start writing here.]





Appendix E: Implementation Evaluation Methods

[Copy and paste text/Start writing here.]





Appendix F: Intent-to-Treat analyses

[Copy and paste text/Start writing here.]





Appendix S: Sensitivity Analyses

[Copy and paste text/Start writing here.]

[Copy and paste Tables S.1 and S.2 here.]





Table III.1. Outcome measures used for primary impact analyses research questions (NOTE: this template includes an example in italics as a sample for you to consider for your own report)

| Behavioral outcome measure name | Source item(s) | Constructed measure | Timing of measure relative to program |
| Ever had sexual intercourse | Have you ever had sexual intercourse? | Dichotomous variable coded as 1 if answered yes, 0 if no, and missing otherwise. | 6 months after program ends |






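[For illustration only: if measures are constructed programmatically, a minimal sketch of the coding rule in Table III.1 might look like the following (Python with pandas; the column names are illustrative assumptions, not part of the template).]

```python
import pandas as pd

# Hypothetical survey responses; "ever_sex" is an assumed column name.
df = pd.DataFrame({"ever_sex": ["yes", "no", None, "yes"]})

# Code 1 if the youth answered yes, 0 if no, and missing (NaN) otherwise,
# matching the constructed measure described in Table III.1.
df["ever_had_intercourse"] = df["ever_sex"].map({"yes": 1, "no": 0})
print(df)
```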



Table III.2. Outcome measures used for secondary impact analyses research questions

| Outcome measure name | Source item(s) | Constructed measure | Timing of measure relative to program |
| Ever had sexual intercourse | Have you ever had sexual intercourse? | Dichotomous variable coded as 1 if answered yes, 0 if no, and missing otherwise. | 12 months after program ends |







Table III.3a. Cluster and youth sample sizes by intervention status (Only use for studies with cluster-level assignment; if your design uses individual-level assignment, use Table III.3b)

| Number of: | Time period | Total sample size | Intervention sample size | Comparison sample size | Total response rate | Intervention response rate | Comparison response rate |
| Clusters | | | | | | | |
| Clusters: At beginning of study | | 1c = (1a + 1b) | 1a | 1b | N/A | N/A | N/A |
| Clusters: At least one youth completed baseline survey | Baseline | 2c = (2a + 2b) | 2a | 2b | = 2c/1c | = 2a/1a | = 2b/1b |
| Clusters: At least one youth completed follow-up | Immediately post-programming | 3c = (3a + 3b) | 3a | 3b | = 3c/1c | = 3a/1a | = 3b/1b |
| Clusters: At least one youth completed follow-up | 6 months post-programming | 4c = (4a + 4b) | 4a | 4b | = 4c/1c | = 4a/1a | = 4b/1b |
| Clusters: At least one youth completed follow-up | 12 months post-programming | 5c = (5a + 5b) | 5a | 5b | = 5c/1c | = 5a/1a | = 5b/1b |
| Youth | | | | | | | |
| Youth in non-attriting clusters^a | | | | | | | |
| Youth: At time that clusters were assigned to condition | | 6c = (6a + 6b) | 6a | 6b | N/A | N/A | N/A |
| Youth: Who consented^b | | 7c = (7a + 7b) | 7a | 7b | = 7c/6c | = 7a/6a | = 7b/6b |
| Youth: Completed a baseline survey | Baseline | 8c = (8a + 8b) | 8a | 8b | = 8c/6c | = 8a/6a | = 8b/6b |
| Youth: Completed a follow-up survey | Immediately post-programming | 9c = (9a + 9b) | 9a | 9b | = 9c/6c | = 9a/6a | = 9b/6b |
| Youth: Included in the impact analysis sample at follow-up (accounts for item non-response)^c | Immediately post-programming | 10c = (10a + 10b) | 10a | 10b | = 10c/6c | = 10a/6a | = 10b/6b |
| Youth: Completed a follow-up survey | 6 months post-programming | 11c = (11a + 11b) | 11a | 11b | = 11c/6c | = 11a/6a | = 11b/6b |
| Youth: Included in the impact analysis sample at follow-up (accounts for item non-response)^c | 6 months post-programming | 12c = (12a + 12b) | 12a | 12b | = 12c/6c | = 12a/6a | = 12b/6b |

a For all rows in this section, do not include youth from clusters that dropped (attrited) over the course of the study. For example, if you randomly assigned 10 clusters (5 to each condition) and one intervention group cluster (e.g., a school) dropped from the study, you would only include youth from the 9 clusters that remained. Because the cluster-level response rates in the rows above already capture the dropped cluster, you do not need to count youth from lost clusters in your youth-level response rates.

b If consent occurred before assignment, delete this row. Add a note at the bottom of the table indicating that consent occurred before random assignment.

c See guidance in section III.E for defining your analytic sample(s).



Table III.3b. Youth sample sizes by intervention status (Only use for studies with individual-level assignment; if your design uses cluster-level assignment, use Table III.3a instead)

| Number of youth | Time period | Total sample size | Intervention sample size | Comparison sample size | Total response rate | Intervention response rate | Comparison response rate |
| Assigned to condition | | 1c = (1a + 1b) | 1a | 1b | N/A | N/A | N/A |
| Completed a baseline survey | Baseline | 2c = (2a + 2b) | 2a | 2b | = 2c/1c | = 2a/1a | = 2b/1b |
| Completed a follow-up survey | Immediately post-programming | 3c = (3a + 3b) | 3a | 3b | = 3c/1c | = 3a/1a | = 3b/1b |
| Included in the impact analysis sample at follow-up (accounts for item non-response)^a | Immediately post-programming | 4c = (4a + 4b) | 4a | 4b | = 4c/1c | = 4a/1a | = 4b/1b |
| Completed a follow-up survey | 6 months post-programming | 5c = (5a + 5b) | 5a | 5b | = 5c/1c | = 5a/1a | = 5b/1b |
| Included in the impact analysis sample at follow-up (accounts for item non-response)^a | 6 months post-programming | 6c = (6a + 6b) | 6a | 6b | = 6c/1c | = 6a/1a | = 6b/1b |

a See guidance in section III.E for defining your analytic sample(s).

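[For illustration only: a minimal sketch of the response-rate arithmetic in Tables III.3a and III.3b, using hypothetical counts keyed to the row formulas in Table III.3b (Python; none of these numbers come from an actual study).]

```python
# Hypothetical counts keyed to the row formulas in Table III.3b.
assigned = {"intervention": 200, "comparison": 200}   # row 1: 1a and 1b
baseline = {"intervention": 180, "comparison": 170}   # row 2: 2a and 2b

total_assigned = sum(assigned.values())               # 1c = (1a + 1b)
total_baseline = sum(baseline.values())               # 2c = (2a + 2b)

# Response rates: = 2c/1c, = 2a/1a, and = 2b/1b.
print(f"Total baseline response rate: {total_baseline / total_assigned:.1%}")
for group in assigned:
    rate = baseline[group] / assigned[group]
    print(f"{group.capitalize()} baseline response rate: {rate:.1%}")
```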


Table III.4. Summary statistics of key baseline measures for youth completing [Survey follow-up period]

| Baseline measure | Intervention proportion or mean (standard deviation) | Comparison proportion or mean (standard deviation) | Intervention versus comparison difference | Intervention versus comparison p-value of difference |
| Age or grade level | | | | |
| Gender (female) | | | | |
| Race/ethnicity | | | | |
| Hispanic | | | | |
| Non-Hispanic White | | | | |
| Non-Hispanic Black | | | | |
| Non-Hispanic Asian | | | | |
| Behavioral outcome measure 1 | | | | |
| Behavioral outcome measure 2 | | | | |
| Non-behavioral outcome measure 1 | | | | |
| Non-behavioral outcome measure 2 | | | | |
| Sample size | | | | |






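[For illustration only: evaluators often fill the difference and p-value columns with a two-sample test on each baseline measure. A minimal sketch follows (Python with scipy; the data are hypothetical, and designs with cluster-level assignment should instead follow the methods described in Section III.G).]

```python
import numpy as np
from scipy import stats

# Hypothetical baseline ages for each group; for illustration only.
intervention_age = np.array([15.2, 14.8, 15.5, 16.0, 15.1, 14.9])
comparison_age = np.array([15.0, 15.3, 14.9, 15.8, 15.6, 15.2])

# Difference in means and a two-sample t-test, as one simple way to
# populate the last two columns of Table III.4.
difference = intervention_age.mean() - comparison_age.mean()
t_stat, p_value = stats.ttest_ind(intervention_age, comparison_age)

print(f"Intervention vs. comparison difference: {difference:.2f}")
print(f"p-value of difference: {p_value:.3f}")
```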



Table V.1. Targets and findings for each measure used to answer implementation evaluation research questions (NOTE: example data included in italics. Please remove before completing the table)

| Implementation element | Research question | Measure | Target | Results |
| Fidelity | Were all intended program components offered and for the expected duration? | • Total number of sessions delivered • Average session duration, calculated as the average of the recorded session lengths (in minutes) | • 95 percent of groups to receive all 12 sessions • Average session duration will be at least 40 minutes | • 75 percent of groups received all 12 sessions • Average session duration was 35 minutes |
| Fidelity | What content did the youth receive? | • Total number of topics covered, calculated as the average of the total number of topics checked by each program facilitator in the daily fidelity tracking log or protocol | • 95 percent of groups to receive 90 percent of the topics | • 65 percent of groups received 90 percent of the topics; 45 percent of groups received 100 percent of the topics |
| Fidelity | Who delivered services to youth? | • Number and type of staff delivering services to study participants, such as the number of session facilitators • Percentage of staff who receive minimum training, calculated as the number of staff who received at least 20 hours of training divided by the total number of staff who delivered the program | • Three full-time health educators will deliver programming • All health educators to receive at least 20 hours of training each year | • A total of five staff were employed during the evaluation to fill three full-time health educator positions • 4 of 5 educators received at least 20 hours of training each year (average = 24.5 hours) |
| Fidelity | What were the unplanned adaptations to key program components? | • List of unplanned adaptations, such as a change in setting, sessions added or deleted, and components cut | • n/a | • 45 percent of educators skipped at least one component in Lessons 3 and 5 |
| Dosage | How often did youth participate in the program on average? | • Average number (or percentage) of sessions youth attended • Percentage of the sample attending the required or recommended proportion of sessions • Percentage of the sample that did not attend sessions at all | • n/a • 75 percent of youth to attend 75 percent of the program sessions • Less than 5 percent of the sample to receive none of the program | • Youth attended 8 sessions on average • 60 percent of youth attended 75 percent of the program sessions • 10 percent of the sample received none of the program |
| Quality | What was the quality of staff–participant interactions? | • Percentage of observed sessions with high-quality interactions, calculated as the percentage of observed interactions that study staff scored as “high quality” | • 90 percent of observed sessions to be implemented with high quality (rated as a 3.5 out of 4 on the quality scale) | • 87 percent of observed sessions implemented with high quality (rated as a 3.5 out of 4 on the quality scale) |
| Engagement | How engaged were youth in the program? | • Percentage of observed sessions with moderate participant engagement, calculated as the percentage of sessions in which study staff scored participants’ engagement as “moderately engaged” or higher | • 90 percent of observed sessions to be implemented with moderate to high engagement | • 85 percent of observed sessions implemented with moderate to high engagement |
| Context | What other pregnancy prevention programming was available to study participants? | • Percentage of the sample receiving pregnancy prevention programming from other providers, constructed from immediate post-survey data on experiences outside of the current program | • Less than 20 percent of youth to receive formal content outside of the program | • 35 percent of youth (50 percent in the control group and 15 percent in the treatment group) received other pregnancy prevention programming |
| Context | What external events affected implementation? | • Percentage and total number of sessions not delivered due to events in the community, if any | • n/a | • A hurricane in the community closed some programming sites for a week; sessions were made up for 60 percent of youth at those sites |


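[For illustration only: results like those in Table V.1 are typically tabulated from fidelity logs. A minimal sketch under assumed log fields follows (Python with pandas; field names, the data, and the 3-session threshold are illustrative assumptions — Table V.1's example program has 12 sessions).]

```python
import pandas as pd

# Hypothetical session log, e.g., exported from a fidelity tracking system.
log = pd.DataFrame({
    "group_id":       [1, 1, 1, 2, 2],
    "session_number": [1, 2, 3, 1, 2],
    "minutes":        [42, 38, 41, 45, 35],
})
required_sessions = 3  # assumption for this toy example

# Average session duration across all delivered sessions.
avg_duration = log["minutes"].mean()

# Percentage of groups that received every required session.
sessions_per_group = log.groupby("group_id")["session_number"].nunique()
pct_full_dose = (sessions_per_group >= required_sessions).mean() * 100

print(f"Average session duration: {avg_duration:.1f} minutes")
print(f"Groups receiving all {required_sessions} sessions: {pct_full_dose:.0f}%")
```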

Table V.2. Post-intervention estimated effects using data from [Survey follow-up time period] to address the primary research questions

| Outcome measure | Intervention proportion or mean (standard deviation) | Comparison proportion or mean (standard deviation) | Intervention compared with comparison difference (p-value of difference) |
| Behavioral Outcome 1 | | | |
| Behavioral Outcome 2 | | | |
| Behavioral Outcome 3 | | | |
| Behavioral Outcome 4 | | | |
| Sample Size | | | |




Source: [Name for the Data Collection, Date. For instance, follow-up surveys administered 12 to 14 months after the program.]

Notes: [Anything to note about the analysis. See Table III.1 for a more detailed description of each measure and Chapter III for a description of the impact estimation methods.]

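[For illustration only: the differences and p-values in Tables V.2 and V.3 come from the impact estimation methods described in Chapter III. One common approach is a regression of the outcome on treatment status with standard errors clustered at the level of assignment; a minimal sketch follows (Python with statsmodels; all variable names and data are hypothetical, and this is not the required method).]

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the analytic sample defined in Section III.E.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),    # 1 = intervention, 0 = comparison
    "age": rng.normal(15, 1, n),           # baseline covariate
    "cluster_id": rng.integers(0, 20, n),  # e.g., school, if assignment was clustered
})
df["outcome"] = rng.integers(0, 2, n)      # dichotomous behavioral outcome

# Linear probability model with cluster-robust standard errors.
model = smf.ols("outcome ~ treatment + age", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["cluster_id"]}
)
print(f"Estimated difference: {model.params['treatment']:.3f} "
      f"(p = {model.pvalues['treatment']:.3f})")
```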


Table V.3. Post-intervention estimated effects using data from [Survey follow-up time period] to address the secondary research questions

| Outcome measure | Intervention proportion or mean (standard deviation) | Comparison proportion or mean (standard deviation) | Intervention compared with comparison difference (p-value of difference) |
| Outcome 1 | | | |
| Outcome 2 | | | |
| Outcome 3 | | | |
| Outcome 4 | | | |
| Sample Size | | | |




Source: [Name for the Data Collection, Date. For instance, follow-up surveys administered 6 to 8 months after the program.]

Notes: [Anything to note about the analysis. See Table III.2 for a more detailed description of each measure and Chapter III for a description of the impact estimation methods.]




Table B.1. Data used to address implementation research questions (NOTE: example data included in italics. Please remove before completing the table)

| Implementation element | Research question | Measure | Data collection frequency/sampling | Data collectors |
| Fidelity | Were all intended program components offered and for the expected duration? | • Total number of sessions delivered • Average session duration, calculated as the average of the recorded session lengths (in minutes) | • All sessions delivered are captured in the MIS • Session length sampled weekly | • Program staff • Program staff |
| Fidelity | What content did the youth receive? | • Total number of topics covered, calculated as the average of the total number of topics checked by each program facilitator in the daily fidelity tracking log or protocol | • Content from all sessions is captured in the MIS | • Program staff |
| Fidelity | Who delivered services to youth? | • Number and type of staff delivering services to study participants, such as the number of session facilitators • Percentage of staff who receive minimum training, calculated as the number of staff who received at least 20 hours of training divided by the total number of staff who delivered the program | • Staff records • Training attendance records from all training activities are captured in the MIS | • Program staff • Program staff |
| Fidelity | What were the unplanned adaptations to key program components? | • List of unplanned adaptations, such as a change in setting, sessions added or deleted, and components cut | • As needed | • Program staff, project director, evaluation staff |
| Dosage | How often did youth participate in the program on average? | • Average number (or percentage) of sessions youth attended • Percentage of the sample attending the required or recommended proportion of sessions • Percentage of the sample that did not attend sessions at all | • Student attendance at all sessions is captured in the MIS (source for all three measures) | • Program staff |
| Quality | What was the quality of staff–participant interactions? | • Percentage of observed sessions with high-quality interactions, calculated as the percentage of observed interactions that study staff scored as “high quality” | • Convenience sample of 10% of classroom sessions selected for observation | • Evaluation staff |
| Engagement | How engaged were youth in the program? | • Percentage of observed sessions with moderate participant engagement, calculated as the percentage of sessions in which study staff scored participants’ engagement as “moderately engaged” or higher | • Random sample of 5% of classroom sessions selected for observation | • Evaluation staff |
| Context | What other pregnancy prevention programming was available to study participants? | • Percentage of the sample receiving pregnancy prevention programming from other providers, constructed from immediate post-survey data on experiences outside of the current program | • Post-program | • Evaluation staff |
| Context | What external events affected implementation? | • Percentage and total number of sessions not delivered due to events in the community, if any | • As needed | • Evaluation staff |



Table S.1. Sensitivity of impact analyses using data from [Survey follow-up period] to address the primary research questions

| Intervention compared with comparison | Benchmark approach difference | Benchmark approach p-value | Name of sensitivity approach 1 difference | Name of sensitivity approach 1 p-value | Name of sensitivity approach 2 difference | Name of sensitivity approach 2 p-value | Name of sensitivity approach 3 difference | Name of sensitivity approach 3 p-value | Name of sensitivity approach 4 difference | Name of sensitivity approach 4 p-value |
| Behavioral Outcome 1 | | | | | | | | | | |
| Behavioral Outcome 2 | | | | | | | | | | |
| Behavioral Outcome 3 | | | | | | | | | | |
| Behavioral Outcome 4 | | | | | | | | | | |

Source: [Name for the Data Collection, Date. For instance, follow-up surveys administered six to eight months after the program.]

Notes: [Anything to note about the analysis. See Table III.1 for a more detailed description of each measure and Chapter III for a description of the impact estimation methods.]

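[For illustration only: a sensitivity analysis re-estimates the benchmark impact under alternative assumptions and compares the results, as the table columns imply. A minimal sketch follows (Python with statsmodels; the data are hypothetical, and dropping baseline covariates is just one example of an alternative approach).]

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analytic sample for illustration only.
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "baseline_outcome": rng.integers(0, 2, n),
})
df["outcome"] = rng.integers(0, 2, n)

# Benchmark approach: covariate-adjusted model.
benchmark = smf.ols("outcome ~ treatment + baseline_outcome", data=df).fit()
# Sensitivity approach 1: unadjusted difference in means.
sensitivity = smf.ols("outcome ~ treatment", data=df).fit()

for name, m in [("Benchmark", benchmark), ("Sensitivity approach 1", sensitivity)]:
    print(f"{name}: difference = {m.params['treatment']:.3f}, "
          f"p = {m.pvalues['treatment']:.3f}")
```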


Table S.2. Sensitivity of impact analyses using data from [Survey follow-up period] to address the secondary research questions

| Intervention compared with comparison | Benchmark approach difference | Benchmark approach p-value | Name of sensitivity approach 1 difference | Name of sensitivity approach 1 p-value | Name of sensitivity approach 2 difference | Name of sensitivity approach 2 p-value | Name of sensitivity approach 3 difference | Name of sensitivity approach 3 p-value | Name of sensitivity approach 4 difference | Name of sensitivity approach 4 p-value |
| Behavioral Outcome 1 | | | | | | | | | | |
| Behavioral Outcome 2 | | | | | | | | | | |
| Non-behavioral Outcome 1 | | | | | | | | | | |
| Non-behavioral Outcome 2 | | | | | | | | | | |

Source: [Name for the Data Collection, Date. For example, follow-up surveys administered six to eight months after the program.]

Notes: [Anything to note about the analysis. See Table III.2 for a more detailed description of each measure and Section III for a description of the impact estimation methods.]


