PREIS Impact Report Tables Template
The Paperwork Reduction Act Statement: This collection of information is voluntary and will be used to document the results of your evaluation. Public reporting burden for this collection of information is estimated to average 25 hours per response, including the time for reviewing instructions, gathering and maintaining the data needed, and reviewing the collection of information. An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB control number. The OMB number and expiration date for this collection are OMB #: 0970-0531, Exp: XX/XX/XXXX.
Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Jean Knab at [email protected].
Table III.1. Outcome measures used for primary impact analysis research questions (NOTE: this template includes an example, in italics, as a sample for you to consider for your own report)
| Behavioral outcome measure name | Source item(s) | Constructed measure | Timing of measure |
| Ever had sexual intercourse | Have you ever had sexual intercourse? | Dichotomous variable coded as 1 if answered yes, 0 if no, and missing otherwise. | 6 months after program ends |
| | | | |
Table III.2. Outcome measures used for secondary impact analysis research questions
| Outcome measure name | Source item(s) | Constructed measure | Timing of measure |
| Ever had sexual intercourse | Have you ever had sexual intercourse? | Dichotomous variable coded as 1 if answered yes, 0 if no, and missing otherwise. | 12 months after program ends |
| | | | |
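The constructed measure shown in Tables III.1 and III.2 is a simple recoding rule. Purely as an illustrative sketch (the column name and response labels below are hypothetical placeholders, not part of the template), the recode might look like this:

```python
# Illustrative sketch only: one way to construct the dichotomous measure
# described in Tables III.1 and III.2. The column name "ever_sex_item" and
# the response labels are hypothetical placeholders.
import pandas as pd

survey = pd.DataFrame({
    "ever_sex_item": ["yes", "no", None, "yes"]  # raw survey responses
})

# Coded as 1 if the youth answered yes, 0 if no, and missing (NaN) otherwise.
survey["ever_had_sex"] = survey["ever_sex_item"].map({"yes": 1, "no": 0})

print(survey)
```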
Table III.3a. Cluster and youth sample sizes by intervention status (Only use for studies with cluster-level assignment; if your design uses individual-level assignment, use Table III.3b)
| Number of: | Time period | Total sample size | Intervention sample size | Comparison sample size | Total response rate | Intervention response rate | Comparison response rate |
| Clusters | | | | | | | |
| Clusters: At beginning of study | | 1c = (1a + 1b) | 1a | 1b | N/A | N/A | N/A |
| Clusters: At least one youth completed baseline survey | Baseline | 2c = (2a + 2b) | 2a | 2b | =2c/1c | =2a/1a | =2b/1b |
| Clusters: At least one youth completed follow-up | Immediately post-programming | 3c = (3a + 3b) | 3a | 3b | =3c/1c | =3a/1a | =3b/1b |
| Clusters: At least one youth completed follow-up | 6 months post-programming | 4c = (4a + 4b) | 4a | 4b | =4c/1c | =4a/1a | =4b/1b |
| Clusters: At least one youth completed follow-up | 12 months post-programming | 5c = (5a + 5b) | 5a | 5b | =5c/1c | =5a/1a | =5b/1b |
| Youth | | | | | | | |
| Youth in non-attriting clusters^a | | | | | | | |
| Youth: At time that clusters were assigned to condition | | 6c = (6a + 6b) | 6a | 6b | N/A | N/A | N/A |
| Youth: Who consented^b | | 7c = (7a + 7b) | 7a | 7b | =7c/6c | =7a/6a | =7b/6b |
| Youth: Completed a baseline survey | Baseline | 8c = (8a + 8b) | 8a | 8b | =8c/6c | =8a/6a | =8b/6b |
| Youth: Completed a follow-up survey | Immediately post-programming | 9c = (9a + 9b) | 9a | 9b | =9c/6c | =9a/6a | =9b/6b |
| Youth: Included in the impact analysis sample at follow-up (accounts for item non-response)^c | Immediately post-programming | 10c = (10a + 10b) | 10a | 10b | =10c/6c | =10a/6a | =10b/6b |
| Youth: Completed a follow-up survey | 6 months post-programming | 11c = (11a + 11b) | 11a | 11b | =11c/6c | =11a/6a | =11b/6b |
| Youth: Included in the impact analysis sample at follow-up (accounts for item non-response)^c | 6 months post-programming | 12c = (12a + 12b) | 12a | 12b | =12c/6c | =12a/6a | =12b/6b |
^a For all rows in this section, do not include youth from clusters that dropped (attrited) over the course of the study. For example, if you randomly assigned 10 clusters (5 to each condition), and one intervention group cluster (e.g., school) dropped from the study, you would only include youth in this section from the 9 clusters that did not drop from the study. Because the cluster-level response rate in the above rows already captures that dropped cluster, you do not need to count youth from the lost clusters in your youth-level response rates.
^b If consent occurred before assignment, delete this row. Add a note at the bottom of the table indicating that consent occurred before random assignment.
^c See guidance in Section III.E for defining your analytic sample(s).
Table III.3b. Youth sample sizes by intervention status (Only use for studies with individual-level assignment; if your design uses cluster-level assignment, use Table III.3a instead)
| Number of youth | Time period | Total sample size | Intervention sample size | Comparison sample size | Total response rate | Intervention response rate | Comparison response rate |
| Assigned to condition | | 1c = (1a + 1b) | 1a | 1b | N/A | N/A | N/A |
| Completed a baseline survey | Baseline | 2c = (2a + 2b) | 2a | 2b | =2c/1c | =2a/1a | =2b/1b |
| Completed a follow-up survey | Immediately post-programming | 3c = (3a + 3b) | 3a | 3b | =3c/1c | =3a/1a | =3b/1b |
| Included in the impact analysis sample at follow-up (accounts for item non-response)^a | Immediately post-programming | 4c = (4a + 4b) | 4a | 4b | =4c/1c | =4a/1a | =4b/1b |
| Completed a follow-up survey | 6 months post-programming | 5c = (5a + 5b) | 5a | 5b | =5c/1c | =5a/1a | =5b/1b |
| Included in the impact analysis sample at follow-up (accounts for item non-response)^a | 6 months post-programming | 6c = (6a + 6b) | 6a | 6b | =6c/1c | =6a/1a | =6b/1b |
^a See guidance in Section III.E for defining your analytic sample(s).
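The response-rate cells in Tables III.3a and III.3b are simple ratios of the counts in the corresponding rows (for example, the total baseline response rate is 2c divided by 1c). A minimal sketch of that arithmetic, using made-up counts purely for illustration:

```python
# Illustrative sketch only: the response-rate arithmetic used in
# Tables III.3a and III.3b. All counts below are made-up numbers.
assigned = {"intervention": 250, "comparison": 240}            # cells 1a, 1b
completed_baseline = {"intervention": 230, "comparison": 225}  # cells 2a, 2b

total_assigned = sum(assigned.values())            # 1c = 1a + 1b
total_baseline = sum(completed_baseline.values())  # 2c = 2a + 2b

rates = {
    "Total": total_baseline / total_assigned,                                       # =2c/1c
    "Intervention": completed_baseline["intervention"] / assigned["intervention"],  # =2a/1a
    "Comparison": completed_baseline["comparison"] / assigned["comparison"],        # =2b/1b
}

for group, rate in rates.items():
    print(f"{group} baseline response rate: {rate:.1%}")
```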
Table III.4. Summary statistics of key baseline measures for youth completing [Survey follow-up period]
| Baseline measure | Intervention proportion or mean (standard deviation) | Comparison proportion or mean (standard deviation) | Intervention versus comparison difference | Intervention versus comparison p-value of difference |
| Age or grade level | | | | |
| Gender (female) | | | | |
| Race/ethnicity | | | | |
| Hispanic | | | | |
| Non-Hispanic White | | | | |
| Non-Hispanic Black | | | | |
| Non-Hispanic Asian | | | | |
| Behavioral outcome measure 1 | | | | |
| Behavioral outcome measure 2 | | | | |
| Non-behavioral outcome measure 1 | | | | |
| Non-behavioral outcome measure 2 | | | | |
| Sample size | | | | |
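Table III.4 reports, for each baseline measure, the group proportions or means, their difference, and the p-value of that difference. As a minimal sketch only, for a single binary measure on simulated data (a two-sample t-test is used here purely for illustration; your report should follow the estimation methods described in Chapter III):

```python
# Illustrative sketch only: filling one row of Table III.4 for a binary
# baseline measure. Data are simulated and variable names are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
intervention = rng.integers(0, 2, size=200)  # e.g., gender (female = 1)
comparison = rng.integers(0, 2, size=190)

diff = intervention.mean() - comparison.mean()
t_stat, p_value = stats.ttest_ind(intervention, comparison, equal_var=False)

print(f"Intervention proportion: {intervention.mean():.3f} (SD {intervention.std(ddof=1):.3f})")
print(f"Comparison proportion:   {comparison.mean():.3f} (SD {comparison.std(ddof=1):.3f})")
print(f"Difference: {diff:.3f}, p-value: {p_value:.3f}")
```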
Table V.1. Targets and findings for each measure used to answer implementation evaluation research questions (NOTE: example data included in italics. Please remove before completing the table)
| Implementation element | Research question | Measure | Target | Results |
| Fidelity | Were all intended program components offered and for the expected duration? | | | |
| Fidelity | What content did the youth receive? | | | |
| Fidelity | Who delivered services to youth? | | | |
| Fidelity | What were the unplanned adaptations to key program components? | | | |
| Dosage | How often did youth participate in the program on average? | | | |
| Quality | What was the quality of staff–participant interactions? | | | |
| Engagement | How engaged were youth in the program? | | | |
| Context | What other pregnancy prevention programming was available to study participants? | | | |
| Context | What external events affected implementation? | | | |
Table V.2. Post-intervention estimated effects using data from [Survey follow-up time period] to address the primary research questions
| Outcome measure | Intervention proportion or mean (standard deviation) | Comparison proportion or mean (standard deviation) | Intervention compared with comparison difference (p-value of difference) |
| Behavioral Outcome 1 | | | |
| Behavioral Outcome 2 | | | |
| Behavioral Outcome 3 | | | |
| Behavioral Outcome 4 | | | |
| Sample Size | | | |
Source: [Name for the Data Collection, Date. For instance, follow-up surveys administered 12 to 14 months after the program.]
Notes: [Anything to note about the analysis. See Table III.1 for a more detailed description of each measure and Chapter III for a description of the impact estimation methods.]
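The difference and p-value columns in Tables V.2 and V.3 come from your impact estimation model, described in Chapter III. Purely as an illustrative sketch, and not the required method, a simple regression-based contrast on simulated data could be computed as follows:

```python
# Illustrative sketch only: an intervention-comparison difference with a
# p-value, of the kind reported in Tables V.2 and V.3. Data are simulated,
# and a simple linear probability model stands in for the study's actual
# estimation approach (e.g., covariate-adjusted, accounting for clustering).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
treatment = np.repeat([1, 0], [200, 190])                        # 1 = intervention group
outcome = rng.binomial(1, np.where(treatment == 1, 0.20, 0.28))  # hypothetical outcome rates

X = sm.add_constant(treatment)
model = sm.OLS(outcome, X).fit(cov_type="HC2")  # robust standard errors

print(f"Estimated difference: {model.params[1]:.3f}")
print(f"p-value: {model.pvalues[1]:.3f}")
```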
Table V.3. Post-intervention estimated effects using data from [Survey follow-up time period] to address the secondary research questions
| Outcome measure | Intervention proportion or mean (standard deviation) | Comparison proportion or mean (standard deviation) | Intervention compared with comparison difference (p-value of difference) |
| Outcome 1 | | | |
| Outcome 2 | | | |
| Outcome 3 | | | |
| Outcome 4 | | | |
| Sample Size | | | |
Source: [Name for the Data Collection, Date. For instance, follow-up surveys administered 6 to 8 months after the program.]
Notes: [Anything to note about the analysis. See Table III.2 for a more detailed description of each measure and Chapter III for a description of the impact estimation methods.]
Table B.1. Data used to address implementation research questions (NOTE: example data included in italics. Please remove before completing the table)
| Implementation element | Research question | Measure | Data collection frequency/sampling | Data collectors |
| Fidelity | Were all intended program components offered and for the expected duration? | | | |
| Fidelity | What content did the youth receive? | | | |
| Fidelity | Who delivered services to youth? | | | |
| Fidelity | What were the unplanned adaptations to key program components? | | | |
| Dosage | How often did youth participate in the program on average? | | | |
| Quality | What was the quality of staff–participant interactions? | | | |
| Engagement | How engaged were youth in the program? | | | |
| Context | What other pregnancy prevention programming was available to study participants? | | | |
| Context | What external events affected implementation? | | | |
Table S.1. Sensitivity of impact analyses using data from [Survey follow-up period] to address the primary research questions
| Intervention compared with comparison | Benchmark approach difference | Benchmark approach p-value | Name of sensitivity approach 1 difference | Name of sensitivity approach 1 p-value | Name of sensitivity approach 2 difference | Name of sensitivity approach 2 p-value | Name of sensitivity approach 3 difference | Name of sensitivity approach 3 p-value | Name of sensitivity approach 4 difference | Name of sensitivity approach 4 p-value |
| Behavioral Outcome 1 | | | | | | | | | | |
| Behavioral Outcome 2 | | | | | | | | | | |
| Behavioral Outcome 3 | | | | | | | | | | |
| Behavioral Outcome 4 | | | | | | | | | | |
Source: [Name for the Data Collection, Date. For instance, follow-up surveys administered six to eight months after the program.]
Notes: [Anything to note about the analysis. See Table III.1 for a more detailed description of each measure and Chapter III for a description of the impact estimation methods.]
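Tables S.1 and S.2 place the benchmark estimate alongside each alternative specification so readers can see whether the findings are stable. A minimal sketch of how those results might be collected for one outcome row (the approach names and numbers below are hypothetical examples, not a required set of sensitivity analyses):

```python
# Illustrative sketch only: organizing benchmark and sensitivity estimates
# for one outcome row of Table S.1 or S.2. The approach names and the
# (difference, p-value) pairs are hypothetical placeholders.
results = {
    "Benchmark approach": (-0.042, 0.081),
    "Sensitivity 1: no baseline covariates": (-0.038, 0.102),
    "Sensitivity 2: complete cases only": (-0.045, 0.074),
}

print(f"{'Approach':<40}{'Difference':>12}{'p-value':>10}")
for name, (difference, p_value) in results.items():
    print(f"{name:<40}{difference:>12.3f}{p_value:>10.3f}")
```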
Table S.2. Sensitivity of impact analyses using data from [Survey follow-up period] to address the secondary research questions
| Intervention compared with comparison | Benchmark approach difference | Benchmark approach p-value | Name of sensitivity approach 1 difference | Name of sensitivity approach 1 p-value | Name of sensitivity approach 2 difference | Name of sensitivity approach 2 p-value | Name of sensitivity approach 3 difference | Name of sensitivity approach 3 p-value | Name of sensitivity approach 4 difference | Name of sensitivity approach 4 p-value |
| Behavioral Outcome 1 | | | | | | | | | | |
| Behavioral Outcome 2 | | | | | | | | | | |
| Non-behavioral Outcome 1 | | | | | | | | | | |
| Non-behavioral Outcome 2 | | | | | | | | | | |
Source: [Name for the Data Collection, Date. For instance, follow-up surveys administered six to eight months after the program.]
Notes: [Anything to note about the analysis. See Table III.2 for a more detailed description of each measure and Chapter III for a description of the impact estimation methods.]