Appendix B: Instructions for the Impact Evaluation Final Report Template for HMRF Grant Recipients


Healthy Marriage and Responsible Fatherhood Local Evaluation Final Report Templates


OMB: 0970-0640








Instructions for the Impact Evaluation Final Report Template

Healthy Marriage and Responsible Fatherhood Grant Recipients

December 2024



According to the Paperwork Reduction Act of 1995, an agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a valid OMB control number. The valid OMB control number for this collection is ##-#; this number is valid through [DATE]. Public reporting burden for this collection of information is estimated to average 30 hours, including the time for reviewing instructions, gathering and maintaining the data needed, reviewing the collection of information, and revising it. This collection of information is voluntary for individuals, but the information is required from HMRF grant recipients to retain a benefit (Authority: 42 U.S.C. 603[a][2]).






Purpose of this document

The Administration for Children and Families (ACF), Office of Family Assistance (OFA) requires all Healthy Marriage and Responsible Fatherhood (HMRF) grant recipients with local impact evaluations funded by OFA to submit a final evaluation report. This document provides guidance on how to structure your final report so it is comprehensive and accessible. Many of the report sections draw directly from the analysis plans for the impact and implementation studies (healthy marriage and relationship education [HMRE] evaluations) or the updated evaluation plan (Responsible Fatherhood [RF] evaluations). This document refers to these materials as “plans.” The instructions refer to the relevant sections in the plans whenever possible. You can simplify your report writing by using and modifying text from your plans.

These instructions are in the form of an annotated outline that contains guidance for writing each section of the final report by following the report template (provided in a separate file). The outline describes: (1) the purpose of the section and the information you should discuss, (2) things to keep in mind when writing the section, (3) documents (like the plans) you can use to help you write the section, and (4) whether to include tables or figures and where to find shells for them.

You can write the report directly in the accompanying template (HMRF Impact Report Template.docx), which will make it easier for ACF and your evaluation technical assistance partner (ETAP) to review and will help ensure each report meets the accessibility requirements.1 See the box below for tips on making your report meet Section 508 compliance standards. The template outlines the report and gives you space to fill in each section. A separate file (HMRF Impact Report Table Shells.docx) provides some table shells that we recommend you use and paste in as you write the report. Using these shells will help you complete the report faster and include key details that will help readers understand your findings.

Tips for creating Section 508 accessible documents

Accessibility, or 508 compliance, means that a person with a disability can successfully navigate to and understand the information in a document. To reduce barriers for people with disabilities in accessing your report, please consider the following tips:

  • Headings: Please use the template provided and the built-in style elements to create headings.

  • Alternative (alt) Text: Alt text is read aloud to the person accessing the document by a screen reader. Alt text is required for any graphic (including images, charts, equations, and diagrams) to clearly describe an included image and should include any text that is part of the graphic. If the graphic is sufficiently described in the body of the text, then the chart title as alt text would be sufficient.

  • Tables: Please use the table shells provided as these have been created to meet accessibility standards. Please do not merge cells and avoid blank cells. Each table header should be repeated at the top of each subsequent page. Please don’t allow rows to split across pages.

  • Color contrast: Use dark text colors on light backgrounds (or vice versa) to provide sufficient contrast. Do not use colors as the only way to convey important information or content.

For more information and tips, please review ACF’s tip sheet on 508 compliance.2

Here are some additional recommendations for your final impact report:

  1. Organize the final report so it is about 30-40 double-spaced pages or 15-20 single-spaced pages, not counting the tables, figures, references, and appendices. Appendices should be used for additional tables and technical details, as explained at the end of this outline.

  2. The report should be written for all audiences. Write as if the audience knows nothing about the intervention or the evaluation. The report should provide enough detail for readers to understand the intervention and its evaluation, and it should be free of project- or intervention-specific jargon and abbreviations.

  3. Reach out to your ETAP with questions about the report guidance or your approach as you begin to work on your report. Your ETAP is available and eager to address your questions quickly. Getting questions resolved early in the process will simplify the review process at the end of the grant period.

  4. Visit https://www.acf.hhs.gov/ofa/programs/healthy-marriage-responsible-fatherhood/data-reports for examples of final reports produced for the 2015 cohort of HMRF grant recipients.

How to submit the report

  1. Please submit a report that you believe is ready for review by your ETAP and family assistance program specialist (FPS) by March 31, 2025. Please use this naming convention for your file when submitting your report: [Grant recipient name] [Grant Project type, i.e. FF/FW/R4L] Impact Evaluation Final Report.docx. Please send a Word version of the document, not a PDF.

  2. Your FPS and ETAP will review the final report, provide comments and suggest edits, and return it to you for revisions. Ideally, at least a few other reviewers will have read and edited it before you submit it; this should minimize the number of editorial comments your FPS and ETAP will need to provide. Their goal is to focus on content and technical details rather than presentation.

  3. Please email your final report, with the requested revisions addressed, to your FPS and ETAP when it is ready, but no later than August 29, 2025. Your FPS and ETAP will conduct final reviews of the report, and your FPS will send you their final approval by the end of the award period in September 2025.



Instructions for completing the impact report template

X. Cover Page and Front Matter

The cover page should include the title of the report, all authors, and author affiliation(s).

On page ii, give the recommended citation for this report. On page iii, list acknowledgements and disclose any conflicts of interest—financial or otherwise. For examples of how to identify and report on a conflict of interest, please see the International Committee of Medical Journal Editors and the SAGE Publications policy guidelines for authors. Please note: if the evaluation team is not completely independent from the intervention (program) team (that is, if they are not different organizations with completely separate leadership and oversight), this is a conflict of interest that must be documented.

Finally, this page must include the attribution to ACF:

This publication was prepared under Grant Number [Insert Grant Number] from the Office of Family Assistance (OFA) within the Administration for Children and Families (ACF), U.S. Department of Health & Human Services (HHS). The views expressed in this report are those of the authors and do not necessarily represent the views or policies of HHS, ACF, or OFA.

XX. Structured Abstract

Purpose

Starting on page iv, provide a summary of the final report.

Instructions

In no more than 350 words, summarize:

  • Objective of the intervention and the impact study

  • Study design

  • Main result(s)

  • Key conclusions

The summary of the study design should briefly note whether the study was a randomized controlled trial (RCT) or quasi-experimental design (QED); the number of sites that participated in the study; the size of the final analytic sample (that is, the sample that the final impact estimates are based on); and the timing of the follow-up survey relative to the baseline survey.

The summary of the main results should include only findings that address the primary research questions of the implementation or impact analysis and not those that address secondary research questions, unless these findings are essential for drawing the key conclusion(s).

Documents to reference

HMRE evaluations: impact analysis plan

RF evaluations: updated evaluation plan

Tables or figures

None





I. Introduction

This section describes the goals of the study and the motivation behind it and gives a high-level overview of the research questions.

A. Study overview

Purpose

Orient the reader to the study.

Instructions

In one to two pages:

  1. Explain the motivation for this intervention and why responsible fatherhood or marriage and relationship education are important for the local community under study.

  2. Briefly describe the intervention being studied.

  3. Explain the motivation for conducting this impact evaluation.

  4. Briefly summarize how this current study compares to earlier research and adds to the knowledge base about these kinds of interventions (you will add details on the intervention later).

Tip: This section should be a brief overview of the study. Other sections of the report will provide detailed information on the intervention, sample, methods, and findings in the study. The reader should understand why the intervention focused on the population under study and the motivation for selecting the chosen intervention.

Documents to reference

All grant recipients: Grant application and evaluation plan

HMRE evaluations: Implementation and impact analysis plans

RF evaluations: Updated evaluation plan

Tables or figures

None

B. Primary research question(s)

Purpose

Articulate the primary research questions and the related hypotheses about the intervention’s impacts on outcomes reflecting healthy marriage or responsible fatherhood.

Instructions

This section should present the primary research question(s). A bulleted or numbered list is fine.

If the research questions described in the final report are not the same ones you included in your impact analysis plan or updated evaluation plan, explain the reasons for the difference in the final report, along with any other deviations from your plan, after discussing these deviations with your FPS and ETAP.

Reminder: The primary research question(s) focus on the impact of the intervention. Each question addresses a specific outcome at a specific time point, connecting the outcome(s) and time point(s) to the intervention’s logic model and theory of change. There should be no more than five primary research questions, to minimize the likelihood of false positives.

Documents to reference

HMRE evaluations: impact analysis plan (Section A.1)

RF evaluations: Section 1.1 of the evaluation plan

Tables or figures

None





C. Secondary research question(s), as applicable

Purpose

Outline additional (non-primary) research questions and related hypotheses for your intervention as appropriate.

Instructions

This section should present all non-primary research question(s). In your analysis plan, these are listed as secondary research questions and are considered optional, exploratory analyses. It is fine to present these in a bulleted or numbered list. Note and explain any deviations from the secondary research question(s) in your analysis plan.

Reminder: The secondary research question(s) can examine, for example, (1) outcomes specified in the primary research question but at a different point in time than the ones in the primary analysis (such as immediately after the end of the intervention); (2) other outcomes that might be influenced by the intervention, such as precursors to the healthy relationship outcomes of primary interest; (3) the relationships between mediating variables and outcomes, such as the relationship between dosage or participation and outcomes; (4) moderator analyses, including the relationship between baseline characteristics and impacts; and/or (5) impact analyses of primary outcomes using methods that impute outcomes for individuals missing a survey.

Documents to reference

HMRE evaluations: impact analysis plan (Section A.2)

RF evaluations: Section 1.2 of the evaluation plan

Tables or figures

None

II. Intervention and Counterfactual Conditions

Begin with a one- or two-paragraph overview of this section, highlighting that it will describe the focal population(s), the intended program (or intervention services), the services that will be the comparison to the intervention, and the research questions associated with these services.

A. Focal population

Purpose

Briefly describe the focal population.

Instructions

Describe the focal population(s); that is, provide information on the characteristics of the population that each component of the intervention intends to serve, such as age, gender, marital status, and socioeconomic status. An example of this would be, “The component is designed to be delivered to couples with low incomes who are in a romantic relationship and have children younger than age 18.”

Tip: The study’s eligibility criteria for enrollment may be a good place to start.

Documents to reference

HMRE evaluations: impact analysis plan (Section B.1)

RF evaluations: Sections 1, 1.2, and 9 of your evaluation plan

Tables or figures

None





B. Description of program as intended

Purpose

Summarize the program being tested.

Instructions

This section should describe how the program was intended to be implemented (that is, the services the intervention group was supposed to receive). You can draw from the description used in your plans. Discuss the program components, content, and intended implementation—including the location or setting, duration, dosage, and supplementary components—and staffing (including the education and training of staff).

Please note that in Section D of this chapter you will discuss the intervention services participants actually received in this study.

Documents to reference

HMRE evaluations: impact analysis plan (Section B.2)

RF evaluations: Section 1.2 of your evaluation plan

Tables or figures

It is often useful for the reader to see summarized information about the program in a table (for example, presenting the content, dosage and schedule, planned mode of delivery, and focal population for each intervention component).

Please refer to Tables II.1 and II.2 in the table shells document to summarize information on the intervention and counterfactual components in a table. (HMRE evaluations: These are the same as Tables 3 and 4 in the template for your impact analysis plan.)

C. Description of counterfactual condition as intended

Purpose

Summarize the counterfactual condition being examined.

Instructions

This section should describe the comparison condition as intended. Later, in Section V.B on implementation findings, you will discuss the actual experiences of the comparison group in the study, including whether the comparison group received any services that were similar to the services offered to the intervention group. If the intervention and comparison conditions shared common elements, please highlight the similarities and differences in services received, and consider adding a table to help the reader compare the content.

Typically, the comparison will fit into one of the following categories:

  1. A no-services or wait-listed comparison group. In this instance, you should describe the other HMRF programs or related services that are available to this group within the community.

  2. A “business-as-usual” comparison group. In this instance, you should describe what services or programs are available to this group. For example, the comparison group might have received a standard HMRF program, whereas the treatment group received an enhanced version with additional sessions and content or presented in a different format. If the comparison group received a defined program or intervention, describe what they were offered and how it was supposed to be implemented. Like the description of the intervention condition above, this discussion should include the intended counterfactual components, dosage, content, and delivery. You do not need to discuss the theory of change here.

Documents to reference

HMRE evaluations: impact analysis plan (Section B.3)

RF evaluations: Section 7 of the evaluation plan

Tables or figures

Please refer to Tables II.1 (Description of intended intervention and counterfactual components and focal populations) and II.2 (Staff characteristics, education, training, and development to support intervention and counterfactual components) in the table shells document to summarize intervention and counterfactual components in tables. HMRE evaluations: These are the same as Tables 2 and 3 in the template for your impact analysis plan.

D. Implementation research questions about the intervention and counterfactual conditions

Purpose

Outline the main research questions about your implementation analysis.

Instructions

This section should present the research questions and related hypotheses addressed in your implementation analysis. In most cases, this analysis will address key aspects of engagement, dosage, and fidelity of the implementation of the intervention you are testing. Implementation measures, analysis methods, and findings will be discussed in Sections III.C, IV.D, and V.B, respectively.

Documents to reference

HMRE evaluations: implementation analysis plan (Section A and Table 1)

RF evaluations: Section 1 of the evaluation plan

Tables or figures

None

III. Study Design

Start with a one-paragraph introduction highlighting that this section will provide an overview of the study design, sample, and data collection. Please include information about whether and where your study was registered (for example, whether it was registered at clinicaltrials.gov or Open Science Framework).

A. Evaluation enrollment and assignment to study conditions

Purpose

Describe how members of the focal population became part of the impact study sample and the research design (RCT or QED) used to assign participants to the treatment/intervention or comparison conditions.

Instructions

This section should provide information on the following:

  1. Recruitment and enrollment targets for the study’s intervention and comparison groups

  2. Location of recruitment by intervention and comparison groups

  3. Participant eligibility criteria

  4. Special recruitment and enrollment procedures, if any

  5. Consent process

If the study is an RCT:

Describe the following about the random assignment process:

  1. Unit of randomization

  2. Who randomly assigned units to intervention or comparison conditions, and when, how, and under what circumstances that was done

  3. Method of randomization and whether randomization was done all at once or on a rolling basis

  4. Stratification/blocking (if applicable)

  5. Planned probability of assignment to the intervention group

  6. If applicable, any subsampling that took place after random assignment, the reason for it, the criteria you used, and how you implemented the subsampling
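As context for items 3 through 5, the mechanics of one common randomization method can be sketched in code. The example below is a hypothetical illustration of permuted-block random assignment within a single stratum, suitable for rolling enrollment; the participant IDs, block size, seed, and assignment probability are all invented for the example and are not a prescription for any study.

```python
import random

def block_randomize(ids, block_size=4, p_intervention=0.5, seed=20241231):
    """Permuted-block random assignment: within each block, a fixed share of
    slots is 'intervention', so the allocation ratio holds throughout rolling
    enrollment. Illustrative sketch only."""
    rng = random.Random(seed)                      # fixed seed for reproducibility
    n_treat = int(block_size * p_intervention)     # intervention slots per block
    assignment = {}
    for start in range(0, len(ids), block_size):
        block = ids[start:start + block_size]
        labels = ["intervention"] * n_treat + ["comparison"] * (block_size - n_treat)
        rng.shuffle(labels)                        # randomly order slots in the block
        for pid, label in zip(block, labels):
            assignment[pid] = label
    return assignment

# Twelve hypothetical participant IDs, assigned in blocks of four
arms = block_randomize([f"P{i:03d}" for i in range(12)])
print(sum(v == "intervention" for v in arms.values()))  # → 6
```

Reporting the block size, the stratification variables (if any), and whether assignment was rolling, as the list above asks, lets reviewers verify that the realized allocation matched the planned probability of assignment.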

If the study is a QED:

Describe the following:

  1. The process used to identify and form the treatment and comparison groups, including whether you assigned clients or groups of clients to the treatment or comparison group. Specify when this assignment procedure took place relative to the timing of obtaining consent and collecting baseline data.

  2. Indicate that Section IV.B of this report includes information about how you assessed baseline equivalence for the final QED analytic sample used to estimate the main findings (the ones based on the primary research questions).

  3. If you used an administrative data set to select the comparison group, describe the source of the data and the criteria for identifying people similar to the clients in the treatment group, including characteristics and variables used to create comparable groups.

  4. If your study was originally an RCT but you had to use a QED approach such as propensity score matching or weighting to construct a comparison group (because of high attrition and lack of baseline equivalence or some other issue with random assignment), please state that in this section. Also indicate in this section that Appendix C provides details of the original RCT design, including attrition rates and baseline equivalence.

Documents to reference

HMRE evaluations: impact analysis plan (Section C.1)

RF evaluations: Sections 7 and 9 of the evaluation plan

Tables or figures

None

B. Outcome measures

Purpose

Describe how you used survey or other data to construct the outcomes of interest for the primary and secondary research questions.

Instructions

Define the outcomes you are examining in the primary and secondary research questions. Briefly explain how you constructed each outcome measure to analyze. If you used more than one item or variable to construct a measure, describe the survey items you used and how you coded them.

These outcomes should map to the primary and secondary research questions, respectively. Include the time period(s) you used to assess impacts for these questions. Whenever applicable and possible, provide the properties of the outcome measures, such as inter-rater reliability and internal consistency.

In Appendix D, you can give the details about how you cleaned data to prepare them for constructing the outcome measures.

Documents to reference

HMRE evaluations: impact analysis plan (Section A.3 and Tables 1 and 2).

RF evaluations: Section 10.1 of the evaluation plan

Tables or figures

Use Tables III.1 (Outcome measures used to answer impact analysis primary research questions) and III.2 (Outcome measures used to answer impact analysis secondary research questions) from the table shells document. Do not include Table III.2 if there are no secondary research questions. The templates include examples of descriptions of outcome measures in italics.



Box 1. Instructions for completing Tables III.1 and III.2

Tables III.1 and III.2 are the same as or similar to tables completed for your evaluation and analysis plans (HMRE evaluations: Tables 1 and 2 in Section A.3 of the impact analysis plan; RF evaluations: Section 10.1 of the evaluation plan) and give the following information:

  • “Outcome measure name” column: Give the name of the outcome that will be used throughout the report, in both text and tables.

  • “Description of outcome measure and its properties” column: Describe the outcome measure and give details on any items you used to construct it.

  • Important: If an outcome is a composite of multiple items (for example, a scaled measure based on five survey items), please report its Cronbach’s alpha (a measure of internal consistency). See the example in Tables III.1 and III.2. If the outcome is a published measure or scale, please provide the name of the measure.

  • “Source” column: Document the source for each measure. If all measures in the table are from the same source, please delete this column and add the source as a note at the bottom of the table.

  • “Timing of measure” column: Indicate the amount of time that has passed since the baseline survey was administered.

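For grant recipients reporting internal consistency under the “Important” bullet above, the standard Cronbach’s alpha calculation can be sketched in a few lines. This is an illustrative sketch only; the five-respondent, five-item scores below are invented for the example, and it assumes complete item-level data.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondents, each a list of item scores."""
    k = len(items[0])                                    # number of items in the scale
    item_vars = [variance(col) for col in zip(*items)]   # sample variance of each item
    total_var = variance([sum(row) for row in items])    # variance of the summed scale
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical data: five respondents answering a five-item scaled measure
scores = [
    [4, 3, 4, 5, 4],
    [2, 2, 3, 2, 2],
    [5, 4, 4, 5, 5],
    [3, 3, 2, 3, 3],
    [4, 4, 5, 4, 4],
]
print(round(cronbach_alpha(scores), 2))  # → 0.94
```

Values of alpha closer to 1 indicate higher internal consistency; report the computed value alongside the scale description in Table III.1 or III.2.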

C. Implementation measures

Purpose

Describe how you used various data elements to operationalize the implementation measures.

Instructions

Briefly describe how you constructed each implementation measure. Use a table to link the description of measures to the implementation research questions.

Documents to reference

HMRE evaluations: implementation analysis plan (Section C and Table 3)

RF evaluations: Sections 10.1 and 14 of the evaluation plan

Tables or figures

Use Table III.3 from the table shells document. (HMRE evaluations can also use the information from Table 3 in their implementation analysis plan). The template for Table III.3 includes examples, in italics, of implementation research questions, measures, sources, and the timing and frequency of data collection and the party responsible for it.

In the examples in the table template, the word “clients” is used for simplicity. You should replace “clients” with “intervention group members” or “comparison group members” so that the implementation research questions are addressed separately for the treatment and comparison groups. Also, adapt the text in the example research questions to describe the intervention services received by the intervention group and any alternative or similar services received by the comparison group during the evaluation period (enrollment through the final follow-up interview), based on the data you collected through surveys and nFORM.

D. Data collection

Provide a brief introduction to this section on data collection, indicating you will first discuss data collection for the impact evaluation and then the implementation evaluation.



1. Impact study data collection

Purpose

Describe how you obtained information from HMRF program participants enrolled in the evaluation.

Instructions

Describe how you collected the data. Include the following key features of the data collection for each study group (treatment and comparison):

  • Data sources

  • Timing of each data collection point

  • Modes and methods of collecting data at each data collection point

  • Who collected the data (for example, intervention staff or evaluation staff)

  • Overall process for data collection.

Provide details on differences between intervention and comparison conditions in who collected the data and the timing, mode, and processes used for data collection, as appropriate. Be sure to note any changes made to data collection during the evaluation and document the rationale behind those changes.

Documents to reference

HMRE evaluations: impact analysis plan (Section C.2)

RF evaluations: Section 10.3 of the evaluation plan

Tables or figures

A table can complement the text discussion to help succinctly summarize the features of the data collection process. If you include a table on the data collection process, mention the table in the main body of the report when you describe data collection, and include it in the appendix. See the supplementary table shell (Table B.1) in the table shells document. HMRE evaluations can use the information in Table 5 from your impact analysis plan to populate this table.



2. Implementation data collection

Purpose

Document the sources you used to conduct the implementation analysis.

Instructions

Describe the data collected for the implementation study. Using your research questions as a guide (from Section II.D in this final report), discuss the data sources and link them to specific research questions and aspects of implementation (for example, dosage and fidelity). What data did you collect for each aspect of implementation, and how? Who was responsible for collecting the data?

Documents to reference

HMRE evaluations: implementation analysis plan (Section B and Table 2)

RF evaluations: Section 10.3 of the evaluation plan

Tables or figures

If you collected data from a variety of sources, a table could help organize what you present in this section. If you include a table, please mention the table in the main body of the report and include it in the appendix. See the supplementary table shell (Table B.2) in the table shells document for an example. HMRE evaluations can use the information in Table 2 from their implementation analysis plan to populate this table.

IV. Analytic Methods

Provide a brief introduction to this chapter, indicating that it will describe how you constructed the sample(s) used for the main impact analysis (the analytic sample[s]), how you assessed baseline equivalence of the treatment and comparison groups, and the estimation approach(es) you used to address the primary research questions.

A. Analytic sample

Purpose

Describe the composition of the analytic sample(s) you used to estimate the intervention’s impacts.

Instructions

In this section, describe how you handled all types of missing data: item nonresponse (for example, when certain items in a survey were left blank), unit nonresponse (for example, when individuals did not participate in the 12-month follow-up survey), and attrition (that is, when participants dropped out of the study). Describe how you arrived at the analytic sample(s) that you used in the impact analyses:

  • Item nonresponse. Describe the level of item nonresponse for the outcome measures you are using in the impact analyses (primary and secondary research questions) and the method(s) you used to address missing data at the item level. More details on how you addressed item nonresponse and prepared the data for the analysis can be included in Appendix D; refer to that appendix here.

  • Unit nonresponse and attrition. This section should also clearly show how many participants (for example, individuals or couples) are in your analytic sample. To do this, you should first describe the flow of participants, from assignment to the intervention and comparison conditions through your final data collection timepoint. This discussion should account for all forms of data loss, factoring in non-consent, survey nonresponse (or unit nonresponse), attrition (leaving the study), and additional exclusions the research team made. For HMRE evaluations, please use the CONSORT diagram included in your analysis plan (updated with your final enrolled sample) to prepare this description. For RF evaluations, please use the template shared with you during analysis planning to update or complete the CONSORT diagram for your evaluation and use it to prepare this description.

Clearly report overall and differential (between the intervention and comparison groups) attrition from the initial assigned sample. You should also note what percentage of individuals in the analytic sample are crossovers (individuals assigned to the comparison group who actually received intervention services), and report any differences in attrition by outcome. If you conducted an RCT, include a statement that categorizes your evaluation as having high or low attrition at each follow-up time point. ACF follows the What Works Clearinghouse (WWC) standards for computing overall and differential attrition and for determining whether or not there was high attrition in the evaluation. ACF recommends use of the cautious boundary for all evaluations (with the exception of evaluations serving youth in schools during the regular school day, which can use the optimistic boundary) to determine whether the evaluation had high attrition.
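To make the attrition calculations concrete, here is a minimal sketch of overall and differential attrition as the WWC defines them (the share of the assigned sample that is not in the analytic sample, and the absolute difference in that share between groups). The sample counts are hypothetical, and the sketch deliberately does not reproduce the WWC attrition boundaries, which must be looked up in the WWC standards themselves.

```python
def attrition_rates(assigned_t, analyzed_t, assigned_c, analyzed_c):
    """Overall and differential attrition, in percentage points, following the
    WWC definitions: attrition is the share of the assigned sample not analyzed."""
    rate_t = 1 - analyzed_t / assigned_t          # intervention group attrition
    rate_c = 1 - analyzed_c / assigned_c          # comparison group attrition
    overall = 1 - (analyzed_t + analyzed_c) / (assigned_t + assigned_c)
    differential = abs(rate_t - rate_c)
    return round(100 * overall, 1), round(100 * differential, 1)

# Hypothetical sample: 400 assigned / 300 analyzed in the intervention group,
# 400 assigned / 340 analyzed in the comparison group
print(attrition_rates(400, 300, 400, 340))  # → (20.0, 10.0)
```

The resulting pair (overall, differential) is what you compare against the cautious or optimistic boundary in the WWC standards to categorize the evaluation as having high or low attrition.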

Documents to reference

HMRE evaluations: impact analysis plan (Sections D.1 and D.2) and CONSORT diagram (Section C.3, updated with final sample numbers)

RF evaluations: Section 14 of the evaluation plan

Tables or figures

(1) To support the discussion, you may choose to include your final CONSORT diagram(s) with details of the final sample in Appendix B.

(2) In the body of the report, include either Table IV.1a or Table IV.1b (in the table shells document). Use the individual-level (Table IV.1a) or cluster-level (Table IV.1b) sample flow table, whichever is appropriate for the study design, and complete only one table for this section. Please refer to Box 2 and Box 3 for instructions on filling out the tables.



Box 2. Instructions for completing Table IV.1a

  • The purpose of this table is to clearly present the sample sizes and response rates for participants in individual-level assignment studies.

  • The italicized text in the shell for Table IV.1a highlights how to calculate total sample sizes and response rates given other information in the table. Italicized text in the first column also provides guidance. Please clearly indicate the timing of the follow-up surveys relative to program exit. For example, “First follow-up: three months after program exit” and “Second follow-up: 12 months after program exit.”

  • To describe the sample from more than two follow-up periods, please add rows for the additional follow-ups as needed.

  • In the columns “Total sample size,” “Intervention sample size,” and “Comparison sample size,” enter the number of individuals who were assigned to the condition in the “Assigned to condition” row. In the following rows, enter the number of individuals who completed the survey.

  • In the columns “Total response rate,” “Intervention response rate,” and “Comparison response rate,” please make and report the calculations indicated by the italicized formula.

Note: The denominator for the response rate calculation will be the numbers entered in the sample size columns in the “Assigned to condition” row.

  • Indicate the timing of the follow-up data collection in the parentheses (for example, use “12 months after end of intervention” to replace “timing” in the parentheses).

  • The row “Contributed to first follow-up survey” and, if applicable, the corresponding row for a second follow-up correspond to the study participants who completed the follow-up survey data collection, without considering item nonresponse or any analysis restrictions.

  • For each outcome measure examined in the primary questions:

  • In the row “Contributed to first follow-up (accounts for item nonresponse and any other analysis restrictions)” and, if applicable, the corresponding row for a second follow-up, you may have different sample sizes for two outcomes of interest because of different rates of missing data for each of the outcomes.
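The response rate calculation described in Box 2 can be illustrated with a short sketch (the counts below are hypothetical):

```python
def response_rate(completed, assigned):
    """Response rate for a row of Table IV.1a: survey completers
    divided by the number assigned to the condition, as a percentage."""
    return 100 * completed / assigned

# Hypothetical example: 160 of the 200 individuals assigned to the
# intervention condition completed the first follow-up survey.
rate = response_rate(160, 200)  # 80.0 percent
```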



Box 3. Instructions for completing Table IV.1b

Please use this version if your study enrolled couples or family groups, or if groups such as schools or classrooms were randomly assigned.

  • The purpose of this table is to clearly present the sample sizes and response rates for both individual and cluster-level assignment studies.

  • The table is split into two sections. The top section focuses on cluster sample sizes and response rates. The bottom section focuses on individual sample sizes and response rates.

  • In the columns “Total sample size,” “Intervention sample size” and “Comparison sample size”:

  • In the top section, enter the number of clusters at the beginning of the study that were assigned to condition in the “Clusters: at beginning of study” row. In the next three rows, enter the number of clusters in which at least one individual completed the relevant survey. If there are more than two follow-up periods, you will need to add extra rows.

  • In the bottom section, start with the number of individuals in non-attriting clusters. For all rows in this section, exclude individuals who were in clusters that dropped (attrited) from the study. For example, if you randomly assigned 10 clusters (five to each condition), and one intervention group cluster dropped from the study, you would include only people from the nine clusters that did not drop from the study (excluding individuals from the cluster that left the intervention group). List how many of these individuals (in non-attriting clusters) were present in the clusters at the time of assignment in the “Individual: At time that clusters were assigned to condition” row, and then list how many of these individuals completed the relevant surveys in the rows beneath.

  • For each row, the value entered in the “Total sample size” column should be the sum of the “intervention sample size” and “comparison sample size” (this calculation is shown in italics). In the columns “Total response rate,” “Intervention response rate,” and “Comparison response rate,” please conduct the calculations indicated by the italicized formula.

  • Note that for the top section (about clusters), the denominator for the response rate calculations will be the numbers entered in sample size columns in the “Clusters: At beginning of study” row. For the bottom section (about individual participants), the denominator for the response rate calculations will be the numbers entered in the sample size columns in the “Individual: At the time that clusters were assigned to condition” row.

  • In the row for “Individual: who consented,” if consent was given before assignment, delete this row, and in the row for “Individual: at time that clusters were assigned to condition,” insert the number of individuals in the non-attriting clusters who provided consent. Add a note at the bottom of the table indicating that consent was given before random assignment.



B. Baseline equivalence and characteristics of the sample

Purpose

Discuss how you assessed baseline equivalence for the analytic sample, present the results of the assessment, and describe the characteristics of the sample.

Instructions

If your study’s design is an RCT with high attrition or a QED, you must demonstrate baseline equivalence of the intervention and comparison groups in the analytic sample(s) and describe how you did that in this section. As a best practice, even low-attrition RCTs should demonstrate baseline equivalence on the baseline measures of the outcomes and on key demographic variables; this better informs the selection of covariates.

Briefly describe the analytic methods you used to assess the equivalence of the analytic sample(s), and report the findings from your assessment of baseline equivalence. For example, state whether in your low attrition RCT, high attrition RCT, or QED, baseline differences were (1) within the range that does not require statistical adjustment (0.05 standard deviations or less), (2) within the range that requires statistical adjustment (greater than 0.05 and up to 0.25 standard deviations), or (3) outside the range that establishes baseline equivalence (greater than 0.25 standard deviations).
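The three ranges above can be expressed as a small helper function; the input is assumed to be an effect size in standard deviation units.

```python
def baseline_difference_category(effect_size_sd):
    """Classify an absolute baseline difference (in standard deviation
    units) against the thresholds described above."""
    es = abs(effect_size_sd)
    if es <= 0.05:
        return "equivalent; no statistical adjustment required"
    if es <= 0.25:
        return "equivalent only with statistical adjustment"
    return "baseline equivalence not established"

# For example, a baseline difference of 0.12 standard deviations falls
# in the range that requires statistical adjustment in the impact model.
```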

Reminder: The analytic method you used to show baseline equivalence should account for the study design (for example, clustering, stratification, propensity score weighting). After describing your methods, you should present a table that shows data assessing equivalence for each analytic sample you used to answer your primary research questions. Then, discuss the key takeaways from your baseline equivalence analysis, including identifying which variables are not equivalent and how this information modified your analytic approach (for example, the addition of covariates in the impact model).

HMRE evaluations: Your impact analysis plan described the measures you would use to demonstrate baseline equivalence. The equivalence table(s) must include these measures (for example, baseline measures of the outcomes of interest and key demographic characteristics such as race or ethnicity and socioeconomic status). These baseline measures should also be consistent with the covariates included in the impact estimation models discussed in the upcoming instructions for Section IV.C. If the covariates in your impact analyses are not the same as the ones you used to assess baseline equivalence, please note why.

Important: If your study was originally an RCT but you had to use a statistical procedure to construct equivalent groups because of high attrition, the discussion in this section should focus only on the baseline equivalence of the analytic sample constructed with the statistical procedure (for example, propensity score matching or weighting). Details on the baseline equivalence, sample sizes, and attrition rates for the original RCT should be referenced here but described in Appendix C.

There may be more than one analytic sample in the evaluation study because you may be examining more than one outcome measure at different follow-up time points, and the level of item nonresponse and attrition for each measure and follow-up time period may vary. If that is the case in your study, we recommend doing the following for your final impact report:

  1. In the body of the report (that is, in Section IV), present a baseline table (Table IV.2 in the table shells document) for the sample of survey respondents at each follow-up. For example, if there are two follow-ups in the evaluation, one six months after and another 12 months after the end of the intervention, in the body of the report present one baseline table for the sample of respondents to the six-month survey and another table for the sample of respondents to the 12-month follow-up survey.

  2. In Appendix C, present a baseline table (Table C.1 in Appendix C of the table shells document) for the analytic sample of each outcome on which impacts were estimated to answer the primary research questions. For example, if your primary research questions are examining two outcomes, Outcome Measure 1 and Outcome Measure 2, measured at the six- and 12-month follow-ups after the end of intervention, present four baseline tables, each one establishing baseline equivalence for one of the following analytic samples: (1) participants completing Outcome Measure 1 at the six-month follow-up, (2) participants completing Outcome Measure 2 at the six-month follow-up, (3) participants completing Outcome Measure 1 at the 12-month follow-up, and (4) participants completing Outcome Measure 2 at the 12-month follow-up.

Documents to reference

HMRE evaluations: impact analysis plan (Section D.2)

RF evaluations: Section 14 of the evaluation plan

Tables or figures

Refer to the shells for Tables IV.2 and C.1 (Appendix C) in the table shells document; they can be used to assess and demonstrate baseline equivalence.



Box 4. Instructions for completing Tables IV.2 and C.1

  • The purpose of these tables is to demonstrate equivalence between study groups on key baseline characteristics and present useful summary statistics on these measures.

  • Copy and paste the relevant table shell (Table IV.2 or C.1 in the table shells documents) in the appropriate section of the final report (Section IV or Appendix C) so there is one table for each analytic sample in the evaluation.

  • In Table IV.2, replace the [follow-up timing] text in the header with the time point of the survey. For example, “Table IV.2. Summary statistics of key baseline measures and baseline equivalence across study groups, for individuals (or couples) completing the 12-month follow-up survey.” In Table C.1, replace the [Outcome Measure #] text in the header with the name of each outcome measure examined for the primary research questions. Replace the text [follow-up timing] with the time point of the survey. For example, “Table C.1. Summary statistics of key baseline measures and baseline equivalence across study groups, for individuals completing the parenting attitudes outcome measure at the 6-month follow-up.”

  • The template tables include examples of content for the rows in italics. Please edit accordingly and add additional rows as needed.

  • In Columns 2 and 3, “Intervention mean (standard deviation)” and “Comparison mean (standard deviation),” of Tables IV.2 and C.1, enter the mean value and standard deviation of each measure. If a measure is binary (such as male/female), report it as a percentage. For example, if 50 percent of the sample is female, enter a mean of 50 and denote that this measure is a percentage by adding a “(%)” next to the measure name (for example, “Female (%)”). If the measure is a scaled variable, please note the range next to the measure name (for example, “range: 1 to 5”).

  • Converting differences in means into effect sizes makes it easier to interpret the size of the difference across measures using a common threshold. It is best practice to assess baseline differences by computing the effect size for the difference in baseline means between the treatment and the comparison groups and reporting it in Column 4. The suggested effect size to calculate depends on the type of variable:

  • Hedges’ g: Use for continuous variables (for example, age)

  • Cox’s index: Use for dichotomous variables (for example, whether the participant was married or not at baseline)

  • The WWC Procedures and Standards Handbook, Version 5.0, provides additional guidance on effect sizes (https://ies.ed.gov/ncee/wwc/Docs/referenceresources/Final_WWC-HandbookVer5.0-0-508.pdf).
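A sketch of these two effect size calculations, following the formulas in the WWC handbook (a pooled standard deviation with a small-sample correction for Hedges’ g; the difference in log odds divided by 1.65 for Cox’s index). The handbook remains the authoritative reference for the exact procedures.

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Hedges' g for a continuous measure: the standardized mean
    difference with the WWC small-sample correction applied."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)
    return correction * (mean_t - mean_c) / pooled_sd

def cox_index(p_t, p_c):
    """Cox's index for a dichotomous measure; p_t and p_c are the
    group proportions expressed as shares between 0 and 1."""
    return (math.log(p_t / (1 - p_t)) - math.log(p_c / (1 - p_c))) / 1.65
```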

  • If, instead of using effect sizes, you conduct a statistical test of the difference in baseline means to assess baseline equivalence, do not include the “Effect size” column. Instead, in Column 5 (“Intervention versus comparison mean difference (p-value of difference)”), enter the difference in means and, in parentheses under the difference, the p-value for the test of the difference in means.

  • In the final row of each table (Tables IV.2 and C.1), enter the sample size in Columns 2 and 3. These numbers should align with the analytic sample presented in Section IV.A.



C. Estimation approach for primary analyses

Purpose

Describe the modeling approach used to answer the primary research questions and the approach to testing program effects.

Instructions on modeling approach and specification

This description of the analytic approach should include the following:

  1. Modeling approach. Describe the type of model used to estimate intervention impacts for each research question (such as linear regression or logistic regression).

  2. Model specification. Describe the variables included in the model and the parameters of interest. For example, identify the parameter representing the impact estimate. List all the covariates included in the analyses in a table (such as Table IV.3 in the table shells document). If applicable, include information on clustering, stratification, propensity score weighting, and other statistical approaches to account for features of the study design. Do not specify the model equations for estimating impacts in this section. Instead, include them in Appendix E along with any other details about the model specification not described in the body of the text.

Details about data cleaning can be described in Appendix D.

Please note: If the study is an RCT with high attrition at the unit of assignment or a QED, ACF guidelines dictate that evaluations conduct analyses based only on individuals with complete data (that is, a complete case analysis with no imputation of missing data) as the primary analysis. In these situations, you may include analyses using imputed data as additional analyses in Appendix F.

Instructions on testing program effects

Report the impact estimates resulting from your analyses in this section. ACF requires that evaluations report estimates of program impacts for each follow-up time point at which outcomes were measured; this supports ease of interpretation, standardization, and comparison across estimates. In this section, also describe your approach to testing the estimated program effects:

  1. Testing statistical significance of differences in means (impacts). If your primary research questions focus on examining whether the estimated effect of the intervention is statistically significantly greater (or smaller) than the effect of the comparison condition, you will be testing whether the differences in outcome means between the intervention and comparison groups are statistically significantly different from zero. If this is the case, ACF requires a two-tailed test with .05 significance level (for example, “Findings are considered statistically significant based on p < .05”). ACF strongly recommends an estimation method, such as a regression model, to assess the impact of the intervention, to adjust for baseline differences and to include covariates that may improve the precision of the impact estimate.

  2. Testing equivalent effects. If your primary research questions focus on examining whether the impact of the intervention is equal to that of the comparison condition (for example, whether the same program delivered to the intervention and comparison groups using two different delivery modalities, such as in person versus virtual, is equally effective), then you will be testing whether the difference in effects between the two groups is zero (that is, that the two groups produce equal impacts). In this case, showing that the differences in means between the intervention and comparison groups are not statistically significant does not yield enough information to conclude that the effect of the intervention is equivalent to the effect of the comparison condition. Instead, you must test whether the difference in outcome means between the intervention and comparison conditions is (1) large enough to be of practical importance (that is, greater than the smallest effect size of interest) or (2) small enough not to be of practical importance (that is, less than the smallest effect size of interest). Therefore, to test for equivalent effects, follow these steps:

    • Step 1: Determine the smallest effect size of interest by reviewing studies similar to yours.

    • Step 2: Specify the equivalence interval, which defines whether a difference in means is large (or small) enough to be (or not be) of practical importance. This interval has a lower bound and an upper bound.

    • Step 3: Conduct the equivalent effects test by doing one of the following:

      • Conduct two one-sided significance tests, one for testing the null hypothesis of whether the difference in means between conditions is smaller than or equal to the lower bound of the equivalence interval, and another for testing the null hypothesis of whether the difference in means is greater than or equal to the upper bound of the equivalence interval (this is known as the TOST procedure). To conclude there are equivalent effects within the specified lower and upper bounds, the two one-sided tests need to reject the null hypothesis. If the tests cannot reject the two null hypotheses, then equivalent effects cannot be established.

      • Estimate the 90 percent confidence interval of the difference in means between conditions and compare it to the equivalence interval.
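A minimal sketch of the TOST procedure in Step 3, using a normal approximation to the test statistics. The difference, standard error, and equivalence interval below are hypothetical; an actual test should use the estimate and standard error from your impact model.

```python
from statistics import NormalDist

def tost(diff, se, lower, upper, alpha=0.05):
    """Two one-sided tests (TOST) for equivalent effects, using a
    normal approximation. `diff` is the estimated difference in means,
    `se` its standard error, and (lower, upper) the equivalence interval."""
    z = NormalDist()
    # Test 1: reject H0 that the difference is at or below the lower bound
    p_lower = 1 - z.cdf((diff - lower) / se)
    # Test 2: reject H0 that the difference is at or above the upper bound
    p_upper = z.cdf((diff - upper) / se)
    return p_lower, p_upper, (p_lower < alpha and p_upper < alpha)

# Hypothetical example: estimated difference of 0.01 SD with a standard
# error of 0.03 SD, against an equivalence interval of (-0.10, 0.10) SD.
p_lo, p_hi, equivalent = tost(0.01, 0.03, -0.10, 0.10)  # equivalent is True
```

Rejecting both one-sided tests at the .05 level corresponds to the second option above: the 90 percent confidence interval for the difference lies entirely inside the equivalence interval.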

For additional details on how to conduct tests of equivalent effects, HMRE evaluations can refer to Appendix B of the instructions for completing the impact analysis plan.

Documents to reference

HMRE evaluations: impact analysis plan: Section D.3 (modeling approach and model specification), Table 6 (list of covariates), and Appendix B of analysis plan instructions (equivalent effects testing)

RF evaluations: Section 14 of the evaluation plan

Tables or figures

Refer to Table IV.3 in the table shells document, which you can use to list the covariates included in the impact analyses (and HMRE evaluations can also use Table 6 from the impact analysis plan).

D. Implementation analyses

Purpose

Describe how you measured services each study group received.

Instructions

Describe the methods you used to test the implementation research questions. Include information on your approaches to examining and summarizing qualitative data from interviews, focus groups, and observations. For example, describe how you developed qualitative data sources, how you did the coding, and how you established reliability.

Previous documents to reference

HMRE evaluations: implementation analysis plan (Section C)

RF evaluations: Section 14 of the evaluation plan

Tables or figures

None

E. Sensitivity analyses

Purpose

Briefly describe any analyses you conducted to test the robustness of the main impact results to alternative assumptions that reflect important research decisions, and document that the observed main impact results are not driven by researcher decisions about how data were cleaned and analyzed.

Instructions

Describe the methods you used to test the robustness of the primary impact results or the appropriateness of the analytic model discussed above. For example, sensitivity analyses might adjust for alternative sets of covariates. Briefly summarize the results if the sensitivity analysis findings are similar to the findings from the primary analytic approach.

Documents to reference

HMRE evaluations: impact analysis plan (Section D.4)

RF evaluations: Section 14 of the evaluation plan

Tables or figures

None

F. Secondary analyses

Purpose

Briefly describe any analyses you conducted to address the study’s secondary research questions, including the motivation for examining those questions, the measures you explored, and the analytic approach you used. Make sure each finding aligns with a given secondary research question.

Instructions

Describe the analytic approach you used to address all secondary research questions, to the extent that it differs from the analytic approach proposed for primary research questions. For example, you might be interested in examining additional outcomes and time points the intervention might influence that are not addressed by the primary research questions. Please follow the same guidance for describing analyses to address secondary research questions that you used to address primary research questions (Section IV.C in these instructions).

Documents to reference

HMRE evaluations: impact analysis plan (Section D.5).

RF evaluations: Section 14 of the evaluation plan

Tables or figures

None

  1. Findings

This chapter should describe the results of the impact and implementation analyses. The sections are organized as follows: impact analysis to address primary research questions, implementation analysis, sensitivity analyses, and analyses to address any secondary research questions. Begin each section by summarizing the key finding(s) in the Key findings box (see the report template).

Findings should be organized by research question. If two research questions are closely related, you may consider addressing them together; grant recipients may also consider other ways of organizing findings, as makes sense for individual evaluations. If it makes more sense to structure this section differently for your study, please discuss this change with your ETAP.

A. Results of the primary impact evaluation

Purpose

Present the results from the impact analyses to address the primary research questions.

Instructions

Begin by briefly summarizing the key findings in the Key findings box. This should consist of one bullet point per research question or a short paragraph.

Present the findings from the impact analyses in tables, then discuss these findings in the text. Make sure each finding aligns with a given primary research question. Briefly elaborate on the findings and patterns of findings in this section. Focus on factual description of findings, and save the broader discussion for the conclusion. For example, in the conclusion section (not here) discuss how strong adherence to the program model during implementation may partly explain positive findings of the intervention’s effectiveness.

Please present the findings in a metric (for example, percentage point difference) that is easy for readers to interpret. For example, if you used logistic regression, do not present results as odds ratios in the body of the report; rather, transform them into something that will make sense to a lay reader, such as predicted probability or an adjusted mean.
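As an illustration of this transformation, consider a logistic model with a single treatment indicator. The coefficient values below are hypothetical, and real models would include covariates and typically average predicted probabilities over the sample.

```python
import math

def predicted_probability(intercept, treatment_coef, treated):
    """Predicted probability from a logistic model with one binary
    treatment indicator (illustrative only)."""
    logit = intercept + treatment_coef * treated
    return 1 / (1 + math.exp(-logit))

# Hypothetical estimates: intercept -0.5, treatment log odds 0.4
p_comparison = predicted_probability(-0.5, 0.4, 0)    # about 0.378
p_intervention = predicted_probability(-0.5, 0.4, 1)  # about 0.475
# Report the difference of roughly 9.7 percentage points rather than
# the odds ratio exp(0.4), which is about 1.49.
```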

If you conducted tests of equivalent effects, describe (1) how you determined the smallest effect size of interest and cite the studies you reviewed to make this determination, (2) how you specified the equivalence interval, (3) whether you conducted two one-sided significance tests or estimated the 90 percent confidence interval to test for equivalent effects, and (4) how you concluded whether equivalent effects were established based on the findings from your tests.

Documents to reference

HMRE evaluations: impact analysis plan (Section D.3 and Appendix B of analysis plan instructions)

RF evaluations: Section 14 of the evaluation plan

Tables or figures

Tables V.1a (estimated effects of the intervention relative to a comparison condition) and V.1b (equivalent effects between intervention and a comparison condition), which you can find in the table shells document.



Box 5. Instructions for completing Table V.1a

  • The purpose of this table is to summarize the intervention’s estimated effects and compare them to the effects for those assigned to the comparison condition.

  • Edit and add rows as needed to represent all primary outcomes you are going to report estimated effects for.

  • In Columns 2 and 4, enter the model-based (adjusted) outcome mean, and in Columns 3 and 5 enter the outcome’s standard deviation if the outcome is continuous. (If the outcome measure is binary, enter n.a. for not applicable.) The model-based means should be adjusted for baseline covariates, and the standard deviation can be unadjusted or adjusted for covariates. If the outcome measure is binary, report the means as a percentage. It is best practice to report means and standard deviations of the outcome so that users of your final report can calculate effect sizes based on their preferred method or formula.

  • In Column 6, enter the difference in means between groups, and in Column 7 enter the p-value from a test of the statistical significance of this difference. Indicate whether the difference is statistically significant.

  • In the last row of the table, report the sample size for the intervention (Column 2) and the comparison (Column 4) groups. These sample sizes should align with the analytic sample size(s) discussed in Section IV.A.





Box 6. Instructions for completing Tables V.1b and V.1c

  • The purpose of these tables is to summarize the findings on whether the effects of the intervention are equivalent to the effects of a comparison condition.

  • Use Table V.1b if you tested for equivalent effects using the TOST procedure. Use Table V.1c if you tested for equivalent effects with the 90 percent confidence interval of the difference in means. Use only one of these table shells, not both.

  • Edit and add rows as needed to represent all primary outcomes that you are going to estimate effects for. The italicized text in Tables V.1b and V.1c in the table shells document gives an example of how to populate the tables.

  • In Columns 2 and 4 of either table shell, enter the model-based (adjusted) outcome mean, and in Columns 3 and 5, enter the outcome’s standard deviation if the outcome is continuous. (If the outcome measure is binary, enter n.a. for not applicable.) In the Notes section of the table, indicate whether the intervention and comparison means are unadjusted or covariate-adjusted based on the specification of the final impact model. If the means are covariate-adjusted, then these should be used for the equivalent effects tests you are reporting on in Table V.1b or V.1c. If the outcome measure is binary, report the means as a percentage.

  • In Column 6 of either table shell, report the smallest effect size of interest that you determined by reviewing studies similar to yours. This effect should be in standard deviation units.

  • In Column 7 of either table, give the lower and upper bounds of the equivalence interval (the interval that defines whether a difference in means is large or small enough to be, or not be, of practical importance). If this interval is not expressed in the same units as the means of the outcome measure, indicate the units used. For example, if the interval is expressed in standard deviation units, you could indicate it with an “SD” (for “standard deviation”), such as (-0.10 SD, 0.10 SD). Indicate whether the differences are statistically significant at the 0.05 level.

  • If you used the TOST procedure to test for equivalent effects:

  • In Column 8 of Table V.1b, report the p-value of the test of whether the difference in means between the intervention and comparison conditions is lower than or equal to the lower bound of the equivalence interval. In Column 9 of Table V.1b, report the p-value of the test of whether the difference in means between the intervention and comparison conditions is higher than or equal to the upper bound of the equivalence interval.

  • In Column 10 of Table V.1b, enter a “Yes” or “No” depending on whether the results from applying the TOST procedure support the conclusion that the effects are equivalent. If the results of the two tests are statistically significant at the 0.05 level, you can conclude the effects are equivalent and put “Yes” in Column 10. If the results of one or the two tests are not statistically significant, you cannot conclude the effects are equivalent and should enter “No” in Column 10.

  • If you computed the 90 percent confidence interval for the difference in means between the intervention and comparison conditions to test for equivalent effects:

  • In Column 8 of Table V.1c, enter the 90 percent confidence interval. If this interval is not expressed in the same units as the means of the outcome measure, say so on the table.

  • In Column 9 of Table V.1c, enter a “Yes” or “No” depending on whether the confidence interval supports the conclusion that the effects are equivalent. If the 90 percent confidence interval falls entirely within the equivalence interval, you can conclude the effects are equivalent and put “Yes” in Column 9. If any part of the confidence interval falls outside the equivalence interval, you cannot conclude the effects are equivalent and should put “No” in Column 9.

  • In the lower section of either table, report the analytic sample sizes, by intervention (Column 2) and comparison (Column 4) groups, for the analysis of each of the outcome measures in the primary questions. These sample sizes should align with the analytic sample size(s) discussed in Section IV.A.



B. Results of the implementation evaluation

Purpose

Describe the intervention as actually implemented (actual services received) and provide context for the impact findings.

Instructions

Begin the section by briefly summarizing the key findings in the Key findings box.

The findings should be written concisely and grounded in numeric evidence (for example, “The intervention was implemented with fidelity and achieved its attendance goals. Ninety-five percent of all intervention sessions were delivered, and 82 percent of the sample attended at least 75 percent of intervention sessions”).

This section should describe what the participants in the intervention and the comparison groups actually received. Also discuss whether the comparison group received any services that were similar to the services offered to the intervention group.

Important: Discuss the key limitations of the implementation data.

We encourage the use of subheadings in this section to discuss the findings related to fidelity, dosage, quality, engagement, context, and experiences of the intervention and comparison groups, to the extent they are available for your evaluation. Use this section to tell the story of the implementation that provides both context for the impacts and the key lessons learned from implementation.

Previous documents to reference

HMRE evaluations: None

RF evaluations: None

Tables or figures

A table may be helpful if you are describing or quantifying many data elements about the implementation.



C. Results of the sensitivity analyses

Purpose

Briefly describe any analyses conducted to test the robustness of the main impact results to alternative assumptions that reflect important research decisions. Document whether the observed main impact results are robust to researcher decisions about how to clean and analyze the data.

Instructions

This section should discuss in more detail any sensitivity analyses whose results differ from those of the primary analytic approach and thus challenge key research decisions. For those analyses, briefly discuss how the impact findings compare with the primary analytic approach, in terms of both magnitude and statistical significance. If the results from the sensitivity analyses differ substantially from the main results presented in the report, please provide commentary about which set of results is more appropriate.

Please present the findings from the sensitivity analyses in a table (refer to the table shells document, Table V.2) and briefly discuss the findings in the text.

You can include equations for estimating the sensitivity analyses in Appendix F.

Previous documents to reference

HMRE evaluations: None

RF evaluations: None

Tables or figures

Please complete Table V.2 in the table shells document.


Box 7. Instructions for completing Table V.2

  • The purpose of this table is to summarize the sensitivity of estimated impacts to methodological decisions.

  • Only present the impact estimates (the difference in means between the two study groups).

  • Denote statistical significance next to the estimated impact using asterisk(s).

  • Add rows as needed for additional outcomes or comparisons. Add columns as needed for the sensitivity analyses.

  • Replace the column headings in italics with descriptive names for the sensitivity analyses you conducted, such as “No covariate adjustment.” These headings should match the headings in Chapter IV where you described each approach.

D. Results of the secondary analyses

Purpose

Describe the analyses and findings for secondary research questions, which may examine other outcomes, other time periods, subgroup analyses, mediation analyses, or moderation analyses.

Instructions

Begin the section by briefly summarizing the key findings in the Key findings box.

Then present the findings. Findings from secondary analyses are not considered impact findings but can complement impact findings nicely and provide context for the study’s conclusions. Please present the findings in a metric that is easy for readers to interpret (for example, percentage point difference or average scale value).

Briefly describe the analytic method(s) used to answer the research questions. If the methods are identical to those used to answer the primary research questions, state that. You can include more detailed information in Appendix F and refer readers to the appendix. For example, if you conducted analysis on imputed data (optional), you may want to describe how you did the imputation in Appendix F. When describing your analytic approach in detail in Appendix F, follow the guidance for Sections IV.A and IV.C.

Previous documents to reference

HMRE evaluations: None

RF evaluations: None

Tables or figures

Please refer to Table V.3 in the table shells document for details on how to present impact findings that address the secondary research questions. The instructions for completing Table V.3 are the same as those for Table V.1a (Box 5).

VI. Summary and conclusions

This chapter summarizes the impact and implementation findings and describes lessons learned and limitations.

A. Implications

Purpose

Describe the implications of the study’s findings.

Instructions

Without repeating the results section, briefly summarize key findings. Describe what conclusions can be drawn from these findings.

Previous documents to reference

All grant recipients: Grant application and evaluation plan (for information about what the evaluation study intended to uncover)


Tables or figures

None

B. Equity considerations

Purpose

Describe equity considerations for the intervention.

Instructions

Discuss equity considerations for the intervention. Include a discussion of any exploratory subgroup analyses.

Previous documents to reference

None

Tables or figures

None

C. Limitations and future directions

Purpose

Describe the limitations of the study and discuss next steps for the field.

Instructions

Discuss the study’s limitations. For example, you may want to discuss the generalizability of the findings by explaining internal and external validity. If the study was a high-attrition RCT or had substantial missing data, be sure to note that here. Describe how the study’s conclusions will influence future interventions and research.

Previous documents to reference

None

Tables or figures

None

D. Other lessons learned

Purpose

Describe lessons learned from the study.

Instructions

Discuss the important lessons learned that are consistent with study findings or that could help others replicate the intervention or serve the same population.

Previous documents to reference

None

Tables or figures

None



VII. References

Purpose

Provide a full reference for any work cited in the report.

Instructions

Please use the American Psychological Association (APA) style guide for citing works in the report.

Previous documents to reference

None

Tables or figures

None

VIII. Appendices

Based on our guidance for the report sections, the report may include the following appendices (note: it may not be necessary to include appendices for all of these items):

  A. Logic model. Please attach a copy of the intervention’s logic model.

  B. Data and study sample. First, provide details about the data sources and data collection processes for the impact study (see the shell for Table B.1 in the table shells document). Second, provide details on the key features of the data collection for the implementation study (see the shell for Table B.2 in the table shells document). The purpose of this table is to enable the reader to understand the data collected for the implementation analysis. Italicized text in Tables B.1 and B.2 gives examples of how you might fill in the tables; use text suited to your analysis. Third, include the final CONSORT diagram, which is referenced in Section IV.A of your report.

  C. Attrition and baseline equivalence. For studies that were originally an RCT but had to construct equivalent groups using a statistical approach (so the design effectively became a QED), describe the original RCT design, attrition rates, and baseline equivalence (see Appendix C in the table shells document). For QEDs and RCTs that had to construct equivalent groups, please describe the statistical approach you used to construct equivalent groups in this appendix.

  D. Data preparation. Describe the methods used to clean and prepare the data, including how you handled missing and inconsistent data. HMRE evaluations can refer to their impact analysis plan, Section D.1, and RF evaluations can refer to Section 14 of their evaluation plan. Add any details as appropriate.

  E. Impact estimation. Include the model specifications (equations) used to assess baseline equivalence and intervention impacts.

  F. Additional analyses. Include in this appendix any additional analyses that were not characterized as sensitivity or secondary analyses. For example, these may include the analyses you conducted to address missing data, including main and alternative approaches. You may also include the equations for estimating the sensitivity analyses from Sections IV.E and V.C of the report and additional details on how you conducted the analyses addressing the secondary research questions (such as the equations) for transparency.
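For illustration, a model specification for the impact estimation appendix might take a form like the following. The symbols here are examples only, not a specification prescribed by this template; your own model should reflect your analysis plan.

```latex
% Illustrative impact regression (symbols are examples, not prescribed):
%   Y_i : outcome for sample member i at follow-up
%   T_i : indicator equal to 1 for the intervention group, 0 for comparison
%   X_i : vector of baseline covariates for sample member i
\begin{equation}
  Y_i = \beta_0 + \beta_1 T_i + X_i'\gamma + \varepsilon_i
\end{equation}
% Here \beta_1 is the estimated impact of the intervention on the outcome,
% and \varepsilon_i is the residual error term.
```

A parallel equation, with the follow-up outcome replaced by a baseline measure, can serve as the specification for assessing baseline equivalence.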



1 U.S. Department of Health and Human Services. “Digital Accessibility at HHS.” n.d. https://www.hhs.gov/web/section-508/index.html

2 U.S. Department of Health and Human Services. “Section 508 Tips for Document Creation.” 2020. https://www.acf.hhs.gov/sites/default/files/documents/cb/508_tip_sheet.pdf
