Tribal PREP Descriptive Report: Guidance
A key goal of the Family and Youth Services Bureau (FYSB) Tribal Personal Responsibility Education Program (Tribal PREP) is to ensure that rigorous evidence on program and implementation outcomes contributes to the knowledge base on adolescent pregnancy prevention for tribal youth.
This document provides guidance on structuring a comprehensive and accessible final report, including suggestions for what information to include and where to obtain it. The goal of the final report is to document your evaluation and share the findings with a public audience. A report template accompanies this document for your use in preparing the report. To save you time, many of the template's report sections draw directly from the material prepared for the Evaluation Abstract and Evaluation Analysis Plan; you should use text from those sources to simplify report writing.
The annotated outline below contains the following guidance for each section of the report: (1) purpose describes the purpose of the section and what you should discuss, (2) instructions and reminders contains items to keep in mind when writing the section, (3) potential sources lists existing documents that may be sources for the section, and (4) non-text elements indicates whether elements such as tables or figures should be used. The first part of the report should provide background on the full evaluation, followed by a discussion of your implementation evaluation and then the outcomes evaluation. We provide a recommended analytic approach for the outcomes evaluation in Section VI.G.
Your report should be written in the accompanying template, which is set up to make the production and 508-compliance process easier as well as to facilitate review by FYSB and the RETA team.1 An attached Word file (Tribal PREP_Descriptive Report Template.docx) provides an outline of the report with places for you to fill in each section. A separate attached table shell file (Tribal PREP_Descriptive Report Tables.docx) provides some required and optional table shells for you to use and paste into the outline file as you write the report. Using these shells will allow FYSB to more quickly make the reports 508-compliant so that they can be posted on the FYSB website. You can also find these files on SharePoint in the Templates folder.
Here are some additional suggestions for your report:
The final report should be approximately 30–40 pages, double-spaced (15–20 pages single-spaced), excluding tables, figures, references, and appendices.
The report should be written for all audiences, not only for other researchers. Write as if the audience has not been involved in the grant and knows nothing about the program or the evaluation. The report should provide enough detail for readers to understand the program and its evaluation and should be free of project- or program-specific jargon and abbreviations.
Reach out to your RETA with questions about this report guidance or your approach as you begin to work on the report. Resolving questions early in the process will simplify the review process at the end of the grant period.
Please submit a report that you believe is ready for external review. It should not read as a draft. Ideally, it will have been edited and read by multiple people before submission to minimize the number of editorial comments your federal project officer (FPO) and RETA will need to provide. Their goal is to focus on content and technical details rather than on formatting and editorial changes.
Please email your final report as a Word file (not PDF) to your FPO and copy your RETA liaison by [due date]. For consistency, please use this common naming convention when submitting your report: [Grantee name] Descriptive Report_[report draft date]. Your FPO and RETA liaison will review the final report, provide comments and suggested edits, and return it to you for revisions. Your final report must be approved by your FPO by the end of your grant period.
Cover page
The report cover page should include the title of the report and all authors.
Disclose any conflicts of interest—financial or otherwise—on the cover page. For examples of how to identify a conflict of interest, please see the International Committee of Medical Journal Editors guidance. For instance, if someone on the evaluation team has received a grant from an organization included in the evaluation, you could include language such as “[insert name] reports receipt of a grant from [organization name] during the evaluation period.” Please note, if the evaluation team is not completely independent from the program team (that is, if they are not from a different organization with completely separate leadership and oversight), this is a conflict of interest that must be documented.
Finally, the cover page should include this attribution to FYSB:
This publication was prepared under Grant Number [Insert Grant Number] from the Family and Youth Services Bureau within the Administration for Children and Families (ACF), U.S. Department of Health & Human Services (HHS). The views expressed in this report are those of the authors and do not necessarily represent the policies of HHS, ACF, or the Family and Youth Services Bureau.
Evaluation abstract
Purpose |
Provide a one- to two-page executive summary of the final report. |
Instructions and reminders |
Please complete the abstract in the outline template. Most of the fields can be copied from your most recent approved abstract, although you should update anything that has changed since its approval. There are some additional fields to include, specifically the Methods, Implementation findings, Outcomes findings, and Conclusions sections. You can find your most recent approved evaluation abstract on SharePoint, in your Abstract folder. |
Potential sources |
Evaluation abstract (first submitted in fall 2018, potentially resubmitted with the Evaluation Analysis Plan). |
Non-text elements |
None |
I. Introduction
A. Introduction and study overview
Purpose |
Orient the reader to the study. |
Instructions and reminders |
In this section, explain (1) the need for teen pregnancy prevention for the particular population (defined by locality, tribe, age, etc.) studied; (2) the rationale for selecting your program and how the evaluation fits within the FYSB Tribal PREP grant program; (3) previous research describing the effects of the program, including, if applicable, how prior findings were assessed by the HHS Teen Pregnancy Prevention Evidence Review (Evidence Review); (4) whether the program was implemented with other tribal populations, to the extent known; and (5) that this report describes the implementation and outcomes of the Tribal PREP-funded program. |
Potential sources |
Tribal PREP Funding Opportunity Announcement; Grant application; FYSB website; Youth.gov |
Non-text elements |
None |
B. Study objectives
Purpose |
Summarize the main goals of the outcomes and implementation evaluations. |
Instructions and reminders |
This section should briefly summarize the objectives for both the outcomes and implementation evaluations. Describe the goals of the overall study and briefly describe how each evaluation will contribute to those goals. Note: This section should not list each research question to be investigated. More detailed descriptions of the implementation and outcomes research questions will be provided in Sections V.A and VI.A; you can refer the reader to those sections for more information. This section is designed to be only a broad overview of the goals of the evaluation. |
Potential sources |
Evaluation design plan; Evaluation analysis plan |
Non-text elements |
None |
II. Tribal communities
Provide a one- or two-paragraph overview, highlighting that this section describes the tribal communities engaged in the evaluation and how the project team engaged those communities before and during the evaluation.
A. Tribes participating in evaluation
Purpose |
Describe the tribal communities that participated in the evaluation. |
Instructions and reminders |
This section should describe which tribal communities participated in the evaluation, including the size of the tribe(s) and an estimate of the number of eligible youth for the evaluation. Include a brief discussion of the cultural and linguistic traditions of the tribe(s), as well as details about specific values related to the topics of adolescent sexual beliefs and behaviors. This section should also describe the geographic region served during this evaluation and give a sense of the dispersion of the communities over that region. |
Potential sources |
Evaluation design plan |
Non-text elements |
It may be useful to include a map showing the geographic distribution of the tribal communities in an appendix. |
B. Needs assessment activities
Purpose |
Describe the activities conducted during the needs assessment. |
Instructions and reminders |
This section should describe the needs assessment phase of the grant. Include a description of the purpose of the needs assessment, the activities that were conducted during this phase, and how the activities influenced the evaluation. Describe the work done to assess the need for adolescent pregnancy prevention programs in the community. Discuss how the team determined what other sexual health programming existed in the community and the size of the potential target population. Include a discussion of how the results of these assessments influenced the evaluation (for example, choosing a model that differed from existing programs in the community, or revising the target population).
Describe how the project team determined which program or programs to deliver, including any assessment of the programs’ evidence base, and how decisions were made regarding the types of cultural adaptations that would be needed. |
Potential sources |
Grant application |
Non-text elements |
None |
C. Engaging the tribal community in the evaluation
Purpose |
Summarize the role tribal communities played in study design, implementation, and reporting. |
Instructions and reminders |
This section should describe how tribal leadership, including leadership councils, elders, tribal institutional review boards (IRBs), and the broader communities, played a role in the evaluation design, including planning and approving the implementation and outcomes studies. Indicate whether ethical concerns in the community influenced the evaluation in any way. Include which IRBs reviewed study materials and how the design or study protocols changed because of their input.
Describe any role tribal communities had in ongoing study activities. For example, describe whether the project team provided regular updates on the project to community stakeholders and how stakeholder input influenced the study.
|
Potential sources |
Evaluation design plan |
Non-text elements |
None |
III. Programming
Provide a one- or two-paragraph introduction to this section, highlighting that it provides an overview of the intended program, including any planned adaptations, and noting that Section VII.A describes what youth actually received. (Note: this section focuses on what you intended for youth to receive, not necessarily what happened in practice.)
A. Description of program
Purpose |
Summarize the program being tested. |
Instructions and reminders |
This section should describe the program components. Discuss (1) program activities; (2) program content, including any specific curriculum used; (3) expectations for implementation, including the implementation location or setting (for example, schools or clinics), duration and dosage, and staffing; and (4) the theory of change or logic model for the program, including the expected outcomes. Include a discussion of the Adulthood Preparation Subjects (APS) covered in your program, how they were covered, and why they were selected as a focus of your program. A graphical representation of your logic model or theory of change is required as Appendix A of the report. |
Potential sources |
Evaluation abstract; Evaluation design plan; Evaluation analysis plan; Grant application (for logic model, conceptual model, or theory of change models) |
Non-text elements |
It is often useful for the reader if you present information about the program description in a table (for example, with columns for target population, program activities, program content, planned mode of delivery, and so on). Your updated logic model must be included in Appendix A. Be sure the logic model includes the APS content and outcomes. |
B. Description of planned program adaptations
Purpose |
Summarize any planned adaptations made to the curriculum. |
Instructions and reminders |
This section should describe the adaptations (aside from the addition of APS content, which was covered above) made to the curriculum being delivered to youth as part of your broader program. In particular, describe how the curriculum was modified to be responsive to the tribal communities being served. Describe how the adaptations were designed to address the specific needs of the communities, including any specific cultural, linguistic, or environmental concerns. This section should focus only on planned adaptations. The implementation results section (Section VII.A) should discuss any unplanned adaptations. |
Potential sources |
Evaluation abstract; Evaluation design plan |
Non-text elements |
None |
IV. Study sample formation
Purpose |
Describe the study setting, context, and eligibility criteria for being part of the study. |
Instructions and reminders |
This section should include a description of the setting and context for the overall project, including both the implementation and outcomes studies. It should include a description of the study target population and how the sample was identified and recruited for the study (both youth and sites, as appropriate). Include any eligibility criteria or required characteristics for inclusion (for example, age, AI/AN status, attendance at a particular school, or geographic area). Include information on the success of the recruitment process—that is, the number of sites (for example, schools) contacted and the number recruited. Also include the number of youth recruited.
This discussion should focus on the enrolled sample. The remainder of the report template is divided into two components, the implementation study and the outcomes study. In each component, you will describe the sample or samples relevant to the analyses done for that study.
|
Potential sources |
Evaluation analysis plan; Evaluation abstract; Evaluation design plan; Final CONSORT diagram |
Non-text elements |
None |
V. Implementation evaluation design
Provide a one- or two-paragraph overview, highlighting that this section describes the implementation evaluation: the sample used to answer the implementation research questions, the data collected, the measures constructed, and how they were analyzed.
A. Research questions
Purpose |
Describe the research questions being investigated for the implementation study. |
Instructions and reminders |
List the research questions guiding the implementation evaluation, for each aspect of implementation being examined (fidelity to the curriculum or program model, dosage of the program, quality of implementation, engagement of participants, and experiences of the comparison group and other contextual factors). Your research questions should be drawn from previously approved documents (including your evaluation design plan or impact and program implementation evaluation analysis plan), although you may need to make some changes to those if anything has changed.
Please be sure to include any research questions specific to the APS topics your program covered, particularly those topics that may not have been part of the core curriculum but were instead administered as a supplement outside the program. Also be sure to include any questions pertaining to cultural adaptations of the program that you plan to answer in your analyses. |
Potential sources |
Evaluation design plan; Evaluation analysis plan |
Non-text elements |
A table may be helpful for organizing the presentation of this section. If you include a table, please mention it in the main body of the report and include it in the appendix. See Appendix B and Table B.1. |
B. Study sample and data collection
Purpose |
Describe how the data used to answer the implementation research questions were collected, and from whom. |
Instructions and reminders |
Describe the sample or samples used to answer the implementation research questions. For each sample, include the number of participants who were initially enrolled and how many contributed data to the analyses. For example, if your implementation study includes data collected from youth via focus groups and data from teachers via a survey, you would describe how many participants consented to each data collection activity and how many ultimately completed the activity. Briefly describe how this sample differs from your outcomes sample, if applicable (for instance, if your implementation study was conducted in only the first year of your outcomes study or in a subset of schools). Then, describe the data used to answer each question, including the mode and frequency of data collection. |
Potential sources |
Evaluation design plan; Evaluation analysis plan |
Non-text elements |
A table may be helpful for organizing the presentation of this section. If you include a table, please mention it in the main body of the report and include it in the appendix. See Appendix B and Table B.1. |
C. Measures and methods
Purpose |
Describe the measures that were constructed to answer your implementation research questions and how you analyzed the data systematically. |
Instructions and reminders |
Describe the analyses that were conducted for each research question guiding the implementation evaluation. What measures were constructed for the analyses from the data collected? How were implementation elements quantified? What methods were used to analyze the data? For example, please describe any preexisting targets against which you assessed your implementation measures, or how you coded and analyzed data collected from interviews and focus groups. |
Potential sources |
Evaluation design plan; Evaluation analysis plan; Structural Elements of an Intervention brief; Qualitative Analysis Tip Sheet (forthcoming) |
Non-text elements |
A table may be helpful for organizing the presentation of some of the material in this section. The table can be combined with the material presented in Section V.B, as appropriate. If you include a table, please mention it in the main body of the report and include it in the appendix. See Appendix B and Table B.1. |
VI. Outcomes evaluation design
Provide a one- or two-paragraph overview of this section, highlighting that it will present the research questions, evaluation design, sample, data collection, and analytic methods for the outcomes evaluation.
A. Research question(s)
Purpose |
Articulate the key research questions about the program on outcomes of youth. |
Instructions and reminders |
This section should present the research questions. Your research questions should be drawn from previously approved documents (including your evaluation design plan or impact and program implementation evaluation analysis plan), although you may need to make some changes to those if anything has changed. Reminder: the research question(s) should focus on the outcomes of the program at a specific time point. The outcome(s) and time point(s) should be clearly connected to the program's theory of change. For instance, if your theory of change posits that knowledge should change immediately following the end of the program, you should examine knowledge outcomes at your immediate post-program survey. |
Potential sources |
Evaluation design plan; Evaluation analysis plan |
Non-text elements |
None |
B. Research design
Purpose |
Provide an overview of the research design used to assess program outcomes. |
Instructions and reminders |
This section should clearly identify the design used to assess program outcomes (for instance, examining within-individual change over time), including whether you were able to link data for individual respondents between baseline and post-test surveys. Please indicate any limitations of the design or how it was implemented. For example, if you could not link baseline and outcomes data for an individual respondent, discuss how this affects the interpretation of the outcomes. Consider addressing relevant issues that you have discussed with your FPO and RETA over the course of the study. |
Potential sources |
Evaluation analysis plan; Evaluation abstract; Evaluation design plan; Pre-post Tip Sheet (forthcoming) |
Non-text elements |
None |
C. Data collection
Purpose |
Indicate how data on outcomes of interest (as well as key explanatory variables, including baseline assessments and demographics) were obtained from sample members. |
Instructions and reminders |
Describe each data collection conducted, including timing, mode of administration, and overall process, as well as information on incentives. Please do not refer to “performance measures,” as your readers will be unfamiliar with that term; rather, refer to the survey you used for your evaluation. |
Potential sources |
Evaluation design plan; Evaluation analysis plan |
Non-text elements |
None |
D. Measures
Purpose |
Describe how the outcomes of interest in the research questions were operationalized using survey data or other data elements. |
Instructions and reminders |
Define the outcomes being examined in the research questions. Briefly explain how each outcome measure was operationalized and constructed. If a measure was constructed from multiple items, document the source items and explain how they were combined to create an outcome that was analyzed. If a detailed description of measure construction is necessary, please include that information in an appendix. |
Potential sources |
Evaluation analysis plan; Evaluation abstract |
Non-text elements |
Please present this information in a table. Refer to the table shells document, which includes a 508-compliant (landscape) table shell and instructions for completing Table VI.1. (Note: in the template, Table VI.1 includes examples for you.) A purely illustrative sketch of measure construction follows this section. |
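Where an outcome is constructed from multiple items, a brief illustration can make the write-up concrete. The sketch below is purely hypothetical (the item names, file name, and missing-data rule are invented for illustration and are not part of the template); it shows one common approach: averaging scored items and setting the composite to missing when too many items were skipped.

```python
import pandas as pd

# Hypothetical example: five true/false knowledge items, already scored
# 1 (correct) / 0 (incorrect), with NaN where a respondent skipped an item.
items = ["know_q1", "know_q2", "know_q3", "know_q4", "know_q5"]
df = pd.read_csv("follow_up_survey.csv")  # invented file name

# Composite score: share of items answered correctly, set to missing
# when the respondent skipped more than one of the five items.
answered = df[items].notna().sum(axis=1)
df["knowledge_score"] = df[items].mean(axis=1).where(answered >= 4)
```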
E. Analytic sample
Purpose |
Describe the flow of participants into the analytic sample for the outcomes study. |
Instructions and reminders |
Use this section to describe how the analytic samples were created—that is, the flow of sample members from the point of consent through the follow-up assessments used in the outcomes research questions, factoring in non-consent, attrition, and item nonresponse. (Follow-up is the time period indicated in the research questions for assessing program outcomes.) This section should include the number of students for whom parental consent was obtained; the number of youth for whom baseline and follow-up data were obtained for each key measure; and information about the study sample, such as time period, total sample size, and response rates. If necessary (for example, if several sites dropped from the study, or you plan to do site-level analyses), include information on the number of sites that were recruited into the study and the number of sites that contributed baseline and follow-up data.

This section should also indicate sample sizes for your analytic sample(s). An analytic sample is the sample on which you estimate change over time on your outcomes. For example, suppose that one research question focuses on the immediate post-test and a second research question focuses on a six-month follow-up, and recent sexual activity is the key outcome on which you are assessing program outcomes. In this case, you might have two analytic samples: (1) the sample responding to the immediate post-test with non-missing data on recent sexual activity and (2) the sample responding to the six-month follow-up with non-missing data on recent sexual activity.

When creating an analytic sample for a particular time point at which multiple outcomes will be examined (with some item nonresponse across the outcomes), the RETA team recommends identifying a single, common analytic sample with non-missing data on all of the outcomes of interest (see the illustrative sketch following this section). Using a single, common analytic sample will produce an easy-to-follow and understandable presentation of the analyses across multiple outcome measures. If, however, there is substantial item nonresponse across two or more outcomes, the RETA team recommends giving each outcome its own analytic sample.

Note: If you were unable to link pre- and post-program survey data for individuals, we will have talked with you about your approach to the analysis during the analysis plan review. In brief, we recommend aggregating the data to the smallest cluster possible (such as classroom-year) and looking at differences within that cluster over time. When using cluster-level data, it is important to give the reader a sense of the proportion of the youth in the program who are in your analytic sample (for example, 90 percent of youth in the served classrooms provided pre- and post-program data). Instead of a nonresponse analysis, describe how the sample analyzed might reflect the population that received the program. For example, report the percentage of the enrolled sample who took the pre-test and the percentage who took the post-test. In addition, estimate the number of youth who completed the post-test but were not enrolled in the program at the time of the pre-test, and include an estimate of the percentage of youth who were enrolled in the classroom at both the baseline and follow-up surveys. |
Potential sources |
Evaluation analysis plan; Evaluation abstract; Final CONSORT diagram |
Non-text elements |
As support for the discussion above, include one of the sample flow tables that follow. Use either the individual-level (Table VI.2a) or the cluster-level (Table VI.2b) table, whichever is appropriate for your planned analyses. Detailed instructions for completing these tables appear below, after the illustrative sketch. |
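To make the single, common analytic sample recommendation in Section VI.E concrete, the sketch below shows the logic for grantees who can link individual-level data. All file and column names are hypothetical; this is an illustration, not a required procedure.

```python
import pandas as pd

# Hypothetical linked file: one row per consented youth; outcome columns
# are NaN when the youth missed the survey wave or skipped the item.
df = pd.read_csv("linked_pre_post_data.csv")
outcomes = ["recent_sexual_activity", "contraception_knowledge"]

# Single, common analytic sample: youth with non-missing data on ALL
# outcomes at this time point (the default recommendation above).
common = df.dropna(subset=outcomes)

# Alternative when item nonresponse differs substantially by outcome:
# one analytic sample per outcome.
per_outcome = {y: df.dropna(subset=[y]) for y in outcomes}

print(f"Consented: {len(df)}; common analytic sample: {len(common)}")
for y, sample in per_outcome.items():
    print(f"Analytic sample for {y}: n = {len(sample)}")
```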
Detailed instructions for Table VI.2a or VI.2b
Please refer to the table shells document for two versions of 508-compliant table shells for reporting sample flow for either individual-level (Table VI.2a) or cluster-level (Table VI.2b) analyses. Complete only one table for this section. If you will use individual-level data linked from pre-test to post-test, use Table VI.2a. If you cannot link pre- and post-test data for individuals, use Table VI.2b.
Instructions for completing Table VI.2a (for studies using individual-level analyses)
The purpose of this table is to present the sample sizes and response rates.
Italicized text highlights how response rates should be calculated given other information in the table.
In the column “Time period,” describe when each survey was administered relative to the end of programming. (Example text is shown in this column in the template.) You should include rows for each outcome at each survey time period that you include in your analyses.
In the column “Sample size,” enter the number of youth who consented to participate in the “Consented to participate” row. In the following rows, enter the number of youth who completed the relevant survey.
In the “Response rate” column, please perform the calculations indicated by the italicized formula. The denominator for the response rate calculations is the number entered in the “Sample size” column of the “Consented to participate” row. (A worked example follows these instructions.)
For the rows “Included in the outcomes analysis at follow-up (accounts for item nonresponse),” you may have different sample sizes for two outcomes of interest because of different rates of missing data for the outcomes. If this is the case, please add a row for each outcome in each time period, as needed. Indicate in the row label the outcome(s) to which the sample sizes apply. For example, if you have two primary outcomes (pregnancy and unsafe sex), and there were different response rates on the items needed to construct these outcomes, you should include two rows for “Included in the outcomes analysis at follow-up (accounts for item nonresponse)”—one for the analysis sample for the pregnancy outcome and one for the analysis sample for the unsafe sex outcome.
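As a purely hypothetical illustration of these calculations: if 120 youth consented to participate and 96 of them completed the six-month follow-up survey, the response rate row for that survey would show 96 ÷ 120 = 80 percent. If only 90 of those 96 respondents had non-missing data on a given outcome, the corresponding “Included in the outcomes analysis at follow-up” row would show 90 ÷ 120 = 75 percent.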
Instructions for completing Table VI.2b (for studies using cluster-level analyses)
The purpose of this table is to present the sample sizes for each survey time period.
In the table header and the row “[cluster],” replace [cluster] with the level to which you aggregated your data for analysis. For example, if you aggregated data by classroom within each program year, you would write “Classroom-year.” (An illustrative sketch of this kind of aggregation follows these instructions.)
Replace the “[Survey time period]” text in the column label with the time point of the survey.
In the row “Enrolled at time of survey,” in columns 2 and 3, enter the number of youth who were enrolled in the schools or classrooms or community-based organizations at the times the baseline and follow-up surveys (respectively) were conducted.
Note: This information will likely come from classroom rosters or other (non-survey) data sources.
In the row “Completed survey,” in columns 2 and 3, enter the number of youth who completed the baseline survey and follow-up survey, respectively.
Note: In the text discussion, please include an estimate of the number of youth who were enrolled in the clusters (according to rosters or other data) at both the baseline and follow-up survey time points.
In the row “Completed both surveys OR enrolled at both time points,” in column 3, if you have data that allow it, enter the number of youth who completed BOTH the baseline and follow-up survey. If you are unable to determine the number of youth who completed both surveys, enter the number of youth who were enrolled in the program at both time points (in other words, they did not drop out of the program or join the program after it began). Then update the row label to reflect the data you present. (Note: Column 2 displays “n.a. (not applicable)” because you will report only one number in this row, the number of youth completing both surveys, in column 3.)
In the row “Number of clusters,” in columns 2 and 3, enter the number of clusters enrolled in the study at the time of the baseline survey and follow-up survey, respectively.
In the row “Average number of youth per cluster,” in columns 2 and 3, enter the mean number of youth per cluster enrolled in the study at the time of the baseline survey and follow-up survey, respectively.
In the row “Range of number of youth per cluster,” in columns 2 and 3, enter the range of the number of youth (in other words, the largest and smallest cluster size) for clusters enrolled in the study at the time of the baseline survey and follow-up survey, respectively.
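As a purely illustrative sketch of the aggregation described above (all file and column names are invented, and the outcome is a placeholder), a cluster-level analysis file can be built by averaging survey responses within each cluster at each wave:

```python
import pandas as pd

# Hypothetical anonymous records: classroom ID, program year, survey wave
# ("pre" or "post"), and a scored outcome, with no youth identifier.
surveys = pd.read_csv("anonymous_surveys.csv")

# Aggregate to the smallest cluster possible -- here, classroom-year --
# and compute the mean outcome within each cluster at each wave.
cluster = (
    surveys
    .groupby(["classroom_id", "year", "wave"])["outcome_score"]
    .mean()
    .unstack("wave")  # one row per classroom-year; columns: "pre", "post"
)

# Within-cluster change over time, per the guidance in Section VI.E.
cluster["change"] = cluster["post"] - cluster["pre"]
print(cluster[["pre", "post", "change"]].describe())
```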
F. Sample characteristics
Purpose |
Describe the sample characteristics for the reader. |
Instructions and reminders |
Briefly describe the composition of the analytic sample(s) used to answer the research questions for the outcomes evaluation, and present a summary table for each analytic sample. As in Section VI.E, an analytic sample is the sample on which you estimate change over time on your outcomes. For example, suppose that one research question focuses on an immediate post-test assessment and another focuses on a six-month follow-up assessment, and recent sexual activity is the key outcome you are evaluating. In this case, provide tables for (1) the sample responding to the immediate post-test with non-missing data on recent sexual activity and (2) the sample responding to the six-month follow-up with non-missing data on recent sexual activity.

The tables must include baseline measures of demographic characteristics (age or grade, gender, and race/ethnicity), as well as measures of the outcomes of interest assessed at baseline. The tables should document sample sizes for each characteristic reported, and either the mean and standard deviation for continuous variables or the percentage for categorical variables. Include a narrative description of the sample characteristics for the reader (for example, the age and gender of the sample).

You may also use these tables to present a nonresponse analysis. (See “Recommended analytic approach” in Section VI.G.) To determine whether the analytic sample is representative of the baseline sample, include in the table the demographic characteristics and measures of interest both for youth who completed the baseline and follow-up surveys and for youth who completed the baseline survey but not the follow-up survey. Also report the p-value of a test of significance of the difference between the two groups’ means (a purely illustrative sketch of this check appears at the end of this section). If there are statistically significant differences between the groups, consider using nonresponse weights for your analysis. See the Pre-post Tip Sheet for guidance on conducting this analysis.

Note: If you were unable to link pre- and post-program survey data for individuals, we will have talked with you about your approach to the analysis during the analysis plan review. In brief, we recommend aggregating the data to the smallest cluster possible (such as classroom-year) and looking at differences within that cluster over time. When using cluster-level data, it is important to give the reader a sense of the proportion of the youth in the program who are in your analytic sample (for example, 90 percent of youth in the served classrooms provided pre- and post-program data). See Section VI.E for details on what to present instead of a nonresponse analysis. In addition, please present sample means of demographic and other characteristics for those who completed your baseline survey; you will present baseline means of the outcome measures analyzed in Table VII.2. |
Potential sources |
Evaluation analysis plan; Pre-post Tip Sheet |
Non-text elements |
Please refer to the table shells document, which includes 508-compliant (landscape) table shells for presenting baseline characteristics, along with instructions for completing them: use Table VI.3a if you can link individual-level pre-test and post-test data, or Table VI.3b if you cannot. Complete only one table for this section. |
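For the representativeness check described in this section, grantees who can link individual-level data might compare baseline characteristics of youth who did and did not complete the follow-up survey. The sketch below is purely illustrative: the file and column names are invented, and Welch's t-test is shown as one reasonable choice rather than the mandated method (see the Pre-post Tip Sheet for the recommended approach).

```python
import pandas as pd
from scipy import stats

# Hypothetical baseline file with a 0/1 flag for follow-up completion.
baseline = pd.read_csv("baseline_with_followup_flag.csv")
characteristics = ["age", "female", "sexually_active_baseline"]

completers = baseline[baseline["completed_followup"] == 1]
dropouts = baseline[baseline["completed_followup"] == 0]

for col in characteristics:
    a, b = completers[col].dropna(), dropouts[col].dropna()
    t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    print(f"{col}: completers = {a.mean():.2f}, "
          f"non-completers = {b.mean():.2f}, p = {p_value:.3f}")

# Statistically significant differences would suggest considering
# nonresponse weights, per the guidance above and the Pre-post Tip Sheet.
```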
G. Methods
VII. Evaluation findings
A. Implementation evaluation findings
Purpose |
Provide information on the actual experiences of youth in the program and comparison groups. |
Instructions and reminders |
This section should provide information on the program as received by the youth to whom it was offered (rather than the intended implementation, which is discussed in Section III) and the context in which it was delivered. It should also provide information on the comparison group experience. The findings should be written concisely and grounded in numeric findings. For example: “The program was implemented with fidelity and the program achieved its goals for attendance in this out-of-school program. Ninety-five percent of all program sessions were delivered, and 82 percent of the sample attended at least 75 percent of program sessions.” Avoid jargon or overly technical terms as much as possible so that a reader without a research background can understand.

Use this section to tell the story of implementation, providing both context for the outcomes and the key lessons learned from implementation. Again, be sure to discuss the implementation findings related to APS topics. We encourage the use of subheadings in the text of this section to discuss the findings related to fidelity and dosage, quality of implementation and engagement, and experiences of the comparison group and context. A table may also help organize findings.

Important: If any unplanned adaptations to implementation occurred during the program, describe these adaptations here as part of the findings of the implementation evaluation. |
Potential sources |
Evaluation analysis plan |
Non-text elements |
Please refer to the table shells document, which includes 508-compliant table shells and instructions for completing Table VII.1. |
B. Outcomes evaluation findings
Purpose |
Present the results for the outcomes research questions. |
Instructions and reminders |
Present the findings of the program in tables, then discuss the findings in the text. Be sure to answer each research question. Avoid jargon or overly technical terms as much as possible so that a reader without a research background can understand. Briefly elaborate on the findings and patterns of findings in this section, but save the broader discussion for the conclusion. (For example, you should delay tying together implementation and outcomes findings until the conclusion.) Include a summary of the similarities and differences in the outcomes across the sensitivity analyses (these sensitivity results should be included in Appendix S, Table S.1). If your analysis plan included additional analyses beyond the pre-post analyses, you can include them in an additional section, VII.C. |
Potential sources |
None |
Non-text elements |
Please refer to the table shells document, which includes a 508-compliant table shell and instructions for completing Table VII.2. |
VIII. Conclusion
A. Summary
Purpose |
Summarize the outcomes and implementation findings. |
Instructions and reminders |
At a high level, restate the main outcomes and implementation findings. To the extent that it is appropriate for your descriptive study, weave the outcomes and implementation findings together to create a coherent story about how program implementation may have influenced outcomes. For example, explain how program adherence, youth attendance, or implementation quality might have influenced outcomes. Discuss important lessons learned that explain the outcomes or that could help others replicate the program or serve the same target population. |
Potential sources |
Earlier sections of this report |
Non-text elements |
None |
B. Limitations
Purpose |
Describe any limitations of the study. |
Instructions and reminders |
Describe limitations of the study (for example, lack of a comparison group, or issues with data collection or implementation). Discuss how the limitations may influence the interpretation of your findings. For instance, if attendance was very low in one cohort of youth, that cohort had limited exposure to the program; if you found no statistically significant changes, this limitation could help explain that result. |
Potential sources |
Earlier sections of this report |
Non-text elements |
None |
C. Discussion
Purpose |
Synthesize the information presented and describe lessons learned. |
Instructions and reminders |
Present the implications of your evaluation and findings for the broader field. Discuss important lessons learned that explain the outcomes or that could help others replicate the program or serve the same target population. For example, if you provided an online intervention, discuss how technology contributed to your evaluation and how it can be used in the future to address adolescent health education. Also include any areas for future research that you have identified based on this evaluation. |
Potential sources |
Earlier sections of this report |
Non-text elements |
None |
IX. References
Purpose |
Provide a full reference for any work cited in the report. |
Instructions and reminders |
Use the American Medical Association (AMA) style guide for citing works in the report. This section should include a full reference for any work cited. |
Potential sources |
None |
Non-text elements |
None |
X. Appendices
Based on our guidance for the report sections, the report may include the following appendices (note: it may not be necessary to include appendices for all of these items):
Appendix A: the program logic model or theory of change (required; see Section III.A)
Appendix B: a table summarizing the implementation evaluation research questions, data collection, and measures (Table B.1; see Sections V.A–V.C)
A map of the geographic region of the participating tribal communities (see Section II.A)
Detailed documentation of outcome measure construction, if needed (see Section VI.D)
Appendix S: sensitivity analysis results (Table S.1; see Section VII.B); refer to the table shells document for the Table S.1 shell and instructions for completing it