Fatherhood Family-Focused, Interconnected, Resilient, and Essential (Fatherhood FIRE) Grantee Local Evaluation Plan Template

Formative Data Collections for ACF Research


OMB: 0970-0356


The purpose of this information collection is to document local evaluation plans conducted as part of the Fatherhood Family-Focused, Interconnected, Resilient, and Essential (Fatherhood FIRE) grants. Public reporting burden for this collection of information is estimated to average 8 hours per response, including the time for reviewing instructions, gathering and maintaining the data needed, and reviewing the collection of information. This is a voluntary collection of information. An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information subject to the requirements of the Paperwork Reduction Act of 1995, unless it displays a currently valid OMB control number. The OMB # is 0970-0356 and the expiration date is XX/XX/XXXX. If you have any comments on this collection of information, please contact Charles Michalopoulos at [email protected].



Fatherhood FIRE Evaluation Plan Template



INSTRUCTIONS: This template will gather important details about your evaluation plan, including information about your programs, research questions, study design, sample characteristics, data collection and analysis plans, and other relevant information. The template aligns with the requirements outlined in the Standards for Local Evaluations in the funding announcement for the Fatherhood FIRE grants.

Please complete this form with your local evaluator. Each section should be addressed to the extent possible given the status of the evaluation plan.

The Evaluation Technical Assistance Partners (ETAPs) are available to support the development of your evaluation plan and answer any questions you may have. You are strongly encouraged to reach out to your ETAP if your evaluation plans have significantly changed since submitting your application.

Your ETAP is: [Name]

[email address]

[phone number]



Grantees are expected to submit their final evaluation plan using this template by [Insert Date]. The Evaluation Technical Assistance Team will review plans in coordination with ACF.

Glossary of Terms

There are different types of program evaluations and many different ways to describe them. To facilitate communication, the terms used here are described below:

  • Implementation Evaluation: An evaluation that addresses the extent to which a program is operating as intended. It describes who participated in the program, the services they received, and how services were provided. It assesses program operations and whether the population targeted was served. Implementation evaluation helps identify why a program’s intended outcomes were or were not achieved. Implementation evaluation cannot address whether changes have occurred as a result of the program. It is also called a process evaluation or formative evaluation.



  • Outcome Evaluation: Outcome evaluation is used to measure the results of a program. It assesses whether the program led to intended changes in participants’ knowledge, skills, attitudes, behaviors, or awareness. An outcome evaluation can be descriptive or impact, depending on whether it seeks to establish that the intended changes were the result of the program.



  • Impact Evaluation: Impact evaluation is a type of outcome evaluation that attributes participant changes to the program. To do so, it includes a comparison or control group to help establish that program activities, not something else, caused the changes in the observed outcomes. In this design, two groups are included in the evaluation: (1) a treatment group that includes individuals who participate in the program; and (2) a comparison group that includes individuals who are similar to those in the treatment group, but who do not receive the same services.



  • Descriptive Evaluation: Descriptive evaluation is a type of outcome evaluation that measures change by comparing participants’ outcomes after program participation with their conditions before participation. This design involves collecting information only on individuals participating in the program. This information is collected at least twice: once before participants begin the program and again sometime after they complete the program.



Program Background

  1. Program Summary

Please provide a brief summary of your grant project including the needs to be addressed, the services provided, and the population served.

Please enter response here.



  2. Evaluation Goals

Please briefly describe key goals of your evaluation and what you hope to learn below.

Please enter response here.



  3. Evaluation Enrollment

Please provide the expected start and end dates for program and evaluation enrollment using the tables below. For impact studies, please indicate expected start and end dates for each study group.



IMPLEMENTATION EVALUATION

Please leave blank if not conducting an implementation study.


|            | Program Enrollment | Study Enrollment |
|------------|--------------------|------------------|
| Start Date |                    |                  |
| End Date   |                    |                  |
| Definition |                    |                  |





DESCRIPTIVE EVALUATION

Please leave blank if not conducting a descriptive outcome evaluation.


|            | Program Enrollment | Study Enrollment |
|------------|--------------------|------------------|
| Start Date |                    |                  |
| End Date   |                    |                  |
| Definition |                    |                  |





IMPACT EVALUATION

Please leave blank if not conducting an impact evaluation.


|            | Program Enrollment | Study Enrollment: Treatment Group | Study Enrollment: Comparison Group |
|------------|--------------------|-----------------------------------|------------------------------------|
| Start Date |                    |                                   |                                    |
| End Date   |                    |                                   |                                    |
| Definition |                    |                                   |                                    |








  4. Evaluation Timeline

Please include a timeline for key activities of the evaluation below. Examples of activities include IRB submission, staff training, waves of data collection, analysis periods, and report writing and submission.

| Evaluation Activity    | Start Date             | End Date               |
|------------------------|------------------------|------------------------|
| Please enter response. | Please enter response. | Please enter response. |
|                        |                        |                        |
|                        |                        |                        |

* You may add rows by pressing Tab in the last cell, or by right-clicking and selecting Insert Row Below.



Evaluation Plan

  1. Research Questions

    1.1 Overview of Research Questions

Please state the research question(s) that the evaluation intends to answer and, for each research question, indicate the type: implementation or outcome.

    • Implementation Questions: Identifying whether a program has been successful in attaining desired implementation goals (e.g., reaching the intended target population, enrolling the intended number of participants, delivering training and services in the manner intended)

    • Outcome Questions: Identifying whether the program is associated with intended outcomes for participants (e.g., do participants’ knowledge, attitudes, behaviors, or awareness change?)



| No. | Research Question      | Implementation or Outcome? |
|-----|------------------------|----------------------------|
| R1  | Please enter response. | Please enter response.     |
| R2  |                        |                            |
| R3  |                        |                            |
| R4  |                        |                            |



* You may add rows by pressing Tab in the last cell, or by right-clicking and selecting Insert Row Below.



    1.2 Outcome Research Questions

For each outcome research question listed above, whether a descriptive or impact design, summarize the inputs (e.g., program components, program supports, implementation features), the target population (i.e., the population for which the effect will be estimated), and the outcomes (e.g., child well-being, father-child engagement) that will be examined to answer the research question(s). Comparisons for descriptive evaluations may reflect circumstances before the grant, pre-treatment conditions, or a pre-determined benchmark from other studies of similar interventions.



Column definitions:

    • Research Question Number: should correspond to the number indicated in Table 1.1 above.

    • Intervention: program component or set of activities that the evaluation will test or examine.

    • Target Population: population for which the effect of the treatment will be estimated.

    • Comparison: what the intervention will be compared to (e.g., pre-intervention for descriptive designs).

    • Outcome: changes that are expected to occur as a result of the intervention.

    • Confirmatory or Exploratory? Confirmatory outcomes are those upon which conclusions will be drawn; exploratory outcomes may provide additional suggestive evidence.

| Research Question Number | Intervention           | Target Population      | Comparison             | Outcome                | Confirmatory or Exploratory? |
|--------------------------|------------------------|------------------------|------------------------|------------------------|------------------------------|
| R1                       | Please enter response. | Please enter response. | Please enter response. | Please enter response. | Please enter response.       |
| R2                       |                        |                        |                        |                        |                              |
| R3                       |                        |                        |                        |                        |                              |
| R4                       |                        |                        |                        |                        |                              |






* You may add rows by pressing Tab in the last cell, or by right-clicking and selecting Insert Row Below.

  2. Background

For each outcome research question listed in 1.1, whether it uses a descriptive or impact design, briefly summarize the previous literature or existing research that informs the stated research question and how the evaluation will expand the evidence base. Explain why the research questions are of specific interest to the program and/or community. Only a short summary paragraph is needed for each cell below. Additional documentation, such as a literature review, may be appended to this document.

| Research Question | Existing Research                | Contribution to the Evidence Base | Interest to the Program and/or Community |
|-------------------|----------------------------------|-----------------------------------|------------------------------------------|
| R1                | Provide short summary paragraph. | Provide short summary paragraph.  | Provide short summary paragraph.         |
| R2                |                                  |                                   |                                          |
| R3                |                                  |                                   |                                          |
| R4                |                                  |                                   |                                          |




* You may add rows by pressing Tab in the last cell, or by right-clicking and selecting Insert Row Below.



  3. Logic Model

Clearly demonstrate how the research question(s) (and the related implementation features and/or participant outcomes) link to the proposed logic model and the theory of change for the program. You may append a copy of your logic model to this document.

Please enter response.



  4. Hypotheses

For each specified research question, state the hypothesized result(s) and briefly describe why these results are anticipated.

| Research Question | Hypothesized Result              |
|-------------------|----------------------------------|
| R1                | Please enter your response here. |
| R2                |                                  |
| R3                |                                  |
| R4                |                                  |


* You may add rows by pressing Tab in the last cell, or by right-clicking and selecting Insert Row Below.

  5. Research Design

For each research question, briefly describe how the proposed research design will answer it. State whether the proposed evaluation is a descriptive or impact evaluation and justify why the proposed research design is best suited to answer the research question(s).

| Research Question | Design                           | Justification                    |
|-------------------|----------------------------------|----------------------------------|
| R1                | Please enter your response here. | Please enter your response here. |
| R2                |                                  |                                  |
| R3                |                                  |                                  |
| R4                |                                  |                                  |



* You may add rows by pressing Tab in the last cell, or by right-clicking and selecting Insert Row Below.

  6. Ongoing Grantee and Local Evaluator Coordination

Describe how the grantee and local evaluator worked together to identify the research question(s) and research design and to ensure their feasibility and relevance. Describe how the grantee and local evaluator will continue to work together throughout the evaluation to proactively address unforeseen challenges as they arise and to ensure the rigor and relevance of the evaluation and its findings. Describe how the grantee and local evaluator will coordinate dissemination efforts. Describe how these processes will occur while maintaining the independence of the evaluation.

Please enter response here.



  7. Impact Evaluations Only: Methods to Develop Study Groups

If the research design includes the comparison of two or more groups (e.g., a program group and a comparison group), please specify below how the groups will be formed, describe the programming for which each group will be eligible, and explain how the groups’ experiences will differ. The control/comparison group and the program/treatment group should be assigned using a systematic approach appropriate to the research design. Note: If the research question(s) and study design do not necessitate comparisons, this issue does not need to be addressed.

Specify how the groups will be formed.

Please enter your response here.

Please describe the comparison/control group experience, that is, the types of services available to the comparison/control group.

Please enter your response here.
How will the control/comparison group experience differ from the program group’s experience?

Please enter your response here.
Please list any other services that are similar to the services your program offers and are available in the areas where your program will operate.

Please enter your response here.
Are there plans to offer the program to individuals in the control/comparison group in the future? If so, please indicate when they will be offered to participate in the intervention.

Please enter your response here.

    7.1 Random Assignment to Develop Study Groups

If groups will be constructed by random assignment, please describe the process of random assignment and how it will be monitored to prevent crossover between study groups (e.g., individuals assigned to the comparison group who receive treatment) by addressing the questions below. (An illustrative assignment sketch follows the questions.)



Who will conduct random assignment?

Please enter your response here.

When does random assignment occur (e.g., before or after enrollment)?

Please enter your response here.

How and when are study participants informed of their assignment status?

Please enter your response here.

Will groups be stratified in any way to ensure balance between treatment and control? If yes, what characteristics or methods will be used?

Please enter your response here.

What strategies will be used to ensure there is no re-assignment or non-random assignment to the treatment group?

Please enter your response here.

What strategies will be used to minimize crossovers?

Please enter your response here.

Who will be responsible for monitoring random assignment compliance?

Please enter your response here.

What methods will be used to monitor the comparability of the study groups?

Please enter your response here.
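
As one illustration of how seeded, stratified 1:1 assignment might be implemented, a minimal Python sketch follows. It is not a required or endorsed procedure: the roster fields ("id", "site"), the stratum choice, and the seed are hypothetical placeholders for whatever your approved protocol specifies.

```python
# Illustrative only: seeded, stratified 1:1 random assignment.
# The roster fields ("id", "site") and the seed are hypothetical.
import random
from collections import defaultdict

def stratified_assignment(participants, stratum_key, seed=12345):
    """Shuffle participants within each stratum and split them evenly
    between a treatment and a comparison group."""
    rng = random.Random(seed)  # a fixed seed makes the assignment auditable
    strata = defaultdict(list)
    for person in participants:
        strata[person[stratum_key]].append(person)
    for members in strata.values():
        rng.shuffle(members)
        cutoff = len(members) // 2
        for i, person in enumerate(members):
            person["group"] = "treatment" if i < cutoff else "comparison"
    return participants

if __name__ == "__main__":
    roster = [{"id": i, "site": f"site_{i % 2}"} for i in range(10)]
    for person in stratified_assignment(roster, "site"):
        print(person["id"], person["site"], person["group"])
```

Logging each assignment once, with a timestamp, rather than allowing reassignment is one way to support the monitoring questions above.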



    7.2 Matching to Develop Study Groups

If a comparison group(s) will be constructed using an approach other than random assignment, please describe how the comparison group will be formed by addressing the questions below. (A sketch of one common matching approach follows the questions.)



How will the comparison group be identified?

Please enter your response here.

How will the program and comparison groups be formed?

Please enter your response here.

When will the program and comparison groups be formed?

Please enter your response here.

What steps will be taken to increase the likelihood that participants in the program and comparison groups are similar?

Please enter your response here.

What metrics will be used to demonstrate the comparability of the research groups?

Please enter your response here.

What methods will be used to monitor the comparability of the research groups?

Please enter your response here.
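
If matching is the chosen approach, the sketch below illustrates one common option among many: greedy 1:1 nearest-neighbor matching on an estimated propensity score, followed by a standardized-mean-difference balance check. The covariates and data are simulated placeholders, not a prescribed method.

```python
# Illustrative only: propensity-score matching on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical covariates (e.g., standardized age and prior earnings) for
# 60 program participants (1) and a candidate comparison pool of 140 (0).
X = rng.normal(size=(200, 2))
treated = np.r_[np.ones(60), np.zeros(140)].astype(int)

# 1. Estimate propensity scores with a logistic regression.
scores = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Greedy 1:1 nearest-neighbor matching without replacement.
t_idx = np.where(treated == 1)[0]
c_pool = list(np.where(treated == 0)[0])
pairs = []
for t in t_idx:
    j = min(c_pool, key=lambda c: abs(scores[c] - scores[t]))
    pairs.append((t, j))
    c_pool.remove(j)

# 3. Balance check: standardized mean difference on each covariate;
#    values below ~0.1 are a common rule-of-thumb benchmark.
m_t = np.array([t for t, _ in pairs])
m_c = np.array([c for _, c in pairs])
for k in range(X.shape[1]):
    diff = X[m_t, k].mean() - X[m_c, k].mean()
    pooled_sd = np.sqrt((X[m_t, k].var() + X[m_c, k].var()) / 2)
    print(f"covariate {k}: SMD = {diff / pooled_sd:.3f}")
```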



    7.3 Other Methods to Develop Study Groups

If another type of evaluation research design is proposed, such as a regression discontinuity, single-case, or other (non-matching) quasi-experimental design, include an adequate description of the approach.

Please enter response here.



  8. Lead Staff

Define the roles of lead staff for the evaluation from both organizations below.

| Name                             | Organization                     | Role in the Evaluation           |
|----------------------------------|----------------------------------|----------------------------------|
| Please enter your response here. | Please enter your response here. | Please enter your response here. |
|                                  |                                  |                                  |
|                                  |                                  |                                  |
* You may add rows by pressing Tab in the last cell, or by right-clicking and selecting Insert Row Below.

Articulate the experience, skills, and knowledge of the staff for the evaluation (including whether they have conducted similar studies in this field), as well as their ability to coordinate and support planning, implementation, and analysis related to a comprehensive evaluation plan.

Please enter your response here.



  9. Sample

    9.1 Target Population(s)

For each target population identified in Section 1.2, please describe the target population(s), and explicitly state whether the population(s) differs from those who will be broadly served by the grant. Describe how the target population will be identified. Explicitly state the unit of analysis (e.g., non-residential father, unmarried couple).

| Description of Target Population | How is the population different from those who will be broadly served by the grant? | How will the target population be identified? | Unit of Analysis                 |
|----------------------------------|--------------------------------------------------------------------------------------|------------------------------------------------|----------------------------------|
| Please enter your response here. | Please enter your response here.                                                     | Please enter your response here.               | Please enter your response here. |
|                                  |                                                                                      |                                                |                                  |

    9.2 Impact Evaluation Only: Sample Size

If an impact evaluation is proposed, state the intended sample size (overall and by year), estimated attrition, and the anticipated size of the analytic sample (for both program/treatment and control/comparison groups). If the estimated analytic sample is expected to vary by outcome measure (e.g., outcomes measured using administrative records vs. survey data), you may copy the table below and label accordingly.

Estimated Sample Size = number of individuals randomly assigned. Estimated Size of Analytic Sample = number of individuals at analysis.

| Year   | Estimated Sample Size: Treatment | Estimated Sample Size: Comparison | Analytic Sample: Treatment | Analytic Sample: Comparison |
|--------|----------------------------------|-----------------------------------|----------------------------|-----------------------------|
| Year 1 | Enter number                     | Enter number                      | Enter number               | Enter number                |
| Year 2 |                                  |                                   |                            |                             |
| Year 3 |                                  |                                   |                            |                             |
| Year 4 |                                  |                                   |                            |                             |
| Year 5 |                                  |                                   |                            |                             |
| TOTAL  |                                  |                                   |                            |                             |


    9.3 RCTs Only: Power Analysis

For each confirmatory outcome, please provide power analyses demonstrating that the proposed sample sizes will be able to detect the expected effect sizes for the outcomes targeted. Refer to previous studies of similar interventions for estimates of the required sample to inform power analyses. Note: If an impact evaluation is not proposed, this issue does not need to be addressed. You may use the table below to report the assumptions used in your power calculations, as well as the resulting minimum detectable impact for your confirmatory outcomes, or provide them in the space below. (An illustrative calculation follows the table.)


|                                                                                          | Outcome 1 | Outcome 2 | Outcome 3 |
|------------------------------------------------------------------------------------------|-----------|-----------|-----------|
| Outcome name                                                                             |           |           |           |
| Continuous or binary?                                                                    |           |           |           |
| Level of significance (e.g., 0.05)                                                       |           |           |           |
| Number of sides of test (one- or two-tailed)                                             |           |           |           |
| Power (e.g., 80 percent)                                                                 |           |           |           |
| Total number of individuals in analytic sample                                           |           |           |           |
| If binary, enter mean of outcome variable                                                |           |           |           |
| If continuous, enter the standard deviation of the outcome (>0)                          |           |           |           |
| Proportion of individual-level (or within-group) variance explained by covariates        |           |           |           |
| For cluster RCTs: intraclass correlation coefficient                                     |           |           |           |
| For cluster RCTs: proportion of group-level variance of outcome explained by covariates  |           |           |           |
| Minimum detectable impact                                                                |           |           |           |
| Minimum detectable effect size                                                           |           |           |           |


If you did not report your assumptions using the table above, please enter them here.
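
As one illustration of how the table entries combine, the sketch below computes the minimum detectable effect size (MDES) for an individual-level, two-tailed RCT using the standard normal-approximation formula MDES = (z(1-alpha/2) + z(power)) * sqrt((1 - R^2) / (P(1-P)N)). The sample size, R-squared, and treatment share are hypothetical placeholders, not recommendations.

```python
# Illustrative only: MDES for an individual-level, two-tailed RCT.
from scipy.stats import norm

def mdes(n_total, p_treated=0.5, alpha=0.05, power=0.80, r2=0.20):
    """Effect size (in standard deviation units) detectable with the
    given analytic sample size and assumptions."""
    multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return multiplier * ((1 - r2) / (p_treated * (1 - p_treated) * n_total)) ** 0.5

# Example: 400 analyzed individuals, half treated, covariates explaining
# 20 percent of outcome variance.
print(f"MDES = {mdes(400):.3f} SD")  # roughly 0.25 SD
```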



    9.4 Methods to Promote Sufficient Program Participation

Please describe methods to promote sufficient program participation in the table below.

What methods will you use to ensure that a sufficient sample is recruited, enrolls, and participates in the program?

Please enter your response here.

Who will be responsible for recruiting the evaluation sample?

Please enter your response here.

Please describe any incentives to be offered for program participation, program completion, data collection, or participation in the evaluation.

Please enter your response here.



  10. Data Collection

    10.1 Constructs and Measures

Clearly articulate the constructs of interest, measures to evaluate those constructs, and specific data collection instruments. Provide any information on the reliability and validity of the data collection instruments. For standardized instruments, you may provide the citation for the instrument.

| Construct | Measure | Instrument | Reliability and Validity (if standardized instrument, you may provide a citation for the instrument) |
|-----------|---------|------------|--------------------------------------------------------------------------------------------------------|
|           |         |            |                                                                                                          |
|           |         |            |                                                                                                          |
|           |         |            |                                                                                                          |
* You may add rows by pressing Tab in the last cell, or by right-clicking and selecting Insert Row Below.



    10.2 Consent

Describe how and when program applicants will be informed of the study and will have the option of agreeing (i.e., consenting to) or declining to participate in the study.

Please enter your response here.



    10.3 Methods of Data Collection

If the evaluation will collect multiple waves of data, describe the timing of these waves below. When describing follow-up periods, specify whether the follow-up period will be post-baseline, post-random assignment, or post-program completion.

| Wave of Data Collection (e.g., baseline, short-term follow-up, long-term follow-up) | Timing of Data Collection |
|--------------------------------------------------------------------------------------|---------------------------|
|                                                                                        |                           |
|                                                                                        |                           |
|                                                                                        |                           |

For each measure, describe how data will be collected, detailing which data collection measures will be collected, by whom, and at what point in the programming or at what follow-up point.

| Measure | Timing of Data Collection (baseline, wave of data collection) | Method of Data Collection | Who Is Responsible for Data Collection? | Impact Evaluations Only: Will Methods or Collection Procedures Differ by Study Group? | Administrative Data Only: Will data access require a data-sharing agreement? |
|---------|----------------------------------------------------------------|---------------------------|------------------------------------------|----------------------------------------------------------------------------------------|-------------------------------------------------------------------------------|
|         |                                                                |                           |                                          |                                                                                        |                                                                               |
|         |                                                                |                           |                                          |                                                                                        |                                                                               |
|         |                                                                |                           |                                          |                                                                                        |                                                                               |

* You may add rows by pressing Tab in the last cell, or by right-clicking and selecting Insert Row Below.



    10.4 Ensuring and Monitoring Data Collection

Describe plans for training data collectors and for updating or retraining data collectors about procedures. Detail plans to regularly review data that have been submitted and to assess and swiftly address problems.

Please enter your response here.



    10.5 Tracking Participants and Reducing Attrition

If participants will complete post-program and/or follow-up surveys, please provide estimated response rates for each wave of data collection listed in 10.3. Please describe plans for tracking participants and reducing attrition below. Note: If no post-program or follow-up surveys are proposed, this issue does not need to be addressed.



For each wave of data collection listed in 10.3, what is your estimated response rate?

| Wave     | Estimated Response Rate |
|----------|-------------------------|
| Baseline |                         |
| Wave 1   |                         |
| Wave 2   |                         |
| Wave 3   |                         |
| Wave 4   |                         |
What steps will be taken to track participants to conduct follow-up surveys with as many participants as possible?

Please enter your response here.

Please describe a plan for monitoring both overall and differential attrition (across treatment and comparison groups).

Please enter your response here.
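
A minimal sketch of the kind of attrition check that might be run at each wave follows. The counts are hypothetical, and the overall-versus-differential framing mirrors common practice (e.g., WWC-style attrition standards) rather than a requirement of this template.

```python
# Illustrative only: overall and differential attrition with made-up counts.
def attrition(randomized, responded):
    return 1 - responded / randomized

# Hypothetical follow-up counts by study group.
t_rand, t_resp = 250, 200   # treatment: randomized, completed follow-up
c_rand, c_resp = 250, 185   # comparison: randomized, completed follow-up

overall = attrition(t_rand + c_rand, t_resp + c_resp)
differential = abs(attrition(t_rand, t_resp) - attrition(c_rand, c_resp))
print(f"overall attrition:      {overall:.1%}")
print(f"differential attrition: {differential:.1%}")
```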



  11. Privacy

Specify how the methods for data collection, storage, and transfer (e.g., to the federal government) will ensure privacy for study participants.

Please enter your response here.



  12. IRB/Protection of Human Subjects

Please describe the process for protection of human subjects, and IRB review and approval of the proposed program and evaluation plans. Name the specific IRB to which you expect to apply.

Please enter your response here.



  13. Data

    13.1 Databases

For each database into which data will be entered (i.e., nFORM and/or other databases), please describe the data to be entered, including both the performance measure data you plan to use in your local evaluation and any additional local evaluation data. Describe the process for data entry (i.e., who will enter the data into the database).

| Database Name | Data Entered | Process for Data Entry |
|---------------|--------------|------------------------|
|               |              |                        |
|               |              |                        |
* You may add rows by pressing Tab in the last cell, or by right-clicking and selecting Insert Row Below.

    13.2 Data Reporting and Transfer

For each database provided in the table above, please indicate the ability to export individual-level reports to an Excel or comma-delimited format and whether identifying information is available for linking to data from other sources.

| Database Name | Ability to Export Individual Reports? | What identifying information is available to facilitate linking to other data sources? |
|---------------|----------------------------------------|------------------------------------------------------------------------------------------|
|               |                                        |                                                                                          |
|               |                                        |                                                                                          |

* You may add rows by pressing Tab in the last cell, or by right-clicking and selecting Insert Row Below.
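
To illustrate what an individual-level export with linkable identifiers might look like, the sketch below merges two hypothetical extracts on a shared participant ID and writes a comma-delimited file. The column names are assumptions for illustration, not nFORM's actual schema.

```python
# Illustrative only: the column names are hypothetical, not nFORM's schema.
import pandas as pd

program_data = pd.DataFrame(
    {"participant_id": [101, 102, 103], "sessions_attended": [8, 5, 10]}
)
survey_data = pd.DataFrame(
    {"participant_id": [101, 103], "followup_score": [3.5, 4.1]}
)

# Link the two sources on the shared identifier, then export.
linked = program_data.merge(survey_data, on="participant_id", how="left")
linked.to_csv("linked_extract.csv", index=False)  # comma-delimited export
print(linked)
```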

    13.3 Current Security and Confidentiality Standards

For each database provided in Section 13.1, please indicate whether the database can: encrypt data in transit (for example, access through an HTTPS connection); encrypt data at rest (that is, when not in transit); maintain a data backup and recovery plan; require all users to have logins and passwords to access the data they are authorized to view; and run current anti-virus software to detect and address malware, such as viruses and worms.

| Database Name | Ability to encrypt data during transit? | Ability to encrypt at rest? | Data Backup and Recovery Plan? | Require all users to have logins and passwords? | Current Anti-Virus Software Installed? |
|---------------|------------------------------------------|------------------------------|---------------------------------|--------------------------------------------------|-----------------------------------------|
|               |                                          |                              |                                 |                                                  |                                         |
|               |                                          |                              |                                 |                                                  |                                         |

* You may add rows by pressing Tab in the last cell, or by right-clicking and selecting Insert Row Below.
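
As a small illustration of encryption at rest (one of the checklist items above), the sketch below uses the cryptography package's Fernet recipe to encrypt a record before storage and decrypt it on read. A real system would manage the key in a secure key store rather than in the script; the record contents here are hypothetical.

```python
# Illustrative only: symmetric encryption of a record at rest.
# A production system would keep the key in a secure key store.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # store securely, never alongside the data
cipher = Fernet(key)

record = b"participant_id=101,dob=1990-01-01"
token = cipher.encrypt(record)  # ciphertext safe to write to disk

assert cipher.decrypt(token) == record
print("stored ciphertext:", token[:24], "...")
```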

  14. Data Analysis

Briefly describe the planned approach for data analysis. If an impact analysis is proposed, name the key dependent and independent variables, and describe any methods to minimize Type I error (i.e., finding positive impacts by chance) such as limiting the number of impacts to be analyzed and/or multiple comparison correction. Describe proposed approach(es) for addressing missing data.

Please enter your response here.
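
To make the Type I error discussion concrete, here is a hedged sketch of one common approach: regress each confirmatory outcome on a treatment indicator plus a baseline covariate, then apply a Benjamini-Hochberg correction across outcomes. The data, outcome names, and effect sizes are simulated placeholders, not a prescribed analysis.

```python
# Illustrative only: impact estimates with a multiple-comparison correction.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n = 400
treat = rng.integers(0, 2, n)              # 0/1 treatment indicator
baseline = rng.normal(size=n)              # e.g., baseline outcome measure
X = sm.add_constant(np.column_stack([treat, baseline]))

# Three hypothetical confirmatory outcomes, one with a true effect.
outcomes = {
    "father_child_engagement": 0.3 * treat + 0.5 * baseline + rng.normal(size=n),
    "parenting_knowledge":     0.0 * treat + 0.5 * baseline + rng.normal(size=n),
    "economic_stability":      0.0 * treat + 0.5 * baseline + rng.normal(size=n),
}

pvals = []
for name, y in outcomes.items():
    fit = sm.OLS(y, X).fit()               # params[1] is the treatment effect
    print(f"{name}: impact = {fit.params[1]:+.3f}, p = {fit.pvalues[1]:.3f}")
    pvals.append(fit.pvalues[1])

# Benjamini-Hochberg false-discovery-rate correction across outcomes.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("adjusted p-values:", np.round(p_adj, 3))
```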



  15. Data Archiving and Transfer

Briefly describe the planned approach to data archiving and transfer by addressing the questions below.

What procedures and parameters are established for all aspects of data/information collection necessary to support archiving data collected as part of the evaluation? Examples include informed consent, data maintenance, and de-identification procedures.

Please enter response here.

Describe how the collection methods for all types of proposed data collection will support the archiving and transfer of each type.

Please enter response here.

How will consent form language represent plans to store data for sharing and/or transferring to other researchers?

Please enter response here.

Describe methods of data storage that will support archiving and/or transferring data.

Please enter response here.

Explain how data and analysis file construction and documentation will support data archiving and/or transfer.

Please enter response here.



  16. Dissemination

Briefly describe the planned dissemination efforts associated with the local evaluation, including any dissemination that will occur while the evaluation is ongoing (rather than after the evaluation is completed).

Please enter your response here.



Please describe any plans for study registration with an appropriate registry (e.g., clinicaltrials.gov, socialscienceregistry.org, osf.io, etc.).

Please enter your response here.


