National Evaluation of i3 (NEi3) Data Collection Survey

National Evaluation of the Investing in Innovation (i3) Program

Appendix_A_NEi3_Data_Collection_Instrument_updated 10-25-2017

OMB: 1850-0913







Appendix A:


i3 Technical Assistance and Evaluation Project


Data Collection Instrument



This work is supported by the U.S. Department of Education under contract numbers ED-IES-10-C-0064, ED-IES-13-C-0005, and/or ED-IES-14-C-0007, with Abt Associates as the prime contractor.



Cover Letter to Evaluators


Welcome [User_Name]!


The i3 Technical Assistance and Evaluation Project is being conducted by Abt Associates Inc. on behalf of the U.S. Department of Education (ED) Institute of Education Sciences (IES). A primary goal of the project is to assess the strength of evidence generated by independent local evaluations of i3 grants and to provide summaries of evaluation findings. The purpose of this data collection is to gather information from evaluators like you regarding the findings of your i3 independent evaluation. We look forward to working with you on this important endeavor!
 

Abt Associates and its subcontractors follow the confidentiality and data protection requirements of IES (The Education Sciences Reform Act of 2002, Title I, Part E, Section 183), which require that all collection, maintenance, use and wide dissemination of data conform to the requirements of the Privacy Act of 1974 (5 U.S.C. 552a), the Family Educational Rights and Privacy Act of 1974 (20 U.S.C. 1232g), and the Protection of Pupil Rights Amendment (20 U.S.C. 1232h). The study team will not be collecting any individually identifiable information. All of the data requested from independent evaluators will be in the form of aggregated reports of the methods, measures, plans, and results in their independent evaluations.


Responses to this data collection will be used to disseminate information about your study findings. Individual i3 grants will be identified in this study. The characteristics, results, and findings reported by independent evaluators, as well as assessments of the quality of the independent evaluations and i3 projects, may potentially be reported. This study cannot, however, associate responses with a specific school or individual participating in i3 independent evaluations, since we are not collecting that information.


Grantees and their independent evaluators are required to cooperate with the requirements of the i3 Technical Assistance and Evaluation Project. Participation in data collection is required. Your response is critical for producing valid and reliable data. You may skip any questions you do not wish to answer; however, we hope that you answer as many questions as you can. Your answers to questions will not affect your grant—now or in the future. Participation in this data collection will not pose any risks to you as a respondent. If you have any questions about your rights as a research participant, you can contact Teresa Doksum, the Chairperson of Abt’s IRB, at 617-349-2896 or by email at [email protected].


This survey is authorized by law: the American Recovery and Reinvestment Act of 2009 (ARRA), Title XIV, Section 14007 (Public Law (P.L.) 111-5).


We estimate that the survey will take you approximately 15 hours to complete.


OMB Clearance Number: 1850-0913 Expiration Date: xx/xx/xxxx


According to the Paperwork Reduction Act of 1995, no persons are required to respond to a collection of information unless it displays a valid OMB control number. The valid OMB control number for this information collection is 1850-0913. The time required to complete this information collection is estimated to average approximately 15 hours per respondent, including the time to review instructions, gather the data needed, and complete and review the information collected. If you have any comments concerning the accuracy of the time estimate(s) or suggestions for improving this form, please write to: U.S. Department of Education, Washington, DC 20202. If you have comments or concerns regarding the status of your individual submission of this form, write directly to: U.S. Department of Education, Institute of Education Sciences, 555 New Jersey Avenue, N.W., Washington, DC 20208.

Overview of the i3 Data Collection Instrument



The data collection instrument is organized into three main sections:

  1. Background Section

  2. Fidelity of Implementation Section

  3. Impact Study Section


The Background Section asks you about the parties involved in developing, implementing, and evaluating the i3-funded intervention on which you worked. It also asks some general questions about the grant, intervention, and evaluation.

The Fidelity of Implementation Section asks you to describe the key components of the i3-funded intervention you evaluated, the measures constructed to evaluate fidelity of implementation for those components, and the findings from your implementation study.

The Impact Study Section asks you to describe each of your evaluation’s confirmatory contrasts. Specifically, the survey will ask you to provide information about the results from the analysis for each contrast, including the estimated effects of the intervention and the size of the analysis sample. In addition to your confirmatory contrasts, you are welcome to provide the details of any additional exploratory contrasts from this study. This section is divided into the following nine subsections:

  • Contrast Description: Key Features

  • Contrast Description: Design

  • Contrast Description: Outcome Measurement

  • Contrast Description: Baseline Measures

  • Confounds

  • Outcomes

  • Assessing Attrition

  • Baseline Measurement

  • Reported Findings

 

Getting Help

For help selecting appropriate data for entry and/or for further guidance, please call 301-347-5638.




Background Section


Background Information

The following section asks you about the parties involved in developing, implementing, and evaluating the i3-funded intervention on which you worked. It also asks some general questions about the grant, intervention, and evaluation.

  1. List each of the impact studies in the table below. Separate studies are distinguished by different designs rather than different outcomes (e.g., a school-level RCT of impacts on elementary students, a school-level QED of impacts on high school students). Include an introductory sentence that summarizes the number of studies being conducted. If there is more than one impact study, cut and paste Section 2.1 and complete the information for each impact study separately.

Impact studies described in this document [examples]:



Title


Notes


Impact Study 1: School QED

e.g., Impact of the intervention on student and teacher outcomes (Schools assigned to conditions non-randomly)


Impact Study 2: Student RCT

e.g., Impact of the intervention on student outcomes (Students randomly assigned to conditions)


Impact Study 3




  2. List the name and address of the organization conducting the independent evaluation. Also include the name, phone number, and email address of the primary contact person(s) responsible for carrying out the evaluation. ____________________________________________________________________________



  3. What is the name of the i3-funded intervention? ____________________________________________________________________________



  4. We would like to learn more about the cost of the independent evaluation you conducted. What is the total dollar amount of your evaluation subcontract? ____________________________________

  5. Do you know of any additional subcontracts your i3 grantee has with other entities related to evaluation (e.g., for data collection, survey development, or consultation)?

Yes

No

  6. If you answered “Yes” above, please tell us more about the additional subcontract(s).

    1. What entity/entities is/are conducting the additional evaluation work? ______

    2. What tasks are associated with this/these additional subcontract(s)? ________

    3. What is the total dollar amount of this/these additional subcontract(s)? ______

  7. Please review the following summary description of the i3-funded intervention, and indicate its accuracy by selecting the appropriate box below. If the description requires any revision, please insert a corrected description in the area provided. Please note that this description is intended for inclusion in the NEi3 Findings Report; however, the NEi3 may condense, or seek clarifications for, proposed revisions.

Description: This section will be pre-populated by the NEi3 team with a summary of the intervention in sufficient detail to help readers understand what makes this intervention similar to or different from other interventions. The level of detail will be similar to what would be provided in an introductory section of a typical impact evaluation report. It will include the length of the intervention and the dosage, as well as information about the content, delivery, and implementation of the intervention.



The intervention description is correct. (If this box is checked, the NEi3 Description that appears above will be used in the NEi3 Findings Report.)

The intervention description requires some revision. (If this box is checked, please enter a revision below.)



Evaluator Revision:


Please insert a revised description here if needed.









  8. Please indicate whether each of these statements is true and, if not, why this is the case:

Neither the grantee nor the intervention developer analyzed outcomes data for any confirmatory contrast.

True

False

If False, please explain.





Neither the grantee nor the intervention developer reported findings (i.e., impact estimates and standard errors) to the i3 Technical Assistance and Evaluation project for any confirmatory contrast.

True

False

If False, please explain.



  9. Please review the following description of the population served by this grant’s i3-funded intervention and indicate its accuracy by selecting the appropriate box below. If the description requires any revision, please insert a corrected description in the area provided.

Description: This section will be prepopulated with a description of the population served by this grant’s i3-funded intervention(s). The description will include the target age/grade, or teacher type (for example, fifth grade teacher, elementary school counselor, high school science teacher, etc.), of the intervention. It will also include the location of the intervention and a description of the type and number of schools served.


The population description is correct. (If this box is checked, the NEi3 Description that appears above will be used in the NEi3 Findings Report.)

The population description requires some revision. (If this box is checked, please enter a revision below.)



Evaluator Revision:

Please insert a revised description here if needed.




  10. Sample Identification, Selection, and Assignment: Provide a narrative description that includes the following details:

If “district” is part of the sample identification and selection process, describe the identification/selection of study districts, including:

  • Describe how the districts were chosen to be in the study (e.g., convenience sample, sampled from a defined population).

  • Articulate all inclusion and exclusion criteria.

  • Describe the time frame of recruitment.

  • If there are districts to be served by the i3 grant that are not included in the impact evaluation, describe the difference between the districts that are served by the grant and the districts that are included in the evaluation, and how the latter were chosen.

  • If assignment to treatment versus control/comparison conditions occurs at the district level, provide a detailed description of the assignment procedure.

  • For RCTs:

    • Describe the randomization procedure. For example, if districts were randomized to T and C conditions within blocks (or matching strata), describe the blocks and procedures.

    • State the number of districts that were assigned to T and C conditions.

    • Give the date when districts learned their randomization status.

  • For QEDs:

    • Describe the procedures for identifying the T and C groups. For example, for matched designs, describe the dimensions on which districts were matched and the procedures for matching.


If “school” is part of the sample identification and selection process, describe the identification/selection of study schools, including:

  • Describe how the schools were chosen to be in the study (e.g., convenience sample, sampled from a defined population).

  • Articulate all inclusion and exclusion criteria.

  • Describe the time frame of recruitment.

  • If there are schools to be served by the i3 grant that are not included in the impact evaluation, describe the difference between the schools that are served by the grant and the schools that are included in the evaluation, and how the latter were chosen.

  • If assignment to treatment versus control/comparison conditions occurs at the school level, provide a detailed description of the assignment procedure.

  • For RCTs:

    • Describe the randomization procedure. For example, if schools were randomized to T and C conditions within blocks (or matching strata), describe the blocks and procedures.

    • State the number of schools that were assigned to T and C conditions.

    • Give the date when schools learned their randomization status.

  • For QEDs:

    • Describe the procedures for identifying the T and C groups. For example, for matched designs, describe the dimensions on which schools were matched and the procedures for matching.


If “teacher” is part of the sample identification and selection process, describe the identification/selection of study teachers, including:

  • Describe how the teachers were chosen to be in the study (e.g., all teachers in the school or in relevant grades are included, only first-year teachers are included, or teachers are selected as a convenience sample or sampled from a defined population).

  • Articulate all inclusion and exclusion criteria.

  • Describe the time frame of recruitment.

  • If there are teachers to be served by the i3 grant that are not included in the impact evaluation, describe the difference between the teachers that are served by the grant and the teachers that are included in the evaluation, and how the latter were chosen.

  • If assignment to treatment versus control/comparison conditions occurs at the teacher level, provide a detailed description of the assignment procedure.

  • For RCTs:

    • Describe the randomization procedure. For example, if teachers were randomized to T and C conditions within blocks (or matching strata), describe the blocks and procedures.

    • State the number of teachers that were assigned to T and C conditions.

    • Give the date when teachers learned their randomization status.

  • For QEDs:

    • Describe the procedures for identifying the T and C groups. For example, for matched designs, describe the dimensions on which teachers were matched and the procedures for matching.

  • If the study will estimate impacts on teacher outcomes, and if a cluster randomized design was used with randomization at a higher level than teachers (e.g., schools or districts were randomized to treatment or control conditions), clearly describe whether the teachers in the analytic sample:

    • Could only have joined the randomized clusters (e.g., schools or districts) prior to the point when the clusters learned their randomization status; or

    • Could have joined the randomized clusters after the point when clusters learned their randomization status. If so, state the date that the clusters learned their randomization status, and the latest possible date that teachers could have joined the clusters.


If “class” is part of the sample identification and selection process, describe the identification/selection of study classes, including:

  • Describe how the classes were chosen to be in the study (e.g., all classes in the school or in relevant grades are included, or classes are selected as a convenience sample or sampled from a defined population).

  • Articulate all inclusion and exclusion criteria.

  • If there are classes to be served by the i3 grant that are not included in the impact evaluation, describe the difference between the classes that are served by the grant and the classes that are included in the evaluation, and how the latter were chosen.

  • If assignment to treatment versus control/comparison conditions occurs at the class level, provide a detailed description of the assignment procedure.

  • For RCTs:

    • Describe the randomization procedure. For example, if classes were randomized to T and C conditions within blocks (or matching strata), describe the blocks and procedures.

    • State the number of classes that were assigned to T and C conditions.

    • Give the date when classes learned their randomization status.

  • For QEDs:

    • Describe the procedures for identifying the T and C groups. For example, for matched designs, describe the dimensions on which classes were matched and the procedures for matching.


If “student” is part of the sample identification and selection process, describe the identification/selection of study students, including:

  • Describe how the students were chosen to be in the study (e.g., all students in the school or in relevant grades are included, only English Language Learner students are included, or students are selected as a convenience sample or sampled from a defined population).

  • Articulate all inclusion and exclusion criteria.

  • Describe the time frame of recruitment.

  • If there are students to be served by the i3 grant that are not included in the impact evaluation, describe the difference between the students that are served by the grant and the students that are included in the evaluation, and how the latter were chosen.

  • If assignment to treatment versus control/comparison conditions occurs at the student level, provide a detailed description of the assignment procedure.

  • For RCTs:

    • Describe the randomization procedure. For example, if students were randomized to T and C conditions within blocks (or matching strata; e.g., within schools, within grade levels within schools, or within cohorts within grade levels within schools), describe the blocks and procedures. (A minimal illustration of blocked assignment follows this list.)

    • State the number of students that were assigned to T and C conditions.

    • Give the date when students learned their randomization status.

  • For QEDs:

    • Describe the procedures for identifying the T and C groups. For example, for matched designs, describe the dimensions on which students were matched and the procedures for matching.

  • If the study will estimate impacts on student outcomes, and if a cluster randomized design was used with randomization at a higher level than students (e.g., districts, schools, teachers, or classes were randomized to treatment or control conditions), clearly describe whether the students in the analytic sample:

    • Could only have joined the randomized clusters prior to the point when the clusters learned their randomization status (if so, state the number of students that were enrolled in the clusters prior to randomization); or

    • Could have joined the randomized clusters after the point when clusters learned their randomization status.

      • If the analytic sample of students included only students who were identified as belonging to the clusters soon after randomization (i.e., sample includes early joiners), state the date that the clusters learned their randomization status, and the latest possible date that students were identified as belonging to the clusters, and state the numbers of students that were enrolled in the clusters at that time.

      • If the analytic sample potentially included late joiners, state the date that the clusters learned their randomization status, and the latest possible date that students in the analytic sample could have joined the clusters.
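Where the narrative describes a blocked (stratified) randomization procedure, the following minimal Python sketch may help make the requested detail concrete. It is purely illustrative: the district and school names, block structure, and seed are invented, and it is not the assignment procedure of any actual i3 evaluation.

```python
# Hypothetical illustration of blocked random assignment: schools are
# randomized to treatment (T) and control (C) within blocks (here, districts).
# All names below are invented for this sketch.
import random

random.seed(20171025)  # fixed seed so the example assignment is reproducible

blocks = {
    "District A": ["School 1", "School 2", "School 3", "School 4"],
    "District B": ["School 5", "School 6"],
}

assignment = {}
for block, schools in blocks.items():
    shuffled = random.sample(schools, k=len(schools))  # random order within block
    half = len(shuffled) // 2
    for school in shuffled[:half]:
        assignment[school] = "T"
    for school in shuffled[half:]:
        assignment[school] = "C"

for school, condition in sorted(assignment.items()):
    print(f"{school}: {condition}")
```

A narrative submitted in this section would state the analogous facts in prose: the blocking variable (here, district), the number of units assigned to T and C within each block, and the date the units learned their status.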














Data Collection Instrument: Fidelity Study Section Instructions


Please complete a row for each key component of the i3 intervention that is included in your logic model and fidelity measurement system. The following instructions define the data that should be entered in each of the columns.


Column

Column Header

Instructions for completing corresponding cells

A

Key Component

Please enter the name of the key component.


Key components are the activities and inputs that are under the direct control of the individual or organization responsible for program implementation (e.g., program developer, grant recipient), and are considered by the developer to be essential for implementing the intervention.  Key components may include financial resources, professional development for teachers, curricular materials, or technology products.

B

Implementation Measure

Using your study’s logic model, design summary, and fidelity matrix as a guide (whenever applicable), please provide the total number of measurable indicators representing each component. Depending upon how you defined each key component, indicators might encompass a combination of activities at the teacher, classroom, school, or sample level.


Example: Fidelity of implementation for the key component “professional development” is based on four professional development indicators: a one-week summer institute, five webinars, two coaching sessions, and ongoing PD.

C

Year 1: Component Level Threshold for Fidelity of Implementation for the Unit that is the Basis for the Sample-Level Representation of Fidelity (i.e. definition of implementation with fidelity)

Please provide the definition of implementation with fidelity for the unit that is the basis for the sample-level representation of fidelity in Year 1 for each key component; i.e., the component level threshold that delineates implementation with fidelity from implementation without fidelity for a particular sample unit.


Example of unit-level: If the sample-level fidelity threshold is defined as the percent of schools that have received an adequate score on a key component, the unit that is the basis for the sample-level representation of fidelity is “school”. The definition of implementation with fidelity could be: “a single school will be considered to have implemented the key component with fidelity if at least 90% of teachers at the school: attended the one-week summer institute, attended four of five webinars, attended two of two coaching sessions, and received ongoing PD as needed (as self-reported in teacher survey)."


D

Year 1: Sample Size (of Units Measured at the Sample Level)

Please enter the sample size (N) of units associated with the sample-level threshold and results. For example, if the sample-level threshold is “80% or more of schools are high implementers,” please enter the number of schools.

E

Year 1: Component Level Threshold for Fidelity of Implementation at the Sample Level (i.e. definition of implementation with fidelity)

Please provide the definition of implementation with fidelity at the sample-level in Year 1 for each key component; i.e., the component level threshold that delineates implementation with fidelity from implementation without fidelity across sample units.


Examples of sample-level: “At least 80% of schools implement with fidelity” or “60% or more of districts implement with fidelity”.


The sample-level threshold (e.g., across schools) should be a summary measure of fidelity score ranges and thresholds established at the level of the unit that is the basis for the sample-level representation of fidelity (e.g., teacher, classroom, school).

F

Year 1: Component Level Fidelity Score for the Entire Sample

Please report the component level fidelity scores achieved in Year 1 of the intervention for each key component. (An illustrative computation sketch follows this table.)


Examples: “85% of schools implemented with fidelity” or “70% of districts implemented with fidelity”

G

Year 1: Component Level Implementation with Fidelity at the Sample Level? (Yes/No)

Please indicate (Yes/No) whether the Year 1 component level fidelity score reported in column F met the sample-level threshold defined in column E.

H

Year 2: Component Level Threshold for Fidelity of Implementation for the Unit that is the Basis for the Sample-Level Representation of Fidelity (i.e. definition of implementation with fidelity)

Same instructions as in column C, replacing year one with year two.

I

Year 2: Sample Size (of Units Measured at the Sample Level)

Same instructions as in column D.

J

Year 2: Component Level Threshold for Fidelity of Implementation at the Sample Level (i.e. definition of implementation with fidelity)

Same instructions as in column E, replacing year one with year two.

K

Year 2: Component Level Fidelity Score for the Entire Sample

Same instructions as in column F, replacing year one with year two.

L

Year 2: Component Level Implementation with Fidelity at the Sample Level? (Yes/No)

Same instructions as in column G.

M

Year 3: Component Level Threshold for Fidelity of Implementation for the Unit that is the Basis for the Sample-Level Representation of Fidelity (i.e. definition of implementation with fidelity)

Same instructions as in column C, replacing year one with year three.

N

Year 3: Sample Size (of Units Measured at the Sample Level)

Same instructions as in column D.

O

Year 3: Component Level Threshold for Fidelity of Implementation at the Sample Level (i.e. definition of implementation with fidelity)

Same instructions as in column E, replacing year one with year three.

P

Year 3: Component Level Fidelity Score for the Entire Sample

Same instructions as in column F, replacing year one with year three.

Q

Year 3: Component Level Implementation with Fidelity at the Sample Level? (Yes/No)

Same instructions as in column G.

R

Year 4: Component Level Threshold for Fidelity of Implementation for the Unit that is the Basis for the Sample-Level Representation of Fidelity (i.e. definition of implementation with fidelity)

Same instructions as in column C, replacing year one with year four.

S

Year 4: Sample Size (of Units Measured at the Sample Level)

Same instructions as in column D.



T

Year 4: Component Level Threshold for Fidelity of Implementation at the Sample Level (i.e. definition of implementation with fidelity)

Same instructions as in column E, replacing year one with year four.

U

Year 4: Component Level Fidelity Score for the Entire Sample

Same instructions as in column F, replacing year one with year four.

V

Year 4: Component Level Implementation with Fidelity at the Sample Level? (Yes/No)

Same instructions as in column G.

W

Year 5: Component Level Threshold for Fidelity of Implementation for the Unit that is the Basis for the Sample-Level Representation of Fidelity (i.e. definition of implementation with fidelity)

Same instructions as in column C, replacing year one with year five.

X

Year 5: Sample Size (of Units Measured at the Sample Level)

Same instructions as in column D.

Y

Year 5: Component Level Threshold for Fidelity of Implementation at the Sample Level (i.e. definition of implementation with fidelity)

Same instructions as in column E, replacing year one with year five.

Z

Year 5: Component Level Fidelity Score for the Entire Sample

Same instructions as in column F, replacing year one with year five.

AA

Year 5: Component Level Implementation with Fidelity at the Sample Level? (Yes/No)

Same instructions as in column G.



Scale-Up Key Components (Scale-Up grants only)

AB

Key component / activities related to scale-up

Please enter each key component necessary to support scale-up efforts separately.

Scale-Up Goals (Scale-Up grants only)

AC

Goal

Please enter scale-up goals separately. Scale-up goals should be quantified and measurable, e.g., 500 additional teachers trained.

AD

Results at grant end

Please enter the results for each scale-up goal achieved by the end of the grant period; e.g., if 523 additional teachers were trained, enter “523” in the field.

AE

Goal met

Please enter a final determination (yes/no) indicating whether each scale-up goal was met by grant end.

AF

Results Year 1

Optional: if you choose, please enter scale-up results for each goal at the end of each or any year(s) of implementation. For example, if scale-up goals were measured in years 3 and 4, enter results for each goal in those fields and enter N/A in (or leave blank) the fields associated with years 1, 2, and 5.


AG

Results Year 2

AH

Results Year 3

AI

Results Year 4

AJ

Results Year 5
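To make the roll-up from unit-level indicators (column C) to a sample-level score and determination (columns E, F, and G) concrete, here is a minimal hypothetical sketch in Python. The school names, teacher indicator data, and thresholds are invented for illustration; they mirror the professional-development example used above.

```python
# Hypothetical sketch of the two-stage fidelity computation described in
# columns C, E, F, and G: teacher-level indicators are rolled up to a
# school-level fidelity determination, and school-level determinations are
# rolled up to a sample-level fidelity score. All data are illustrative.

# Each school maps to teacher records; True means that teacher met all four
# professional-development indicators (institute, webinars, coaching, ongoing PD).
schools = {
    "School 1": [True, True, True, True, False],
    "School 2": [True, True, True, True, True],
    "School 3": [True, False, False, True, True],
}

UNIT_THRESHOLD = 0.90    # column C: >=90% of a school's teachers meet the indicators
SAMPLE_THRESHOLD = 0.80  # column E: >=80% of schools implement with fidelity

# Stage 1: school-level determination (the unit that is the basis for the
# sample-level representation of fidelity).
school_fidelity = {
    name: (sum(teachers) / len(teachers)) >= UNIT_THRESHOLD
    for name, teachers in schools.items()
}

# Stage 2: sample-level fidelity score (column F) and Yes/No determination (column G).
score = sum(school_fidelity.values()) / len(school_fidelity)
print(f"{score:.0%} of schools implemented with fidelity")
print("Sample-level fidelity met?", "Yes" if score >= SAMPLE_THRESHOLD else "No")
```

On the invented data, this sketch reports “33% of schools implemented with fidelity” and a sample-level determination of “No” — the pair of values requested in columns F and G.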




Data Collection Instrument: Impact Study Section Instructions


Please complete a row for each contrast measured by your impact study. The following instructions define the data that should be entered in each of the columns.


Column

Column Header

Instructions for completing corresponding cells (for each contrast)

A

Contrast Name

Please enter the names of each of the evaluation’s confirmatory and exploratory contrasts. This field does not affect the assessment or reporting of the contrast; the contrast name simply facilitates communication between the evaluator and the AR Team, and facilitates referencing contrasts between the data-entry fields of this survey. The contrast name should uniquely identify the contrast and differentiate among similar contrasts: e.g., “Math Achievement-confirmatory” and “Math Achievement-cohort 1” might be the names of two similar contrasts.

B

Confirmatory or Exploratory?

Indicate whether this contrast is confirmatory or exploratory. Please note that any pre-specified contrasts that were registered as confirmatory will be pre-populated as such and cannot be changed to be exploratory. All other contrasts will be listed as exploratory.

C

Contrast Status

For each contrast, please select from among the following options:

- Contrast Confirmed if the effectiveness findings from this contrast are reported in this survey

- Not Reported if the effectiveness findings from this contrast are not reported in this survey and will not be reported in the future


D

Expected Reporting Date

If effectiveness findings for this contrast are not yet available, please provide the date when you plan to report effectiveness findings.


(Not applicable for contrasts for which data is provided in this survey or for contrasts for which findings will never be reported.)



E

Contrast Notes

This field is not required, but is available to provide contextual information or notes about each specific contrast as needed.



F

Outcome Domain

For confirmatory contrasts, this field is populated with the registered domain name of your pre-specified confirmatory contrast. As the domain is a defining feature of a contrast, it cannot be edited by the evaluator. Contact the helpdesk to correct typos or other minor errors.


For exploratory contrasts, please enter the name of the domain(s) measured. If the exploratory contrast measures the same construct as one of the confirmatory contrasts, please use the same domain name.



G

Intervention Condition

For confirmatory contrasts, this field is populated with the registered description of the intervention tested by the pre-specified confirmatory contrast. As the intervention condition is a defining feature of a contrast, it cannot be edited.


For exploratory contrasts, please name the intervention(s) experienced by the treatment group in the study. Do not use acronyms. If the condition is the same as the one tested by a confirmatory contrast, please use the same name.


If the contrast involves comparing one treatment to another, please identify one treatment as the intervention condition and the other as the comparison condition for the purposes of responding to this survey. Use this identification throughout the survey when asked about the treatment/intervention group and the comparison group.



H

Comparison Condition

For confirmatory contrasts, this field is populated with the registered description of the counterfactual condition in the pre-specified confirmatory contrast. As the comparison condition is a defining feature of a contrast, it cannot be edited.


For exploratory contrasts, please name the intervention(s) experienced by the comparison group in the study. This may be the name of another intervention or Business as Usual, if no specific intervention. Do not use acronyms. If the condition is the same as the one tested by a confirmatory contrast, please use the same name.


If the contrast involves comparing one treatment to another, please identify one treatment as the intervention condition and the other as the comparison condition for the purposes of responding to this survey. Use this identification throughout the survey when asked about the treatment/intervention group and the comparison group.

I

Educational Level

For confirmatory contrasts, this field is populated with the educational level that was registered in the pre-specified confirmatory contrast. As the educational level is a defining feature of a contrast, it cannot be edited.


For exploratory contrasts, please report the educational level (e.g. elementary, middle or high school) of the students that experience the intervention. For teacher outcomes, report the educational level of the students in the teachers’ classrooms.

J

Design

Please select one of the following designs: RCT, RDD, QED, Comparison Group Design Without a Pretest, ITS, Pre-post.


Definitions:


Randomized controlled trial (RCT). Under this design, an evaluation uses a random process to assign study units to an intervention group and a control group, and measures the effect of the intervention by comparing the outcomes of units assigned to the two groups.


Regression discontinuity design (RDD). Under this design, an evaluation uses a continuous “scoring” rule to assign study units to conditions. Units with scores above a pre-set cutoff value are assigned to the treatment group and units with scores below the cutoff value are assigned to the comparison group, or vice versa. The evaluation then compares the outcomes of the two groups close to the cutoff to measure the effect of the intervention.


Quasi-Experimental Design (QED). A QED compares outcomes for intervention participants with outcomes for a comparison group chosen through methods other than randomization (RCT) or a scoring rule with a well-defined threshold (RDD) and includes one or more pre-intervention measures. Comparative Interrupted Time Series designs are treated as QEDs by the NEi3.


Comparison Group Design Without a Pretest. A comparison group design without a pretest compares outcomes for intervention participants with outcomes for a comparison group chosen through methods other than randomization (RCT) or a scoring rule with a well-defined threshold (RDD), and does not include pre-intervention observations.


Interrupted Time Series Design (ITS). An ITS design uses observations at multiple points in time both before and after the intervention was implemented, and compares the outcomes after the intervention with the trend established prior to the intervention. These designs do not use a comparison group.


Pre-post Design. A pre-post design compares outcomes for intervention participants after receiving the intervention to their outcomes before receiving the intervention. These designs do not use a comparison group.

K

Unit of Assignment

For each contrast, please report the unit of assignment or the level at which assignment to the intervention and comparison conditions occur (e.g., student, teacher, classroom, or school).

L

Intervention Group [alternately, “Treatment Group”]: Description

Please describe the intervention group.


Example 1 (Candy Valley): “2012 first-graders in all Candy Valley District Elementary Schools reading below grade level who do not have an IEP and were randomly assigned to the intervention.”


Example 2 (Apple Valley): “This contrast pools data across all treated cohorts and schools. All students in treated grades in Fuji Elementary (K-2), Granny Smith Elementary (K-1) and Red Delicious Elementary (K-2) who were present for data collection in Spring 2014 are included in the treatment group.”

M

Age/Grade during the Intervention

Please provide the age/grade of the students in the intervention group for the contrast during their exposure to the intervention.


Example 1 (Candy Valley): “The intervention was fielded to first grade students.”


Example 2 (Apple Valley): “Both Fuji and Red Delicious Elementary Schools introduced the intervention in the 2011-2012 school year to all kindergarten students, offered the intervention to all kindergarten and first graders in 2012-2013 and to kindergarten through second graders in 2013-2014. Granny Smith Elementary introduced the intervention in the 2012-2013 school year to all kindergarten students and offered the intervention to all kindergarten and first graders in 2013-2014.”

N

Length of Student Exposure to Intervention

Please describe how long (no. of semesters/years) the students in the intervention group were engaged in the intervention at the time the outcome was measured for this contrast.


Example 1 (Candy Valley): “Students received one year of treatment”


Example 2 (Apple Valley): “When outcomes were measured in Spring 2014, both Fuji and Red Delicious Elementary Schools students had the following pattern of exposure to treatment: second grade students had three years of treatment, first grade students had two years of treatment and kindergarten students had one year of treatment. When outcomes were measured in Spring 2014, Granny Smith Elementary first graders had two years of treatment and kindergarten students had one year of treatment.”

O

Comparison Group: Description

Please describe which schools, teachers and/or students will be included in the comparison group for this analysis.


Example 1 (Candy Valley): “2012 first-graders in all Candy Valley District Elementary Schools reading below grade level who do not have an IEP and were randomly assigned to the control group.”

Example 2 (Apple Valley): “This contrast pools data across all cohorts and schools matched to treated cohorts and schools. K-2 students from Honey Crisp Elementary (comparison for Fuji Elementary), K-1 students from Ginger Gold Elementary (comparison for Granny Smith Elementary), and K-2 students from Jonagold Elementary (comparison for Red Delicious Elementary) who were present for data collection in Spring 2014 are included in the comparison group.”

P

Outcome or Instrument Name (full names, not acronyms)

Please report the full name of the outcome measure, including the name of the subtest or subscale if applicable. If the instrument for the outcome measure is not used in one of the confirmatory contrasts, please describe (1) what the instrument is designed to measure and (2) how the outcome measure is constructed.

Q

Construction of Outcome Measure (binary, ordinal, or continuous)

Please indicate whether the outcome measure is binary (dichotomous), ordinal (ordered, e.g., “failing, provisionally passing, proficient, advanced”), or continuous.

R

Unit of Observation for Outcome Measure

Please report the unit for which outcomes are observed in your data (e.g. student, teacher, classroom, school, district).


Example: If your data are school-level averages of student test scores, enter “School”.

S

Timing of Outcome Measure (year, Fall/Spring) for the Intervention Group

Please report when during the calendar year the outcome was measured.

T

Timing of Outcome Measure (age/grade) for the Intervention Group

Please report the age or grade level for which the outcome was measured.

U

Timing of Outcome Measure (year, Fall/Spring) for the Comparison Group

Please report when during the calendar year the outcome was measured.

V

Timing of Outcome Measure (age/grade) for the Comparison Group

Please report the age or grade level for which the outcome was measured.

W

Key Baseline Measures

Please report the baseline variable(s) on which you are establishing baseline equivalence.


(Field is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

X

Construction of Baseline Measure (binary, ordinal, or continuous)

Please indicate whether the baseline measure is binary (dichotomous), ordinal (ordered, e.g., “failing, provisionally passing, proficient, advanced”), or continuous.


(Field is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

Y

Unit of Observation for Baseline Measure

Please report the unit for which baseline measures are observed in your data (e.g. student, teacher, classroom, school, district).


Example: If your data are school-level averages of student test scores, enter “School”.


(Field is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

Z

Timing of Baseline Measure (year, Fall/Spring) for the Intervention Group

Please report when during the calendar year the data you will use to test baseline equivalence was measured.


(Field is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

AA

Timing of Baseline Measure (age/grade) for the Intervention Group

Please report the age or grade level for which the data you will use to test baseline equivalence was measured.


(Field is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

AB

Timing of Baseline Measure (year, Fall/Spring) for the Comparison Group

Please report when during the calendar year the data you will use to test baseline equivalence was measured.


(Field is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

AC

Timing of Baseline Measure (age/grade) for the Comparison Group

Please report the age or grade level for which the data you will use to test baseline equivalence was measured.


(Field is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

AD

Intervention Condition: Other Activities

To your knowledge, were any other interventions (other than your project’s i3-funded intervention) targeting the outcome domain listed in [C] offered to the intervention group but not the comparison group? If so, please describe the other intervention(s).


AE

Fundamental Confound at the District Level: Intervention Condition

Does the intervention condition include schools/teachers/students from a single school district which does not also include comparison group members?

AF

Fundamental Confound at the School Level: Intervention Condition

Does the intervention condition include teachers/students from a single school which does not also include comparison group members?

AG

Comparison Condition: Other Activities

To your knowledge, were any other interventions (other than your project’s i3-funded intervention) targeting the outcome domain listed in [C] offered to the comparison group but not the intervention group? If so, please describe the other intervention(s).


AH

Fundamental Confound at the District Level: Comparison Condition

Does the comparison condition include schools/teachers/students from a single school district which does not also include intervention group members?

AI

Fundamental Confound at the School Level: Comparison Condition

Does the comparison condition include teachers/students from a single school which does not also include intervention group members?

AJ

Outcome or Instrument Name (full names, not acronyms)

This field displays the outcome measure you entered previously. This field is not editable. If you wish to edit this entry, go to the Contrast Description: Outcome Measurement tab.

AK

Who Provided the Outcome Data?


For each contrast, please report the name of the entity that provided the outcome data.

AL

Value and Type of Reliability Measure of Instrument for Outcome Data

Please provide a narrative description of the outcome and discuss the reliability and validity of the measure, including the value and type of reliability measure. For example, you may report an internal consistency, test-retest reliability, or inter-rater reliability measure.


For outcomes for which reliability statistics cannot be obtained, please describe why reliability statistics are not available.


Example 1: “The Orange Gulch District writing assessment was scored by trained reviewers according to a holistic rubric. A random subset of essays was double scored and demonstrated an inter-rater reliability of 89%.”


Example 2: “Attendance is the proportion of school days for which an individual student was present for at least one course at Orange Gulch High School. Reliability statistics cannot be calculated for attendance. However, it is a face valid measure of the pre-specified domain: High School Attendance.”
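As a purely hypothetical illustration of the statistic mentioned in Example 1, the sketch below computes simple percent agreement between two raters on a double-scored subset of essays; the rubric scores are invented. An actual evaluation might instead report Cohen’s kappa, Cronbach’s alpha, or a test-retest correlation, as appropriate to the instrument.

```python
# Hypothetical sketch: percent-agreement inter-rater reliability for a
# double-scored writing assessment. The rubric scores below are invented.
rater_a = [3, 4, 2, 4, 1, 3, 3, 2]  # first rater's scores on 8 essays
rater_b = [3, 4, 2, 3, 1, 3, 2, 2]  # second rater's scores on the same essays

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
print(f"Inter-rater agreement: {agreements / len(rater_a):.0%}")  # prints 75%
```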

AM

Consistency of Collection

Please describe any differences in the process for collecting data on the outcome measure between the groups.


To be consistently collected, an outcome measure must be collected using the same rules or procedures in both groups. For example, data would not be consistently collected if intervention students take the post-test in January, but comparison students take the test in March – two months later.

AN

Treatment of Missing Data

Describe how the study addressed missing values for the outcome measure.

AO

Unit of Assignment

This field displays the unit of assignment you entered previously. This field is not editable. If you wish to edit this entry, go to the Contrast Description: Design tab.

(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

AP

Unit of Observation for Outcome Measure

This field displays the unit of observation for outcome entered previously. This field is not editable. If you wish to edit this entry, go to the Contrast Description: Outcome Measurement tab.

(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

AQ

Cluster Randomized Design?

Select “Yes” if the contrast is from a cluster randomized design.


Select “No” otherwise.


A cluster randomized design is one in which clusters (of individuals) were assigned by a process that was functionally random. These clusters may be associated with, for instance, particular teachers, classrooms, schools, or districts.


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

AR

Analytic Sample Includes Joiners?

Select “not applicable” if your design is not a cluster randomized design.


Select “yes” if your analytic sample includes individuals who were not present in the cluster at the time of random assignment.


Select “no” if your analytic sample is restricted to individuals who were present at the time of random assignment.


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

AS

Description of Joiners

Provide a narrative description of the joiners in your analytic sample. Include information on when they joined the sample relative to a) the point of random assignment and b) the point at which clusters were notified of their assignment status.


Example 1: The analysis sample includes all students in study schools in Year 3. Some students may have been transferred into study schools after the point of random assignment (Year 1) and may not have two full years of exposure.


Example 2: The student sample comprises all students in study teachers’ classrooms in Year 2 of the study. Because teacher rosters were not set at the point of random assignment (Year 1), all students in Year 2 are considered joiners.


(Not applicable if your sample does not include joiners.)

AT

Intervention Group: Units of Assignment Randomized

Enter the number of clusters randomly assigned to the intervention group.


Complete this field if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of analysis.


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

AU

Intervention Group: Reasons for Loss of Units of Assignment

List reasons for sample loss separated by a carriage return.


Complete this field if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of observation.


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

AV

Intervention Group: Units of Assignment Lost (for each reason)

For each reason for sample loss, list the number of units lost.


Complete this field if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of observation.


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

AW

Intervention Group: Units of Assignment in Analytic Sample

Enter the number of clusters for the intervention group for the analytic sample.


Complete this field if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of analysis


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

AX

Intervention Group: Units of Observation Randomized or in Randomized Cluster

Enter the sample size for the intervention group at the time of random assignment


Complete this field if the contrast is from:

  1. a cluster randomized design AND the unit of assignment differs from the unit of analysis AND the contrast does not include joiners

  2. any randomized design where the unit of analysis is the same as the unit of assignment


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

AY

Intervention Group: Units of Observation in Remaining Clusters

Enter the sample size for the intervention group at the time of random assignment in clusters that appear in the analytic sample. This excludes units of observation for which the entire cluster left the sample.


E.g. For designs where students are the unit of observation and schools are the unit of assignment, enter the number of eligible students at the time of random assignment in schools that did not attrit


This field is not applicable if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of observation AND the contrast includes joiners


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

AZ

Intervention Group: Reasons for Loss of Units of Observation

List reasons for sample loss separated by a carriage return.


This field is not applicable if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of observation AND the contrast includes joiners


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

BA

Intervention Group: Units of Observation Lost (for each reason)

For each reason for sample loss, list the number of units lost.


This field is not applicable if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of observation AND the contrast includes joiners


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

BB

Intervention Group: Units of Observation in Analytic Sample

Enter the sample size for the intervention group for the analytic sample


This field is not applicable if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of observation AND the contrast includes joiners

(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

BC

Comparison Group: Units of Assignment Randomized

Enter the number of clusters randomly assigned to the comparison group.


Complete this field if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of analysis


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

BD

Comparison Group: Reasons for Loss of Units of Assignment

List reasons for sample loss separated by a carriage return.


Complete this field if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of observation.


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

BE

Comparison Group: Units of Assignment Lost (for each reason)

For each reason for sample loss, list the number of units lost.


Complete this field if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of observation.


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

BF

Comparison Group: Units of Assignment in Analytic Sample

Enter the number of clusters for the comparison group for the analytic sample.


Complete this field if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of analysis


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

BG

Comparison Group: Units of Observation Randomized or in Randomized Cluster

Enter the sample size for the comparison group at the time of random assignment.


E.g. For designs where students are the unit of observation and schools are the unit of assignment, enter the number of eligible students in all schools at the time of random assignment


This field is not applicable if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of observation AND the contrast includes joiners


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

BH

Comparison Group: Units of Observation in Remaining Clusters

Enter the sample size for the comparison group at the time of random assignment in clusters that appear in the analytic sample. This excludes units of observation for which the entire cluster left the sample.


E.g. For designs where students are the unit of observation and schools are the unit of assignment, enter the number of eligible students at the time of random assignment in schools that did not attrit


This field is not applicable if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of observation AND the contrast includes joiners


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

BI

Comparison Group: Reasons for Loss of Units of Observation

List reasons for sample loss separated by a carriage return.


This field is not applicable if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of observation AND the contrast includes joiners


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

BJ

Comparison Group: Units of Observation Lost (for each reason)

For each reason for sample loss, list the number of units lost.


This field is not applicable if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of observation AND the contrast includes joiners


(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

BK

Comparison Group: Units of Observation in Analytic Sample

Enter the sample size for the comparison group for the analytic sample.


This field is not applicable if the contrast is from a cluster randomized design AND the unit of assignment differs from the unit of observation AND the contrast includes joiners.

(Section is not applicable to the following designs: QEDs, comparison group designs without a pretest, ITS designs and pre-post designs.)

Baseline Measurement

BL

Key Baseline Measures

This field displays the key baseline measure(s) you entered previously. This field is not editable. If you wish to edit this entry, go to the Contrast Description: Baseline Measures tab.


(Section is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

BM

Construction of Baseline Measure (binary, ordinal, or continuous)

This field displays the description of the construction of the baseline measure you entered previously. This field is not editable. If you wish to edit this entry, go to the Contrast Description: Baseline Measures tab.

(Section is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

BN

Unit of Observation for Baseline Measure

This field displays the level of observation for the baseline measure you entered previously. This field is not editable. If you wish to edit this entry, go to the Contrast Description: Baseline Measures tab.

(Section is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

BO

Treatment of Missing Data for Baseline Balance Testing

Describe how the study addressed missing values for the baseline measure for baseline balance testing.

BP

Narrative Description of Sample for Baseline Balance Testing

Describe the eligibility criteria for inclusion in the sample for baseline balance testing. If balance is established using cluster-level averages (e.g. school-level average student test score), describe which individual units are included in the average.

BQ

Type of Information Used to Assess Baseline Equivalence

Select one option that best represents the information generated by your baseline balance testing:

  • Unadjusted Mean and Standard Deviation

  • T-statistic

  • Dichotomous

  • T/C Difference


This field determines which of the following fields you need to complete. Fields that do not need to be completed are populated with “Not applicable.”


(Section is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

BR

Narrative Description of Analysis for Baseline Balance Testing

Provide a brief narrative description of the method used to assess baseline balance.


Example 1: Baseline summary statistics for both groups in the analytic sample were calculated. The reported data are unadjusted means and standard deviations.


Example 2: Differences in baseline means were calculated using a statistical model with the same structural components as the statistical model used to estimate impacts on the outcome variable for the same contrast. This is a two-level model with students nested in schools. The pretest variable is the dependent variable. The only covariates included in the model are an intercept term and a school-level treatment indicator.
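
For illustration only, and not part of the form: a minimal sketch of the Example 2 balance analysis, assuming Python with pandas and statsmodels and hypothetical column names (pretest, treatment, school_id). It is one possible implementation, not a prescribed one.

  import numpy as np
  import pandas as pd
  import statsmodels.formula.api as smf

  # Hypothetical data: 20 schools of 30 students, schools randomized.
  rng = np.random.default_rng(0)
  school_id = np.repeat(np.arange(20), 30)
  treatment = (school_id % 2).astype(int)  # school-level 0/1 indicator
  pretest = rng.normal(size=school_id.size) + rng.normal(scale=0.3, size=20)[school_id]
  df = pd.DataFrame({"school_id": school_id, "treatment": treatment, "pretest": pretest})

  # Two-level model (students nested in schools, random school intercept);
  # the pretest is the dependent variable, and the only covariates are an
  # intercept and the school-level treatment indicator.
  result = smf.mixedlm("pretest ~ treatment", data=df, groups=df["school_id"]).fit()
  print(result.params["treatment"], result.bse["treatment"])  # baseline difference, SE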

BS

Baseline Balance Testing Sample Size: Intervention Group

Enter the sample size for the intervention group for baseline balance testing.


(Section is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

BT

Mean of Baseline Measure: Intervention Group


Enter the pre-intervention mean for the intervention group. For binary measures, enter a decimal value.


(Section is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

BU

Standard Deviation of Baseline Measure: Intervention Group

Enter the pre-intervention standard deviation for the intervention group.


(Section is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

BV

Baseline Balance Testing Sample Size: Comparison Group

Enter the sample size for the comparison group for baseline balance testing.


(Section is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

BW

Mean of Baseline Measure: Comparison Group


Enter the pre-intervention mean for the comparison group. For binary measures, enter a decimal value.


(Section is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)

BX

Standard Deviation of Baseline Measure: Comparison Group

Enter the pre-intervention standard deviation for the comparison group.


(Section is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)


BY

Baseline Intervention-Comparison Difference

Enter the difference between the intervention and comparison group means.

BZ

T-Statistic for Baseline Balance Test

Enter the t-statistic for the pre-intervention difference in means between the treatment and comparison groups.


(Section is not applicable to the following designs: comparison group designs without a pretest, ITS designs and pre-post designs.)
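
For illustration only: if your study reported a t-statistic computed from group summary statistics, one common formulation is the unequal-variance (Welch) two-sample form sketched below in Python; confirm that it matches the computation actually used in your study.

  import math

  def baseline_t(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
      # Welch two-sample t-statistic from summary statistics.
      return (mean_t - mean_c) / math.sqrt(sd_t**2 / n_t + sd_c**2 / n_c)

  print(baseline_t(0.12, 1.00, 450, 0.05, 0.98, 460))  # made-up values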

Reported Findings

CA

Type of Findings Reported

Select one of the following types of data used to report findings:

  • Unadjusted means

  • Adjusted means, e.g. ANOVA

  • T-test

  • Coefficient from a linear regression model, e.g. OLS, HLM, etc.

  • Coefficient from a non-linear regression model, e.g. logit or probit


This field determines which of the following fields you need to complete. Fields that do not need to be completed are populated with “Not applicable.”

CB

Narrative Description of Analysis for Impact Evaluation

Provide a brief narrative description of the method used to estimate impact. Include information that describes:

- the nested or longitudinal structure of the model

- any adjustments for clustering of standard errors

- whether the outcome is a binary or categorical measure and, if so, what kind of estimation is performed

- whether the pretest variable is included and at what level

- the level at which the treatment variable enters the model


Example 1: Impact estimates come from a two-level regression model with students nested in schools. The pretest variable is included as a student-level covariate. Treatment enters the model at the school level.


Example 2: We use logistic regression to estimate the impact of the intervention on attendance, a binary measure defined for each student. Estimates are log-odds ratios. Standard errors are clustered at the school level.
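
For illustration only, and not part of the form: a minimal sketch of the Example 2 analysis, assuming Python with pandas and statsmodels and hypothetical variable names (attend, treat, school).

  import numpy as np
  import pandas as pd
  import statsmodels.formula.api as smf

  # Hypothetical data: 40 schools of 25 students, binary attendance outcome.
  rng = np.random.default_rng(1)
  school = np.repeat(np.arange(40), 25)
  treat = (school % 2).astype(int)               # school-level 0/1 indicator
  attend = rng.binomial(1, 0.60 + 0.05 * treat)  # binary outcome per student
  df = pd.DataFrame({"school": school, "treat": treat, "attend": attend})

  # Logistic regression; standard errors clustered at the school level.
  result = smf.logit("attend ~ treat", data=df).fit(
      cov_type="cluster", cov_kwds={"groups": df["school"]}
  )
  print(result.params["treat"], result.bse["treat"])  # log-odds ratio, clustered SE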

CC

Intervention Group: Sample Size

Please report the size of the intervention group sample for the analysis.

CD

Intervention Group: Number of Clusters in Analytic Sample

Please report the number of clusters assigned to the intervention group that appear in the analysis sample.


Although your analysis may account for clustering at multiple levels, this field only asks about clusters at the level of assignment: e.g. number of schools, if schools were assigned.

CE

Comparison Group: Sample Size

Please report the size of the comparison group sample for the analysis.

CF

Comparison Group: Number of Clusters in Analytic Sample

Please report the number of clusters assigned to the comparison group that appear in the analysis sample.


Although your analysis may account for clustering at multiple levels, this field only asks about clusters at the level of assignment: e.g. number of schools, if schools were assigned.

CG

Intraclass Correlation Coefficient

Please report the intraclass correlation coefficient describing the similarity of units at the level of observation within clusters at the level of assignment.


Although your analysis may account for clustering at multiple levels, this field only asks about clusters at the level of assignment, e.g. the correlation between outcomes of students within the same school, if schools were assigned.
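
For illustration only: under a standard two-level random-intercept model, the intraclass correlation coefficient is the between-cluster variance divided by the total variance, ICC = tau^2 / (tau^2 + sigma^2). The sketch below estimates it in Python with statsmodels, using hypothetical column names (outcome, school).

  import numpy as np
  import pandas as pd
  import statsmodels.formula.api as smf

  # Hypothetical data: 30 schools of 20 students.
  rng = np.random.default_rng(2)
  school = np.repeat(np.arange(30), 20)
  outcome = rng.normal(size=school.size) + rng.normal(scale=0.5, size=30)[school]
  df = pd.DataFrame({"school": school, "outcome": outcome})

  # Unconditional model: intercept only, random intercept for school.
  result = smf.mixedlm("outcome ~ 1", data=df, groups=df["school"]).fit()
  tau2 = float(result.cov_re.iloc[0, 0])  # between-school variance
  sigma2 = float(result.scale)            # within-school (residual) variance
  print(tau2 / (tau2 + sigma2))           # estimated ICC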

CH

Unadjusted Outcome Mean for the Intervention Group

Please provide the unadjusted mean of the outcome measure in the intervention group.



CI

Adjusted Outcome Mean for the Intervention Group

Please provide the adjusted mean of the outcome measure in the intervention group.



CJ

Unadjusted Standard Deviation in the Outcome Measure of the Intervention Group

Please provide the unadjusted standard deviation of the outcome measure in the intervention group.

CK

Unadjusted Outcome Mean for the Comparison Group

Please provide the unadjusted mean of the outcome measure in the comparison group.



CL

Adjusted Outcome Mean for the Comparison Group

Please provide the adjusted mean of the outcome measure in the comparison group.



CM

Unadjusted Standard Deviation in the Outcome Measure of the Comparison Group

Please provide the unadjusted standard deviation of the outcome measure in the comparison group.

CN

Impact Estimate (e.g., coefficient from OLS or HLM)

Please report the regression coefficient that estimates the effect of the intervention. If your impact estimate is the sum of multiple regression coefficients, please enter that sum here.



CO

Standard Error of Impact Estimate

Please insert the estimated standard error of the regression coefficient here. If your impact estimate is the sum of multiple coefficients, please enter the standard error of that sum.
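
For illustration only: the standard error of a sum of coefficients comes from the coefficient covariance matrix, since Var(b1 + b2) = Var(b1) + Var(b2) + 2*Cov(b1, b2); it is not recoverable from the two standard errors alone. A minimal Python sketch with a made-up 2x2 covariance matrix:

  import numpy as np

  # Made-up covariance matrix of the two summed coefficients (b1, b2).
  # With a fitted statsmodels result `res`, the analogous block would be
  # res.cov_params().loc[names, names] for the summed coefficient names.
  V = np.array([[0.0025, -0.0008],
                [-0.0008, 0.0031]])
  se_sum = float(np.sqrt(V.sum()))  # sqrt of Var(b1)+Var(b2)+2*Cov(b1,b2)
  print(se_sum)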



CP

Two-tailed p-value

Please insert the p-value for the impact estimate, unadjusted for multiple comparisons. If your impact estimate is the sum of multiple coefficients, enter the p-value associated with that sum.



CQ

Degrees of Freedom

Please enter the degrees of freedom of your statistical analysis.


