
Cascades Job Corps College and Career Academy Pilot Program Evaluation

OMB: 1290-0023


Part B: Collection of Information Employing Statistical Methods

The U.S. Department of Labor (DOL) contracted with Abt Associates (in partnership with MDRC) to conduct an evaluation of the Cascades Job Corps College and Career Academy Pilot program. As required under the Paperwork Reduction Act, DOL is seeking approval from the Office of Management and Budget (OMB) for data collection instruments associated with the evaluation. The Job Corps program is the Federal government’s largest investment in residential job training for disadvantaged youth. The pilot program will test innovative and promising models that could improve outcomes for students, particularly youth ages 16 to 21. The evaluation, funded by DOL, will use multiple approaches, including an impact study and implementation analysis of the Cascades Job Corps College and Career Academy (CCCA) pilot program.

OMB approved initial data collection activities for the CCCA Evaluation under OMB control number 1290-0012 (approved on February 6, 2017). Those approved data collection activities included the Baseline Information Form to support the impact study, tracking data to support the planned 18-month follow-up survey, and stakeholder interview and student focus groups to support the implementation study.

This supporting statement is the second OMB submission regarding data collection activities for the evaluation of the CCCA pilot. DOL is seeking clearance in this submission for the 18-month follow-up survey. The survey will provide critical information on the experiences and the educational and economic outcomes of both treatment and control group members. Specific outcomes to be considered include the receipt of training and related supports, receipt of credentials, employment, socio-emotional skills, engagement in risky behaviors, receipt of public benefits, and opinions on the education and training services received.

B.1 Respondent Universe and Sampling Methods

The respondent universe for this study consists of all individuals who apply to the CCCA Job Corps program, are deemed eligible for the program, and consent to participate in the study. There is no probability sampling or subsampling for inclusion in the study. After an initial pilot period (May 5, 2017 – September 5, 2017) during which randomization used a 3:1 treatment:control ratio,1 eligible applicants who consent to participate are randomized into the treatment or control group at a 1:1 ratio. The evaluators estimate that 2,200 individuals will participate in the study.

The follow-up survey will be conducted approximately 18 months after random assignment and will be administered to up to 1,000 sample members. Recruitment and random assignment for the CCCA Evaluation began on February 7, 2017, when the CCCA program launched and after receipt of OMB approval for the baseline data collection activities. Because this is a pilot program, the evaluation team and DOL built in a pilot period to allow the program to become fully operational, and thus the evaluator is not planning to administer the follow-up survey to early-enrolled individuals. (The evaluator will obtain and analyze administrative data on employment and earnings (via NDNH data) and program participation (via program data) for all individuals randomized; this will allow comparisons, albeit with low power, of pilot/post-pilot differences in outcomes.) All individuals enrolled during the survey cohort phase will be asked to complete the 18-month follow-up survey. If the site enrolls more individuals than projected, those randomly assigned after the 1,000 participant target is met will not be included in the survey. Exhibit B.1 shows expected sample sizes and predicted response rates for each of the study groups.

Exhibit B.1 Projected Sample Sizes and Response Rates for 18-month Follow-up Survey

Study Group                     In Study     Follow-Up Survey Cohort     Expected Response Rate
Treatment (offered CCCA)          1,100                 500                        80%
Control (not offered CCCA)        1,100                 500                        80%
Total                             2,200               1,000                        80%



The evaluation projects an 80% response rate for the follow-up survey. Several recent in-person data collection efforts have achieved similar response rates, including the Family Options Study: Long Term Tracking (OMB Control Number 2528-0259), which achieved a response rate of 81.4 percent at the 18-month follow-up. Similarly, the National and Tribal Evaluation of the Second Generation of Health Profession Opportunity Grants (OMB Control Number 0970-0394) and the Pathways for Advancing Careers and Education project (OMB Control Number 0970-0397) achieved response rates of 76.2 percent and 77.2 percent, respectively, at the 15-month follow-up.

B.2 Procedures for Collection of Information

B.2.1 Statistical methodology for stratification and sample selection

No sampling will be required for the CCCA 18-month follow-up survey. (Weighting for survey non-response is discussed below; see Section B.3.4.)

B.2.2 Estimation Procedures

We start this section with a restatement of the CCCA Evaluation Service Contrast and Impact research questions outlined in Section A.1.1 of Supporting Statement A.

  • Service Contrast Analysis: How did program group members’ experiences differ from what their experiences would have been in the absence of CCCA (i.e., what are the impacts on services received)? Specifically: What impact did CCCA have on the dosage of education and training services received? What impact did CCCA have on total months of full-time equivalent enrollment in education or training [this is the confirmatory research question for the 18-month survey]? What impact did CCCA have on total months of education, training, or employment (including military service)? What impact did CCCA have on receipt of instruction on non-cognitive skills (e.g., social/emotional intelligence)? How do these impacts vary by student characteristics?

  • Impact Analysis: How do program group members’ outcomes differ from control group members’ outcomes (i.e., what are the impacts of the program)? Specifically: What impact did CCCA have on education, employment, and earnings2 outcomes? Does CCCA improve critical social-emotional skills, such as self-efficacy and engagement in risky behaviors? How do these impacts vary by student characteristics?

The aim of the study is to estimate the net impact of access to the CCCA program on individuals’ outcomes, relative to what those outcomes would have been without the program, holding all else equal.

Our approach builds on the random assignment design of the evaluation. With random assignment, outcomes for the control group—those randomly chosen not to be offered program services—are a valid proxy for what the outcomes would have been for the treatment group, if they likewise had not been offered the program. It follows that comparing observed outcomes for the treatment and control groups provides an unbiased estimate of the causal effect of being offered the CCCA training program.

This evaluation will estimate program impacts for each of the areas specified in the research questions. Because this is a random assignment study, a simple comparison of mean outcomes for treatment and control participants will yield valid estimates of the causal impact of being offered the CCCA program. Nevertheless, estimating impacts via a linear regression that controls for sample participants’ baseline characteristics will yield more precise estimates:

y_i = α + β·D_i + γ′X_i + ε_i      [B.1]

where y_i reflects the outcome variable for a randomized individual i, which is modelled as potentially varying with whether she is offered the program, D_i (equal to 1 if i is offered the program, or 0 if she is a control), her background characteristics X_i (measured at randomization), and an idiosyncratic error term ε_i (assumed to be on average equal to 0). The parameter of interest is β, the level difference in outcome y for those offered the program (D = 1), relative to those who are not (D = 0).

The final set of background characteristics included in the model will be determined based on an assessment of the variation in each item and the increase in the precision of the impact estimates that including the item provides (relative to the reduction in degrees of freedom). Exhibit B.2 lists the independent variables the evaluation team tentatively plans to include as background characteristics X, primarily demographics collected in the Baseline Information Form (OMB approval #1290-0012) and from the Job Corps’ national MIS.

Exhibit B.2 Tentative List of Background Characteristics

Background Characteristic (Measured at Baseline)

Gender

Black / Non-Black

Hispanic / Non-Hispanic

Age

Education (dummy variables for grades 6-12, GED, HS diploma, some college, occupational degree or higher; excluded category is less than high school)

Currently employed (from BIF)

Track (healthcare/IT)

Marital status

Lagged employment and earnings (from NDNH)

Number of dependent children

Speaks language other than English at home

Mother’s highest education

Father’s highest education

Mother currently employed

Father currently employed

Current SNAP recipient

Ever a TANF recipient

Ever arrested

Ever convicted of a crime

Ever worked full-time

Earnings at baseline

Months out of school

Ever had an Individualized Learning Plan

Ever suspended

Ever repeated a grade

Overall health

Self-efficacy

Highest education expected to complete



For missing baseline covariate data, the evaluation will use the dummy variable adjustment procedure described in Puma et al.3 This strategy sets missing cases to a constant and adds “missing data flags” to the regression model. The approach is easy to implement, and Puma et al. show that it works well for random assignment studies.

The evaluator plans to report estimates of impact relative to the control group. As is standard practice in the analysis of random assignment data, the team will use linear regression as the main estimation approach both for continuous outcomes (e.g., earnings or hours worked) and for binary outcomes (e.g., any employment). 4, 5 In particular, the evaluation team will use weighted least squares for outcomes measured in the follow-up survey, using weights that adjust for survey non-response. In addition, robust standard errors will be used to adjust for heteroscedasticity (including the heteroscedasticity induced by applying linear regression to binary outcomes). In the event of missing baseline covariate data, the dummy variable adjustment procedure described in Puma et al.6 will be implemented (see discussion at the end of Section B.3.4).
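To make the estimation approach concrete, the sketch below shows one way equation B.1 could be estimated with weighted least squares and heteroscedasticity-robust standard errors. It is illustrative only: the variable names (outcome, treat, covariates x1–x3, and the non-response weight nra_weight) and the file name are hypothetical placeholders, not the study’s actual data elements.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file containing the outcome, treatment indicator,
# baseline covariates, and non-response adjustment weights.
df = pd.read_csv("ccca_analysis_file.csv")

# Equation B.1: weighted least squares with non-response weights and
# heteroscedasticity-robust (HC1) standard errors.
model = smf.wls("outcome ~ treat + x1 + x2 + x3", data=df, weights=df["nra_weight"])
result = model.fit(cov_type="HC1")

# The coefficient on `treat` is the estimated impact (beta in equation B.1).
print(result.params["treat"], result.bse["treat"])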

The evaluator will estimate subgroup impacts by including an interaction between treatment and subgroup membership (defined at baseline) in the regression models, as illustrated in the sketch below. This approach allows tests for homogeneity of impacts across subgroups. Standard practice is to discuss subgroup impacts only when homogeneity of impacts can be rejected.
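Continuing the hypothetical data frame from the previous sketch, the subgroup model can be illustrated as follows, using an illustrative baseline subgroup indicator (female):

# Interact treatment with a baseline-defined subgroup indicator.
sub_model = smf.wls("outcome ~ treat * female + x1 + x2 + x3",
                    data=df, weights=df["nra_weight"]).fit(cov_type="HC1")

# The treat:female coefficient is the difference in impacts across subgroups;
# testing it against zero is the homogeneity test described above.
print(sub_model.t_test("treat:female = 0"))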

B.2.3 Degree of Accuracy Required

Exhibit B.3 reports minimum detectable impacts (MDIs) for binary outcomes, such as receipt of a degree, certificate, or credential, for various potential sample sizes and baseline rates (i.e., observed average outcomes among control group participants). The exhibit covers administrative data outcomes (N=2,200), survey outcomes (N=800, reflecting the expected 80 percent response rate and a design effect for survey non-response), and subgroup impacts. MDIs are the smallest true impacts that the study has at least an 80-percent probability of detecting as statistically significantly different from zero.

Exhibit B.3 Minimum Detectable Impacts (MDIs) for Binary Outcomes


                        Baseline Probability                                  Subgroup
N=T+C      10%/90%       25%/75%       33%/67%       50%           33%/67%

200        10.2 p.p.     14.7 p.p.     16.0 p.p.     17.0 p.p.     31.9 p.p.
400         7.2 p.p.     10.4 p.p.     11.3 p.p.     12.0 p.p.     22.6 p.p.
600         5.9 p.p.      8.5 p.p.      9.2 p.p.      9.8 p.p.     18.4 p.p.
800         5.1 p.p.      7.4 p.p.      8.0 p.p.      8.5 p.p.     16.0 p.p.
1000        4.6 p.p.      6.6 p.p.      7.1 p.p.      7.6 p.p.     14.3 p.p.
1200        4.2 p.p.      6.0 p.p.      6.5 p.p.      6.9 p.p.     13.0 p.p.
1400        3.8 p.p.      5.6 p.p.      6.0 p.p.      6.4 p.p.     12.1 p.p.
1600        3.6 p.p.      5.2 p.p.      5.6 p.p.      6.0 p.p.     11.3 p.p.
1800        3.4 p.p.      4.9 p.p.      5.3 p.p.      5.7 p.p.     10.6 p.p.
2000        3.2 p.p.      4.6 p.p.      5.0 p.p.      5.4 p.p.     10.1 p.p.
2200        3.0 p.p.      4.3 p.p.      4.7 p.p.      5.1 p.p.      9.8 p.p.

Notes:

p.p.—percentage points.

alpha=0.05, 1-beta=80%, 2-sided test; R-square=30%

Design effect (DEFF, for survey non-response)=1.05.




The MDIs are computed using the standard formula for the minimum detectable impact of a binary outcome in a two-arm randomized design:

MDI = factor × √[ DEFF × (1 − R²) × p(1 − p) / (P(1 − P) × N) ]

where p is the baseline (control group) rate of the outcome, P is the proportion of the sample assigned to treatment, N is the analysis sample size, R² is the share of outcome variance explained by the baseline covariates, DEFF is the design effect for survey non-response, and the factor reflects the significance level and power of the test.

All the MDI calculations are based on a number of assumptions, some of which vary by the outcome measure involved. These assumptions are as follows:

• A 1:1 treatment-control ratio, implying a value of P of 0.5.

• An 80 percent follow-up survey response rate.

• Two-tailed statistical tests.

• Conventional power parameters (alpha=0.05; power, or 1−beta, of 0.80), implying a value of the factor of 2.80.

See the notes to the exhibit for the assumed baseline rates, variances, and R² values.
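To illustrate, the following sketch implements the formula above; under the stated assumptions it reproduces the exhibit values to within about 0.1 percentage point (remaining differences reflect rounding):

from math import sqrt

def mdi_binary(n, p, factor=2.80, r_squared=0.30, deff=1.05, p_treat=0.5):
    # Minimum detectable impact (in proportion units) for a binary outcome with
    # control-group rate p and total analysis sample size n.
    variance = p * (1.0 - p)
    return factor * sqrt(deff * (1.0 - r_squared) * variance / (p_treat * (1.0 - p_treat) * n))

# Survey sample (N=800) with a 50% baseline rate: roughly 8.5 percentage points.
print(round(100 * mdi_binary(800, 0.50), 1))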

Suppose that two-thirds of the control group is working two years after randomization. Then, using administrative data (i.e., the NDNH) and a sample size of 2,200, we could detect an impact of 4.7 percentage points. If CCCA is successful, the impact on employment should be considerably larger than 4.7 percentage points (i.e., an employment rate of at least 71.4 percent, versus 66.7 percent for the control group). This MDI of 4.7 percentage points is one-third the size of the 15.0 percentage point impact detected on receipt of a GED in the National Job Corps Study.7 Similarly, suppose that half of the control group obtains some certificate by 18 months after randomization. Then, using the survey data (i.e., a sample size of 800), we could detect an impact of 8.5 percentage points. Again, if CCCA is successful, the impact on certificate receipt should be considerably larger than 8.5 percentage points (i.e., a certificate receipt rate of at least 58.5 percent, versus 50.0 percent for the control group). This MDI of 8.5 percentage points is less than half the size of the 22.3 percentage point impact detected for receipt of vocational, technical, or trade certificates in the National Job Corps Study.8

B.2.4 Who Will Collect the Data and How Will It Be Done

The follow-up survey will be administered by Abt’s in-house survey group through a combination of phone calls, field locating and interviewing (for those who cannot be reached by phone), and on-site interviews (for those still at the Cascades Job Corps Center), with the use of CAPI (Computer-Assisted Personal Interviewing). Survey respondents will not see a paper or electronic copy of the questionnaire. Abt will use trained interviewers to contact study participants to complete the survey. All interviewers will be well trained and appropriately monitored, with corrective actions by supervisors where necessary.

B.2.5 Unusual Problems Requiring Specialized Sampling Procedures

Not applicable.

B.2.6 Periodic Data Collection Cycles

The 18-month follow-up surveys will be administered once. Building on experience conducting follow-up surveys with similar populations, the evaluator is implementing pro-active tracking of study participants between the time they are randomly assigned and the follow-up survey. These efforts are intended to update study participant contact information. Abt will send participants an email, text, or U.S. mail reminder for this purpose approximately every three months between sample enrollment and survey administration. Doing so will ensure that the researchers can effectively and efficiently contact study participants for the 18-month follow-up survey. Tracking efforts were approved for the CCCA Evaluation under OMB control number 1290-0012 (approved on February 6, 2017).

B.3 Methods to Maximize Response Rates and Address Non-response

The methods to maximize response rates are discussed first with regard to participant tracking and locating, and then with regard to the use of monetary tokens of appreciation.

B.3.1 Participant Tracking and Locating

The CCCA Evaluation team developed a participant tracking system in order to maximize response to the 18-month follow-up survey (tracking approved under OMB control number 1290-0012). The tracking planned for the CCCA Evaluation begins with a welcome letter, sent to all sample members approximately one month after enrollment. The welcome letter provides information about the tracking and survey data collection activities, and provides respondents with the option of updating their contact information, as appropriate. Additionally, Abt will send a text, email, or letter via mail approximately once every three months following random assignment to remind sample members of their participation in the study and ask for a contact update. Participants will receive $2 for their time completing each quarterly contact update, thus participants could receive a maximum payment of $12 for their time completing the contact updates.

B.3.2 Tokens of Appreciation

Offering appropriate monetary gifts to study participants in appreciation for their time can help ensure a high response rate, which is necessary to ensure unbiased impact analysis. Those who complete the 18-month follow-up survey will receive a gift card for $25 as a token of appreciation for their time spent participating in the survey. Section A.9 of Supporting Statement Part A discusses this incentive payment in detail.

B.3.3 Sample Control during the Data Collection Period

During the data collection period, the research team will minimize non-response levels and the risk of non-response bias in the following ways:

  • Using trained in-person interviewers who are skilled at working with the sample population and skilled in maintaining rapport with respondents, to minimize the number of break-offs, and thus the incidence of non-response bias.

  • Using an advance letter that clearly conveys the purpose of the survey to study participants, the incentive structure, and reassurances about privacy, so that they will perceive that cooperating is worthwhile.

  • Using updated contact information captured through the tracking communications (approved under OMB control number 1290-0012 on February 6, 2017) conducted quarterly to keep the sample member engaged in the study and to enable the research team to locate him or her for the follow-up data collection.

  • Taking additional tracking and locating steps, as needed, when the research team does not find sample members at the phone numbers or addresses previously collected. Additional tracking and locating steps can include following up in person with the respondent, reaching out to alternate contacts that have been provided by the respondent (either at the time of enrollment into the study or during one of several requests for contact updates), as well as Accurint searches to find updated contact information (phone number and address) for the respondent.

  • Employing a rigorous telephone process to ensure that all available contact information is utilized to make contact with participants.

  • Administering the survey in person in instances where the participant cannot be surveyed by phone or where the participant is on-center at the Cascades Job Corps Center.

  • Administering the survey in-person at additional Job Corps Centers where the concentration of control group members currently in attendance is high enough to warrant an in-person visit.

  • Requiring the survey supervisors to manage the sample in a manner that helps to ensure that response rates achieved are relatively equal across treatment and control groups and sites.

The researchers will link data from various sources through a unique study identification number. This will ensure that survey responses are stored separately from personal identifying information, thus ensuring respondent privacy.

B.3.4 Nonresponse Bias Analysis and Nonresponse Weighting Adjustment

If, despite our best efforts, the response rate is below 80 percent, the evaluator will conduct a nonresponse bias analysis. The framework of nonresponse bias analysis and its application in one of Abt’s studies is given in Dursa, et al.9 (2015). We will compare the distributions of the original and responding samples on the available administrative data, and then compare distributional summaries of these variables with nonresponse adjustment (NRA) weights applied, calculated as discussed next.

Regardless of the final response rate, the evaluator will construct nonresponse adjustment (NRA) weights. Using both baseline data collected just prior to random assignment and post-random assignment administrative data from the NDNH, the evaluator will estimate response propensity using a logistic regression model. Within each experimental arm, study participants will be allocated to nonresponse adjustment cells defined by intervals of response propensity. Each cell will contain approximately the same number of study participants. Within each nonresponse adjustment cell, the empirical response rate will be calculated. Respondents will then be given NRA weights equal to the inverse empirical response rate for their respective cell.10 The use of nonresponse adjustment cells typically results in smaller design effects. The number of cells will be set as a function of model quality (five is a conventional value). The empirical response rates for the cells should be monotonically related to the average predicted response propensity; the evaluator will start with a large number of cells and reduce that number until it obtains the desired monotonic relationship.
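As an illustration of these steps, the sketch below fits a response-propensity model, forms propensity cells within each experimental arm, and assigns respondents the inverse empirical response rate for their cell. The variable names (responded, treat, and the baseline/NDNH covariates z1–z3) and the file name are hypothetical placeholders:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ccca_analysis_file.csv")  # hypothetical analysis file

# Step 1: model response propensity from baseline and NDNH covariates.
propensity = smf.logit("responded ~ z1 + z2 + z3", data=df).fit()
df["p_hat"] = propensity.predict(df)

# Step 2: within each experimental arm, form five propensity cells of roughly equal size.
df["cell"] = df.groupby("treat")["p_hat"].transform(
    lambda p: pd.qcut(p, q=5, labels=False, duplicates="drop"))

# Step 3: weight respondents by the inverse empirical response rate in their cell.
cell_rate = df.groupby(["treat", "cell"])["responded"].transform("mean")
df["nra_weight"] = 0.0
df.loc[df["responded"] == 1, "nra_weight"] = 1.0 / cell_rate[df["responded"] == 1]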

Once provisional weights have been developed, the evaluator will look for residual nonresponse bias by comparing estimates of the effects of the CCCA-funded services on outcomes measured in the NDNH administrative data (which should be available for all study participants) computed with the NRA weights on the sample of survey respondents against estimates of the same effects computed on the entire randomized sample (including survey nonrespondents) without weights. If they are similar (e.g., within each other’s confidence intervals), then Abt will be reasonably confident that it has ameliorated nonresponse bias. If, on the other hand, there are important differences, then Abt will search for ways to improve the models and recalculate the weights as in Judkins, et al.11 and Kolenikov.12
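The comparison just described can be sketched as follows, continuing the hypothetical variables from the previous sketches (including the covariates x1–x3 and the nra_weight column) and using an illustrative NDNH outcome name (ndnh_earnings):

# Impact on an NDNH outcome: full randomized sample, unweighted ...
full = smf.ols("ndnh_earnings ~ treat + x1 + x2 + x3", data=df).fit(cov_type="HC1")

# ... versus survey respondents only, with NRA weights.
resp_df = df[df["responded"] == 1]
resp = smf.wls("ndnh_earnings ~ treat + x1 + x2 + x3",
               data=resp_df, weights=resp_df["nra_weight"]).fit(cov_type="HC1")

# Similar treatment coefficients (within each other's confidence intervals)
# suggest the weights have ameliorated non-response bias.
print(full.params["treat"], full.conf_int().loc["treat"].tolist())
print(resp.params["treat"], resp.conf_int().loc["treat"].tolist())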

Finally, the evaluation will use a dummy variable adjustment approach to address item non-response in the baseline covariate data. This strategy sets missing cases to a constant and adds “missing data flags” to the impact analysis model. This approach is easy to implement, and Puma et al.13 show that it works well for experimentally designed evaluations. As detailed by Puma et al., the dummy variable adjustment approach involves the following three steps, illustrated in the sketch after this list:

  1. For each baseline covariate X with missing data, create a new variable Z that is set equal to X for all cases where X is non-missing, and set to a constant value for those cases where X is missing.

  2. Create a new “missing data flag” variable D, which is set equal to one for cases where X is missing and set equal to zero for cases where X is not missing.

  3. In the impact analysis model use Z and D (not X) as baseline covariates. This allows for the impact model to estimate the relationship between Y and X (via Z) when X is not missing, and to estimate the relationship between Y and D when X is missing.
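A minimal sketch of these three steps for one baseline covariate (an illustrative column x1 with missing values, in the hypothetical data frame used above) follows:

import pandas as pd

def dummy_adjust(df: pd.DataFrame, covariate: str, fill_value: float = 0.0):
    # Steps 1-2: create Z (missing values set to a constant) and D (missing-data flag).
    df[covariate + "_miss"] = df[covariate].isna().astype(int)
    df[covariate + "_z"] = df[covariate].fillna(fill_value)
    # Step 3: the impact model then uses the _z and _miss columns in place of the original covariate.
    return covariate + "_z", covariate + "_miss"

# Example: z_col, d_col = dummy_adjust(df, "x1")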

B.4 Tests of Procedures or Methods to be Undertaken

The 18-month follow-up survey questions have been drawn from:

  1. The 18-month follow-up survey for the Ready to Work (RTW) Partnership Grants Evaluation, conducted by Abt Associates for the Department of Labor (DOL) (OMB No. 1291-0010);

  2. The 15-month combined follow-up survey for the Pathways for Advancing Careers and Education (PACE) Evaluation and the Health Profession Opportunity Grants (HPOG) Evaluation, conducted by Abt Associates for the Administration for Children and Families (ACF) at the Department of Health and Human Services (OMB No. 0970-0397);

  3. The 12-month follow up survey for the YouthBuild Impact Evaluation, conducted by Mathematica Policy Research for the DOL (OMB No. 1205-0503);

  4. The Baseline Information Form (BIF) for the Cascades Job Corps College and Career Academy (CCCA) Research Study, conducted by Abt Associates for DOL (OMB No. 1290-0012);

  5. The 36-month follow-up survey for the Health Profession Opportunity Grants (HPOG) Evaluation, conducted by Abt Associates for ACF (OMB No. 0970-0394).

There are also several questions that were newly developed for this survey. It should be noted that some of the items used in the combined PACE/HPOG survey (questions on attitudes toward work and self) are drawn from questions and scales used in even earlier studies.

The estimated length of the survey is approximately 35 minutes. This estimate is drawn from the time required to field the relevant portions of the other surveys listed above. Additionally, Abt conducted a formal pretest of the follow-up survey with a convenience sample of no more than nine respondents. The pretest provided more definitive estimates of the length of the survey and its various components, and showed that the questions, introduction scripts, and wording worked well.

B.5 Individuals Consulted on Statistical Aspects of the Design

Consultations on the statistical methods used in this study have been undertaken to ensure the technical soundness of the research. The following individuals were consulted in preparing this submission to OMB:

        1. DOL

Gloria Salas-Kos, Contracting Officer’s Representative, Employment and Training Administration

Jessica Lohmann, Chief Evaluation Office

        2. Abt Associates

Julie Williams, Project Director, (301) 634-1782

Jacob Klerman, Co-Principal Investigator, (617) 520-2613

David Judkins, Statistician, (301) 347-5952

Daniel Litwok, Analyst, (301) 347-5770

Cass Meagher, Survey Operations, (301) 347-5134


        3. MDRC

Jean Grossman, Co-Principal Investigator, (609) 258-6974

The evaluation team also assembled a technical working group consisting of three experts with expertise in the following areas: (1) experience with Job Corps and/or disconnected youth; (2) experience with workforce development and job training; (3) random assignment evaluation; and (4) survey methods. These experts, listed below, reviewed and commented on the evaluation study design and data collection procedures.

Technical Working Group

  • Peter Schochet, Mathematica Policy Research

  • Grace Kilbane, Executive Director, Cleveland/Cuyahoga County Workforce Investment Board (former National Director of Job Corps)

  • Carlos Flores, Cal Poly San Luis Obispo


B.6 Individuals Responsible for Data Collection and Analysis

The following individuals are responsible for data collection and analysis for this study:

Abt Associates

Jacob Klerman, Co-Principal Investigator

Jean Grossman, Co-Principal Investigator

Daniel Litwok, Analyst

Cass Meagher, Survey Operations



1 Randomization began on February 22, 2017 at a 1:1 treatment:control ratio. On May 5, 2017, this ratio was adjusted to increase the number of students arriving on-center during the pilot period. On September 5, 2017, the ratio reverted to 1:1 treatment:control.

2 Earnings are a key long-term outcome for a job training program. However, with target times on campus of at least two years, an 18-month post-randomization survey is too early to detect impacts on earnings.

3 Puma, Michael J., Robert B. Olsen, Stephen H. Bell, and Cristofer Price. 2009. What to Do When Data Are Missing in Group Randomized Controlled Trials (NCEE 2009-0049). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. http://ies.ed.gov/ncee/pdf/20090049.pdf.

4 As a robustness test, we also propose to use logistic models for binary outcomes to ensure that the signs and significance of the estimates from the linear and logistic models are the same.

5 As is standard practice, we use ordinary least squares (the linear probability model) even for dependent variables that are bounded, such as hours and earnings (bounded below at 0).

6 Puma et al., What to Do When Data Are Missing in Group Randomized Controlled Trials (see footnote 3).

7 Schochet, Peter Z., J. Burghardt, and S. McConnell. 2008. "Does Job Corps Work? Impact Findings from the National Job Corps Study." The American Economic Review 98.5: 1864-1886.

8 Ibid.

9 Dursa, Erin K., Aaron Schneiderman, Heather Hammer, Stanislav Kolenikov. 2015. Nonresponse Bias Measurement and Adjustment in the Follow up Study of a National Cohort of Gulf War and Gulf War Era Veterans (Wave 3). Proceedings of the Survey Research Methods Section of the American Statistical Association, Alexandria, VA. http://www.asasrms.org/Proceedings/y2015/files/234231.pdf

10 An alternative propensity adjustment method could use the directly modeled estimates of response propensity. However, these estimates can sometimes be close to zero, creating very large weights, which in turn lead to large survey design effects.

11 Judkins, et al. (Judkins, David, D. Morganstein, P. Zador, A. Piesse, B. Barrett, and P. Mukhopadhyay. 2007. Variable Selection and Raking in Propensity Scoring. Statistics in Medicine, 26, 1022–1033.) showed how to perfectly balance respondents and nonrespondents in a limited number of dimensions using a procedure called “weight raking.” Using these weights, tabulations of respondents agree perfectly with tabulations based on the entire sample. This has been demonstrated to work on as many as about a dozen categorical variables at a time.

12 Kolenikov, Stanislav. 2014. Calibrating survey data using iterative proportional fitting (raking), Stata Journal, 14, (1), 22-59.

13 Puma et al., What to Do When Data Are Missing in Group Randomized Controlled Trials (see footnote 3).
