2015 Workplace and Gender Relations Survey of Reserve Component Members
Statistical Methodology Report

OMB: 0704-0615

Additional copies of this report may be obtained from:
Defense Technical Information Center
ATTN: DTIC-BRR
8725 John J. Kingman Rd., Suite #0944
Ft. Belvoir, VA 22060-6218
Or from:
http://www.dtic.mil/
Ask for report by ADA630232

DMDC Report No. 2016-004
March 2016

2015 WORKPLACE AND GENDER RELATIONS
SURVEY OF RESERVE COMPONENT MEMBERS:
STATISTICAL METHODOLOGY REPORT

Defense Research, Surveys, and Statistics Center (RSSC)

Defense Manpower Data Center
Defense Research, Surveys, and Statistics Center (RSSC)
4800 Mark Center Drive, Suite 04E25-01, Alexandria, VA 22350-4000

Acknowledgments
Defense Manpower Data Center (DMDC) is indebted to numerous people for their
assistance with the 2015 Workplace and Gender Relations Survey of Reserve Component
Members, which was conducted on behalf of the Office of the
Under Secretary of Defense for Personnel and Readiness (OUSD[P&R]). The survey program is
conducted under the leadership of Dr. Paul Rosenfeld, Director, Defense Research, Surveys, and
Statistics Center (RSSC).
RSSC’s Statistical Methods Branch, under the guidance of Mr. David McGrath, Branch
Chief, is responsible for all statistical aspects used in DMDC’s survey program, including
sampling, weighting, nonresponse bias (NRB) analysis, imputation, and statistical hypothesis
testing. Mr. Eric Falk, Team Lead of the Statistical Methods Branch, was responsible for the
sampling for the 2015 Workplace and Gender Relations Survey of Reserve Component Members
(2015 WGRR). Mr. Tim Markham, mathematical statistician within the Statistical Methods
Branch, used the DMDC Sampling Tool to design the sample. Ms. Carole Massey and Ms. Sue
Reinhold, DMDC, provided the data processing support. Dr. Bob Fay, Dr. Minsun Riddles, and
Mr. Richard Sigman, Westat, developed complex weights for this survey and developed the
weighting section of this report. Mr. Eric Falk, Mr. Tim Markham, Mr. Dave McGrath, Mr. Jeff
Schneider, and Ms. Ada Harris wrote this methodology report.


Table of Contents
Page
Introduction ................................................................................................................................1 
Sample Design and Selection.....................................................................................................1 
Target Population .................................................................................................................1 
Sampling Frame ...................................................................................................................2 
Sample Design .....................................................................................................................2 
Sample Allocation ................................................................................................................3 
Survey Administration ...............................................................................................................5 
Weighting ...................................................................................................................................5 
Case Dispositions .................................................................................................................5 
Nonresponse Adjustments and Final Weights .....................................................................8 
Comparison to the 2014 RAND Military Workplace Study..............................................15 
Variance Estimation ...........................................................................................................16 
Multiple Comparison Adjustment............................................................................................16 
Location, Completion, and Response Rates ............................................................................18 
Ineligibility Rate ................................................................................................................19 
Estimated Ineligible Postal Non-Deliverable/Not Located Rate .......................................19 
Estimated Ineligible Nonresponse .....................................................................................19 
Adjusted Location Rate......................................................................................................19 
Adjusted Completion Rate .................................................................................................19 
Adjusted Response Rate ....................................................................................................20 
Nonresponse Bias Analysis......................................................................................................21 
Summary of Findings .........................................................................................................22 
Section 1: Compare Known Population Values with Weighted Survey Estimates ..........23 
Likelihood of Victims to Respond to Surveys ...................................................................27 
Summary of Comparison of Known Population Values with Weighted Survey
Estimates ............................................................................................................................28 
Section 2: Analyze Item Missing Data and Drop Offs for Sexual Assault
Questions............................................................................................................................28 
Item Missing Data in Sexual Assault Behavior Questions ................................................29 
Drop-off Analysis for Sexual Assault Questions ...............................................................33 
Summary of Item Missing Data and Drop-off for Sexual Assault ....................................35 
Section 3: Analysis of DMDC’s Survey of Nonrespondents............................................36 
Weighting the 2015 WGRR-N...........................................................................................38 
Summary of Analysis of DMDC’s survey of nonrespondents ..........................................40 
Section 4: Evaluate the Sensitivity of Different Post-Survey Adjustments
(Weighting Methods) on Survey Estimates .......................................................................40 
DMDC Weighting Methodology .......................................................................................40 
Comparison of Adjustment Stages and Final Weights ......................................................41 
Comparison of Key Estimates ...........................................................................................43 
Summary of Evaluate the Sensitivity of Different Post-Survey Adjustments
(Weighting Methods) on Survey Estimates .......................................................................44 
References ................................................................................................................................45 


Table of Contents (Continued)
Page

Appendixes
A. Domain Based Sampling Size and Expected Response ............................................................47 
B. Categorical Variables Used for the Eligibility and Completion Adjustments ..........................53 
C. Distribution of Weights and Adjustment Factors by Eligibility Status for Female ..................57 
D. Distribution of Weights and Adjustment Factors by Eligibility Status for Male......................61 

List of Tables
1. Stratifying Variables ............................................................................................................3
2. Sample Size by Stratification Variables ..............................................................................5
3. Case Dispositions for Weighting .........................................................................................7
4. Complete Eligible Respondents by Stratification Variables ................................................8
5. Variables Used for the Eligibility and Completion Adjustments ......................................11
6. Description of Raking Dimensions ....................................................................................13
7. Distribution of Weights and Adjustment Factors by Eligibility Status .............................14
8. Sum of Weights by Eligibility Status .................................................................................15
9. Disposition Codes for Response Rates ..............................................................................19
10. Comparison of the Final Weighted Respondents Relative to the Drawn Sample .............20
11. Location, Completion, and Response Rates ......................................................................21
12. Rates for Full Sample and Stratification Level of Variable ...............................................21
13. 2015 WGRR Reporting Questions .....................................................................................25
14. Summary of Sexual Assault Reports in DSAID by Component .......................................26
15. Estimated vs. Actual Number of Reported Sexual Assaults ..............................................27
16. Comparison of Response Rates from Reported Survivors to Full Sample ........................28
17. 2015 WGRR Sexual Assault (SA) Questions ....................................................................29
18. Breakdown of Sample Cases to Assess Item Missing Data for Sexual Assault Questions ............................................................................................................................31
19. Missing Data Analysis for Answers to SA1-SA6 ..............................................................32
20. Comparison Between 2015 WGRR and 2014 SOFR on Missingness for Non-Sensitive Items ....................................................................................................................33
21. 2015 WGRR-N Comparison Questions .............................................................................36
22. Sample Disposition Codes for 2015 WGRR-N ..................................................................37
23. Comparison of 2015 WGRR Sample with Nonresponse Sample 2015 WGRR-N ............38
24. Comparison of WGRR Survey with Nonresponse Study Control Questions ....................39


Table of Contents (Continued)
Page
25. Comparison between DMDC and Westat Weighting Methods for Eligibility, Completion and Poststratification Adjustments ..................................................................42
26. Comparison between DMDC Method Final Weights and Westat Method Final Weights ...............................................................................................................................43
27. Comparison of DMDC and Westat Key Survey Estimates (Female Only) .......................44
28. Comparison of DMDC and Westat Key Survey Estimates (Male Only) ..........................44


2015 WORKPLACE AND GENDER RELATIONS SURVEY OF
RESERVE COMPONENT MEMBERS:
STATISTICAL METHODOLOGY REPORT
Introduction
This report describes the statistical methodologies for the 2015 Workplace and Gender
Relations Survey of Reserve Component Members (2015 WGRR). The first section describes the
sample design and selection of the sample. The second section describes weighting and variance
estimation, as well as a comparison to the 2014 RAND Military Workplace Study. The third
section describes the statistical tests used for the 2015 WGRR. The fourth section describes the
calculation of location, completion, and response rates for the full sample and population
subgroups. The final section contains the nonresponse bias (NRB) analysis. Estimates for all
survey questions are found in the 2015 Workplace and Gender Relations Survey of Reserve
Component Members: Tabulation Volume (DMDC, 2016a).
Sample Design and Selection
Target Population
The 2015 WGRR was designed to represent individuals meeting the following criteria:

• Members of the Selected Reserve who are in Reserve Unit, Active Guard/Reserve
(AGR/FTS/AR; Title 10 and Title 32), and Individual Mobilization Augmentee
(IMA) programs from:
   o Army National Guard (ARNG),
   o US Army Reserve (USAR),
   o US Navy Reserve (USNR),
   o US Marine Corps Reserve (USMCR),
   o Air National Guard (ANG), or
   o US Air Force Reserve (USAFR);

• Up to and including paygrade O6 as of March 2015; Reserve component members
who entered the Service after March 2015 are excluded from the population.

• The sampling frame was developed five months prior to fielding the survey, so the
sampling population included those who had been in the Selected Reserve for at least
five months.


Data were collected on the web between August 7, 2015 and October 19, 2015. If
sample members had not responded within the first month of the fielding period, they
were sent the paper-and-pen survey.
Sampling Frame
The sampling frame consisted of 817,007 Reserve component members using the March
2015 Reserve Components Common Personnel Data System (RCCPDS) Master File. Auxiliary
frame data were obtained from the following files:

• March 2015 Reserve Family Database File (contains the member’s family
information, e.g., marital status and children)

• March 2015 Contingency Tracking System (CTS) File (contains deployment
information)

• April 2015 Defense Enrollment Eligibility Reporting System (DEERS) Medical
Point-In-Time Extract (PITE) (contains personnel information)

• Time on Active Duty (TOAD) File, pulled August 2015 (contains activation
information)

In addition, after selecting the sample, DMDC performed additional checks to verify the
member was still eligible before the survey fielded. Any ineligible member in the sample was
excluded from any further mailings and notifications; this saved additional costs associated with
the survey process. Using the May 2015 RCCPDS, DMDC determined 10,630 sample members
(2.2 percent unweighted) were record ineligible and excluded them from mailings and
notifications (see Table 3).
Sample Design
The sample for the 2015 WGRR survey used a single-stage stratified design. Four
population characteristics defined the stratification dimensions for the 2015 WGRR sample:


• Reserve component (Army National Guard, Army Reserve, Navy Reserve, Marine
Corps Reserve, Air National Guard, Air Force Reserve),

• Gender (Male, Female),

• Paygrade grouping (E1-E4, E5-E9, W1-W5, O1-O3, O4-O6), and

• Reserve program (Troop Program Unit [TPU], Active Guard/Reserve [AGR],
Military Technician [MilTech], and Individual Mobilization Augmentee [IMA]).

Table 1 shows these four variables and associated variable levels.


Table 1.
Stratifying Variables

Variable           | Variable Name | Categories
Reserve Component  | RORG_CD       | 1. Army National Guard; 2. US Army Reserve; 3. US Navy Reserve; 4. US Marine Corps Reserve; 5. Air National Guard; 6. US Air Force Reserve
Gender             | RSEX2         | 1. Male; 2. Female
Paygrade Grouping  | RPAYGRP9      | 1. E1-E4; 2. E5-E9; 3. W1-W5; 4. O1-O3; 5. O4-O6
Reserve Program    | RPROG1        | 1. TPU; 2. AGR; 3. MilTech; 4. IMA

DMDC partitioned the population frame of 817,007 members into 128 strata that were
initially determined by a full cross-classification of the four stratification variables. Levels were
collapsed when there were fewer than 200 members in a stratum, usually for Reserve program
and rarely for paygrade grouping. Dimensions within Reserve component and gender were always
preserved.
DMDC selected individuals with equal probability and without replacement within each
stratum. However, because allocation was not proportional to the size of the strata, selection
probabilities varied among strata and individuals were not selected with equal probability
overall. To achieve adequate sample sizes for all domains (reporting categories) DMDC used a
non-proportional allocation. Appendix A shows the estimation domains along with their sample
sizes, expected number of respondents, and estimated precisions.
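To make the selection mechanics concrete, the following is a minimal R sketch (R being the software noted later in this report for the weighting) of equal-probability selection without replacement within strata and the resulting base weights. The frame, stratum labels, and allocation below are invented for illustration and are not DMDC's production code or values.

set.seed(2015)

# Hypothetical frame with two strata; the real frame had 817,007 members and 128 strata.
frame <- data.frame(
  id = 1:1000,
  stratum = rep(c("ARNG_Female_E1-E4_TPU", "USNR_Male_O4-O6_IMA"), c(800, 200))
)

# Hypothetical non-proportional allocation (larger sampling fraction in the small stratum).
alloc <- c("ARNG_Female_E1-E4_TPU" = 200, "USNR_Male_O4-O6_IMA" = 150)

# Equal-probability selection without replacement within each stratum.
sampled <- do.call(rbind, lapply(names(alloc), function(h) {
  rows <- which(frame$stratum == h)
  frame[sample(rows, alloc[[h]]), ]
}))

# Base (sampling) weight = inverse selection probability = N_h / n_h.
N_h <- table(frame$stratum)
sampled$base_weight <- as.numeric(N_h[sampled$stratum] / alloc[sampled$stratum])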
Sample Allocation
DMDC based the total sample size on a census of females and 50 percent sample of
males. The goal was to achieve reliable precision on estimates for outcomes associated with
reporting a sexual assault (i.e., retaliation) and other measures that were only asked of a very
small subset of members, especially for males. Given estimated variable survey costs and
anticipated eligibility and response rates, DMDC used an optimization algorithm to determine
the minimum-cost allocation that simultaneously satisfied the domain precision requirements.
Response rates from previous surveys were used to estimate eligibility and response rates for all
strata. The 2013 Status of Forces Survey of Reserve Component Members, the 2014 Status of
Forces Survey of Reserve Component Members, and the 2012 Workplace and Gender Relations
Survey of Reserve Component Members were used to estimate these rates.
DMDC determined the sample allocation given the census of females and 50 percent of
males by means of the DMDC Sample Planning Tool (SPT), Version 2.1 (Dever & Mason,
2003). This application is based on the method originally developed by J. R. Chromy (1987) and
described in Mason, Wheeless, George, Dever, Riemer, and Elig (1995). The SPT defines
domain variance equations in terms of unknown stratum sample sizes and user-specified
precision constraints. A cost function is defined in terms of the unknown stratum sample sizes
and the per-unit cost of data collection, editing, and processing. The variance equations are
solved simultaneously, subject to the constraints imposed, for the sample size that minimizes the
cost function. Estimated eligibility rates are used and they modify the estimated prevalence rates
used in the variance equations, thus affecting the allocation; response rates inflate the allocation,
thus affecting the final sample size. Prevalence rates refer to a percentage that is used in
determining the estimated variance used for the calculation of the sample size. For example,
DMDC used 50 percent since it is most conservative and yields the largest estimated sample size.
There were 93 reporting domains defined for the 2015 WGRR and the initial goal was to
achieve below 5 percent precision on estimates. The precision requirement for each domain is
typically based on an estimated prevalence rate of 0.5 with a 95 percent confidence interval half-width no greater than 0.05. However, given the rarity of events covered by many of the 2015
WGRR questions, DMDC ensured that a much tighter precision would be met for questions seen
by all respondents, while making it likely that confidence interval half-widths of 0.05 could be
met for questions that are relevant to only a small portion of respondents. Therefore, DMDC
tightened the precision constraints until the sample included a census of all females and at least
50 percent of all males.
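As a simplified, illustrative translation of a precision constraint into a domain sample size (assuming simple random sampling within the domain, ignoring design effects, and using a 20 percent response rate purely as an example, not a DMDC planning value):

\[
n_{\min} \approx \frac{z^{2}\,p(1-p)}{d^{2}}
       = \frac{1.96^{2}\times 0.5\times 0.5}{0.05^{2}} \approx 385,
\qquad
n_{\text{drawn}} \approx \frac{n_{\min}}{r_{\text{elig}} \times r_{\text{resp}}}
       \approx \frac{385}{1.0 \times 0.20} \approx 1{,}925 .
\]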
The 2015 WGRR total sample size was 485,774; Table 2 provides the sample sizes by
stratification variables.


Table 2.
Sample Size by Stratification Variables

Stratification Variable      Total     ARNG     USAR     USNR    USMCR      ANG    USAFR
Sample                     485,774  186,481  121,036   36,245   36,364   61,695   43,953
Gender
  Male                     331,332  130,498   75,898   23,406   34,750   41,090   25,690
  Female                   154,442   55,983   45,138   12,839    1,614   20,605   18,263
Paygrade Grouping
  E1-E4                    238,102  104,338   62,526    9,995   25,784   20,656   14,803
  E5-E9                    179,140   62,876   39,436   18,417    6,315   31,745   20,351
  W1-W5                      5,773    3,823    1,651       40      259        0        0
  O1-O3                     33,684   11,143   10,659    3,235    1,729    4,002    2,916
  O4-O6                     29,075    4,301    6,764    4,558    2,277    5,292    5,883
Reserve Program
  TPU                      414,431  167,140  109,335   30,467   32,111   41,924   33,454
  AGR                       33,432   10,010    6,890    5,737    1,621    8,132    1,042
  MilTech                   29,273    9,331    3,213        0        0   11,639    5,090
  IMA                        8,638        0    1,598       41    2,632        0    4,367

Note. ARNG = Army National Guard; USAR = US Army Reserve; USNR = US Navy Reserve; USMCR = US Marine Corps Reserve; ANG = Air National Guard; USAFR = US Air Force Reserve.

Survey Administration
Information about administration of the survey and detailed documentation of the survey
dataset are found in the 2015 Workplace and Gender Relations Survey of Reserve Component
Members: Administration, Datasets, and Codebook (DMDC, 2016b).
Weighting
Analytical weights for the 2015 WGRR were created to account for unequal probabilities
of selection and varying response rates among population subgroups. Sampling weights were
computed as the inverse of the selection probabilities. The sampling weights were then adjusted
for nonresponse using models that considered over 50 possible correlates of nonresponse. The
adjusted weights were post-stratified to match population totals and to reduce bias unaccounted
for by the previous weighting steps.
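Schematically, and in our own notation rather than DMDC's, the final weight of a complete eligible respondent i selected from stratum h can be written as the base weight times the successive adjustment factors described in the remainder of this section:

\[
w_i^{\text{base}} = \frac{1}{\pi_i} = \frac{N_h}{n_h}, \qquad
w_i^{\text{final}} = w_i^{\text{base}} \times f_i^{\text{elig}} \times f_i^{\text{comp}} \times f_i^{\text{rake}},
\]

where \(f_i^{\text{elig}}\) and \(f_i^{\text{comp}}\) are the inverses of the model-predicted probabilities of known eligibility and of survey completion, and \(f_i^{\text{rake}}\) is the raking (poststratification) factor.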
Case Dispositions
As the first step in the weighting process, case dispositions were assigned based on
eligibility for the survey and on completion of the questionnaire. Execution of the weighting
process and computation of response rates both depended on this classification.
Final case dispositions for weighting were determined using information from personnel
records, field operations (as recorded in the Survey Control System [SCS]), and returned
questionnaires. No single source of information is entirely complete and correct for determining


the case disposition; inconsistencies among sources were resolved according to the order of
precedence shown in Table 3. This order of execution is critical to resolving case dispositions.
For example, suppose an individual in the sample refused the survey, with the reason that it was
too long; in the absence of any other information, the disposition would be “eligible
nonrespondent.” Another example would be if we were provided a proxy report that the sample
member had been hospitalized and was unable to complete the survey, in this instance the
disposition would be “ineligible.”
Case disposition counts for the 2015 WGRR are shown in Table 3. Table 4 presents the
number of complete eligible respondents (SAMP_DC=4) by stratification variables: gender,
paygrade groups, reserve programs, and reserve organizations.


Table 3.
Case Dispositions for Weighting

Case Disposition (SAMP_DC) | Information Source | Conditions | Eligibility Known | Sample Size
1. Record ineligible | Personnel record | DMDC determined whether sampled members had a record in the DEERS point-in-time extract (PITE) prior to fielding the survey. No record in DEERS indicated the member either separated from the military, passed away, etc. | NA | 10,630
2. Ineligible by self- or proxy-report | Survey Control System (SCS) | The sampled member or a proxy reported that the member was ineligible due to such reasons as “Retired,” “Ill,” “Incarcerated,” “No longer employed by DoD,” or “Deceased.” | Yes | 210
3. Ineligible by survey self-report | Survey eligibility question | The sampled member was determined to be ineligible based on their response to Q1 of the survey questionnaire asking if retired or separated. | Yes | 1,331
4. Eligible, complete response | Item response rate | Respondents needed to answer one of the six critical questions related to sexual assault. | Yes | 87,127
5. Eligible, incomplete response | Item response rate | Survey is not blank but none of the critical sexual assault questions were answered. | Yes | 1,985
8. Refusal | SCS | Survey is returned blank due to such reasons as “Refused-too long,” “Refused-inappropriate/intrusive,” “Refused-other,” “Unreachable at this address,” “Refused by current resident,” “Refused additional e-mails,” or “Concerned about security/confidentiality.” | No | 1,176
9. Blank return | SCS | Blank questionnaire returned with no reason given. | No | 611
10. PND | SCS | Postal non-deliverable or original address is non-locatable. | No | 46,592
11. Nonrespondent | Remainder | Remaining sampled members did not respond to survey. | No | 336,112
Total | | | | 485,774


Table 4.
Complete Eligible Respondents by Stratification Variables

Stratification Variable     Total     ARNG     USAR    USNR   USMCR      ANG   USAFR
Sample                     87,127   25,172   18,674   8,053   4,002   19,195  12,031
Gender
  Male                     52,421   15,329   10,288   5,028   3,673   11,730   6,373
  Female                   34,706    9,843    8,386   3,025     329    7,465   5,658
Paygrade Grouping
  E1-E4                    18,575    6,096    3,834   1,034   1,883    3,512   2,216
  E5-E9                    45,178   12,798    8,807   4,095   1,022   11,972   6,484
  W1-W5                     2,203    1,425      667      21      90        0       0
  O1-O3                     9,125    2,830    2,642   1,014     334    1,422     883
  O4-O6                    12,046    2,023    2,724   1,889     673    2,289   2,448
Reserve Program
  TPU                      57,902   16,526   14,166   6,514   2,822    9,916   7,958
  AGR                      13,773    4,711    2,711   1,519     511    3,876     445
  MilTech                  12,737    3,935    1,246       0       0    5,403   2,153
  IMA                       2,715        0      551      20     669        0   1,475

Note. Component abbreviations as in Table 2.

Nonresponse Adjustments and Final Weights
After case dispositions were resolved, the sampling weights were adjusted for
nonresponse. First, the sampling weights for cases of known eligibility (SAMP_DC = 2, 3, 4, or
5) were adjusted to account for cases of unknown eligibility (SAMP_DC = 8, 9, 10, or 11).
Next, the eligibility-adjusted weights for eligible respondents with completed questionnaires
(SAMP_DC = 4) were adjusted to account for eligible sample members who returned an
incomplete questionnaire (SAMP_DC = 5). All weights for the record ineligibles
(SAMP_DC=1) are set to 0 and this weight is transferred to the other cases.
The weighting adjustment factors for eligibility and completion were computed as the
inverse of model-predicted probabilities. The 2015 models used to predict these probabilities
changed substantially from the models for the 2012 WGRR. The 2015 models paralleled those
developed by RAND for the 2014 RAND Military Workplace Study (2014 RMWS) (Morral,
Gore, & Schell, 2014, 2015), which surveyed both the active duty and Reserve members. The
sample size for the 2015 WGRR, 485,774, was considerably larger than either the sample size for
the 2012 WGRR, 75,436, or the Reserve sample size in the 2014 RMWS, 67,559.
For the 2012 WGRR, a logistic regression model was used to predict the probability of
known eligibility for the survey (that is, the probability of determining eligibility). A second
logistic regression model was used to predict the probability of response among eligible sample
members (complete response vs. nonresponse). CHAID (Chi-squared Automatic Interaction
Detector) was used to determine the best predictors for each logistic regression model. The


models were weighted. Predictors included the following population characteristics: paygrade
group, gender, reserve program, reserve component, education, family status, combat/noncombat flag, deployment, and race/ethnicity.
The methods used to adjust for nonresponse to the 2015 WGRR survey more closely
paralleled methods in the 2014 RMWS than the methods used for the 2012 WGRR. The RMWS
methods began with the identification of key survey outcome variables. For the larger survey of
the active duty component in 2014, RAND identified six key survey outcome variables: three
types of sexual assault (penetrative, non-penetrative, and attempted penetrative) and three types
of Military Equal Opportunity (MEO) violations (sexually hostile work environment, sexual quid
pro quo, and gender discrimination). Because of the smaller sample size of the 2014 Reserve
component sample, however, RAND focused on just three outcome variables: Sexual
harassment, gender discrimination, and sexual assault.
The 2014 RMWS nonresponse adjustment involved two steps, each of which produced a
set of models. For both Reserve and active duty sample populations, the first step used data from
the eligible respondents with completed responses to develop models for the key outcome
variables. The models were fitted separately by gender. For Reserve members of each gender,
the three outcomes were modeled as a function of an extensive set of administrative variables
available for both respondents and nonrespondents, resulting in six separate models in total.
Predicted values or combination variables (Morral, 2015) were computed for both respondents
and nonrespondents, and then these combination variables were used in a second model for the
probability of response, along with a limited number of other predictors: gender, reserve
component, paygrade, and survey form type (paper vs. web). The reciprocals of the predicted
values from the second model were used as nonresponse adjustments and applied to the
respondents.
The approach to weighting used in the 2014 RMWS incorporated two significant
innovations. First, a specific form of machine learning model, generalized boosted regression
(GBM), was used in place of the logistic regression model used in the 2012 WGRR. In general,
the GBM model adapts more readily to complex relationships among the dependent variable and
candidate independent variables. Second, previous work of Little and Vartivarian (2004) guided
the 2014 RMWS approach. Little and Vartivarian argued only information related to key survey
outcomes should be included in a nonresponse model, otherwise additional information will only
increase the variance without reducing bias for the key outcomes. The 2014 RMWS used GBM
to summarize the relationship between the extensive auxiliary information and each of the key
outcome variables in the form of the predicted values from the GBM fit. The nonresponse
adjustment based on the combination variables and a limited number of other characteristics
could be then expected to reduce nonresponse bias while limiting the increase in sampling
variance.
Preliminary analyses published by RAND (Morral, Gore, & Schell, 2014, 2015)
suggested advantages to the nonresponse approach used in the 2014 RMWS, although their
evidence was based primarily on the active duty results of the survey. On this basis, nonresponse
adjustment for the 2015 WGRR adopted these methods, but modifications were necessary. Using
completed 2015 cases, six outcome variables were modeled for females: sexual harassment,
gender discrimination, sexual quid pro quo, sexual assault, non-penetrative sexual assault, and


penetrative sexual assault. For males, only sexual harassment, gender discrimination, and sexual
assault were modeled because few incidents were reported for the other three variables. Table 5
provides a list of the candidate auxiliary variables considered for the GBM models. Appendix B
provides a more detailed version of the table identifying the levels for each categorical variable
in Table 5.
Unlike the 2014 RMWS, the survey protocol of the 2015 WGRR excluded sample
respondents who were no longer Reserve component members. Consequently, the 2015 WGRR
paralleled the 2012 WGRR in the division of nonrespondents by eligibility and completion of the
survey. The first step of modeling nonresponse in the 2015 WGRR followed the general plan of
the 2014 RMWS, creating combination variables corresponding to each key variable. The second
step in the 2014 RMWS of modeling the propensity to respond became two steps in the 2015
WGRR: (1) modeling known eligibility status to derive a nonresponse adjustment for known
eligibility status, and (2) fitting a model to eligible cases to determine the probability of
completing the survey in order to derive a second nonresponse adjustment. Both the eligibility
and completion models incorporated the combination variables as well as auxiliary variables that
are used in the raking adjustments that follow: paygrade, reserve program, race/ethnicity, and
component. Both sets of models were fitted separately by gender. Like the 2012 WGRR
analysis, the GBM models were weighted; the first by the sampling weight, and the second by
the eligibility-adjusted weight resulting from multiplying the sampling weight by the eligibility
status adjustment.
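The following R sketch illustrates the two-stage idea on synthetic data: a GBM model of a key outcome fitted to complete eligible respondents produces a combination variable, which then enters a GBM model of known-eligibility status whose inverse predicted probabilities serve as the eligibility adjustment. It uses the gbm package named later in this report; all data, variable names, and tuning values are hypothetical, and the production weighting (separate models by gender, the further completion-adjustment step, and so on) is more elaborate than this sketch.

library(gbm)  # package cited in this report; everything below is synthetic/illustrative

set.seed(1)
n <- 5000
samp <- data.frame(
  paygrade    = factor(sample(c("E1-E4", "E5-E9", "O1-O3"), n, replace = TRUE)),
  component   = factor(sample(c("ARNG", "USAR", "USNR"), n, replace = TRUE)),
  program     = factor(sample(c("TPU", "AGR", "IMA"), n, replace = TRUE)),
  raceeth     = factor(sample(c("NH White", "Minority"), n, replace = TRUE)),
  n_deploy    = rpois(n, 1),
  base_weight = runif(n, 1, 5.7),
  samp_dc     = sample(c(2:5, 8:11), n, replace = TRUE,
                       prob = c(.01, .01, .18, .01, .01, .01, .09, .68))
)
samp$sexual_assault <- rbinom(n, 1, 0.02)                  # synthetic key outcome indicator
samp$elig_known     <- as.integer(samp$samp_dc %in% 2:5)   # dispositions 2-5 vs. 8-11

# Stage 1: model the key outcome on complete eligible respondents (disposition 4).
resp    <- subset(samp, samp_dc == 4)
out_fit <- gbm(sexual_assault ~ paygrade + component + program + n_deploy,
               data = resp, weights = base_weight, distribution = "bernoulli",
               n.trees = 500, interaction.depth = 3, shrinkage = 0.05)

# "Combination variable": predicted values for respondents and nonrespondents alike.
samp$comb_sa <- predict(out_fit, newdata = samp, n.trees = 500, type = "response")

# Stage 2: model known-eligibility status using the combination variable plus the
# raking variables; the inverse predicted probability becomes the adjustment factor.
elig_fit <- gbm(elig_known ~ comb_sa + paygrade + program + raceeth + component,
               data = samp, weights = base_weight, distribution = "bernoulli",
               n.trees = 500, interaction.depth = 3, shrinkage = 0.05)
p_known <- predict(elig_fit, newdata = samp, n.trees = 500, type = "response")

# Eligibility-adjusted weight: nonzero only for cases with known eligibility.
samp$w_elig <- ifelse(samp$elig_known == 1, samp$base_weight / p_known, 0)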
To further detail the nonresponse adjustments used in the 2015 WGRR, in Table 3, case
dispositions 2, 3, 4, and 5 denote cases with known eligibility, whereas case dispositions 8, 9, 10,
and 11 correspond to cases for which eligibility is unknown. Consequently, the first of the two
nonresponse adjustments increased the weights for case dispositions 2, 3, 4, and 5 to represent
dispositions 8, 9, 10, and 11. The second adjustment increased the weights of complete cases
with disposition 4 to compensate for incomplete eligible cases with disposition 5.
To increase response to the 2015 WGRR, nonrespondents to the web version of the
survey were sent a paper form of the questionnaire. The paper version included the key survey
items, but it omitted many secondary items on the web questionnaire, presenting the recipient
with approximately 100 questions instead of the approximately 230 on the web version. The
primary set of weights was based on responses from the full data set including both the web and
paper versions. To support analysis of items only on the web version, a second set of weights
was produced, following the same steps used for the full data set that included the paper questionnaire.
For this weighting, all paper questionnaire respondents were treated as nonrespondents,
including in the fitting of the GBM models. This second set of weights is intended solely for
analysis of web-only items. The primary set of weights provides the basis for estimating the key
outcomes from the survey items collected on both the web and paper versions of the
questionnaire.


Table 5.
Variables Used for the Eligibility and Completion Adjustments

Demographic Factors
• AFQT Score Percentile
• Age as of August 2015
• Family Status
• Education
• Race/Ethnic Category
• US Citizenship Origin Code
• US Citizenship Status Code

Military Career Factors
• Military Accession Program
• Active Duty Status
• Active Duty and Special Operations Status
• Active Duty Begin Date
• Active Duty End Date
• Number of Occurrences on Active Duty from July 2014 through July 2015
• Number of Days Activated from July 2014 through July 2015
• Active Guard & Reserve, or Full Time National Guard Duty Status ID
• Combat Occupation Flag
• Current Deployment Status
• Number of Deployments
• Deployment Flag in the Last 12 months
• Deployment Flag in the Last 24 months
• Number of Months Deployed since 9/11/2011
• Number of Months Deployed from August 2014 through August 2015
• Duty DoD Occupation Code
• Paygrade
• Primary Occupation Code
• DoD Primary Occupation Area Code
• Primary Regular Component Service Indicator
• Selected Reserve Obligated Service Projected End Date
• Date of First Affiliation or Enlistment in Reserve Component
• Date of First Appointment, Enlistment, or Conscription into a Uniformed Service of the US
• Eligibility Status as of August 2015
• Years of service
• Reserve Category Programs
• Reserve Organization Code
• Reserve Category Group Code
• Reserve Subcategory Code
• Reserve Category Code
• Reserve Component Category Code

Military Environment Factors
• Number of Members¹ in Member’s Assigned UIC
• Number of Males² in Member’s Assigned UIC
• Number of Members¹ in Member’s Duty UIC
• Number of Males² in Member’s Duty UIC
• Number of Members¹ in Member’s Primary Occupation³
• Number of Males² in Member’s Primary Occupation³
• Number of Members¹ in Member’s Area Occupation³
• Number of Males² in Member’s Area Occupation³
• Percent of Males² in Member’s Assigned UIC
• Percent of Males² in Member’s Duty UIC
• Percent of Males² in Member’s Primary Occupation³
• Percent of Males² in Member’s Area Occupation³
• Assigned UIC Change of Station Flag as of March 2014
• Assigned UIC Change of Station Flag as of August 2015
• Duty UIC Change of Station Flag as of March 2014
• Duty UIC Change of Station Flag as of August 2015

Survey Fielding Factors
• Invalid Army Email Address Flag
• Email Address Purchase Flag
• Email Address Flag
• Number of Email Addresses
• First Letter Returned as PND
• Mail Address Flag
• Change in Mailing Address since Sample Frame Development
• Home Address Flag

¹ Reserve members.
² Reserve males.
³ Collapsed primary occupation (first 2 digits of primary occupation code).

The nonresponse-adjusted weights were then modified through a process called raking.
The purpose of raking is to use known information about the survey population to increase the
precision of population estimates. This information consists of totals for different levels of
variables (such as demographic characteristics). For example, the variable of gender has two


levels: male and female. During the raking process, sampled individuals are first categorized
into the cells of a table defined by two or more variables—called raking dimensions. The goal of
raking is to adjust the weights so that they add up to the known totals—called control totals—for
the different levels within each raking dimension. Proceeding one dimension at a time, raking
computes a proportional adjustment to the weights associated with each level of the raking
dimension.¹ After all dimensions are adjusted, the process is repeated until the totals for all
levels of the raking dimensions are equal to the corresponding control totals (at least within a
specified tolerance).
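A minimal R sketch of the raking computation on made-up data with two dimensions follows; the production weighting used the five dimensions listed below and model-adjusted starting weights rather than the toy values shown here.

# Iterative proportional fitting: adjust weights one dimension at a time until all
# margins match their control totals. Data and control totals here are made up.
rake_weights <- function(w, data, margins, tol = 1e-6, max_iter = 100) {
  for (iter in seq_len(max_iter)) {
    for (nm in names(margins)) {
      control <- margins[[nm]]                      # population control totals by level
      current <- tapply(w, data[[nm]], sum)         # current weighted totals by level
      adj     <- control[names(current)] / current  # proportional adjustment per level
      w       <- w * adj[as.character(data[[nm]])]
    }
    # Convergence check: every margin must match its controls within the tolerance.
    off <- sapply(names(margins), function(nm) {
      cur <- tapply(w, data[[nm]], sum)
      max(abs(margins[[nm]][names(cur)] - cur))
    })
    if (max(off) < tol) break
  }
  w
}

d  <- data.frame(gender = c("F", "F", "M", "M", "M"),
                 pg     = c("E", "O", "E", "E", "O"))
w0 <- rep(1, 5)
w  <- rake_weights(w0, d, margins = list(gender = c(F = 40, M = 60),
                                         pg     = c(E = 70, O = 30)))
sum(w)   # matches the overall control total (100)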
Control totals were computed from information from the sampling frame. There were
five raking dimensions, defined by gender, pay-grade groupings, Reserve component, Reserve
program, and race/ethnicity, as follows:


• Paygrade groupings (7 levels)

• Gender (2 levels) by paygrade groupings (5 levels)

• Gender (2 levels) by Reserve program (4 levels)

• Gender (2 levels) by race/ethnicity (2 levels)

• Gender (2 levels) by Reserve component (6 levels) by paygrade groupings (2 levels)

Table 6 provides additional details about the levels of the variables used to define the five
raking dimensions.

¹ Raking is so named because it is analogous to smoothing the soil in a garden plot by alternately working soil back and forth with a rake in two perpendicular directions.


Table 6.
Description of Raking Dimensions

Dimension # | Variable 1 | Variable 2 | Variable 3
1 | Paygrade groupings: E1–E3, E4, E5–E6, E7–E9, W1–W5, O1–O3, O4–O6 | |
2 | Gender: Female, Male | Paygrade groupings: E1–E4, E5–E9, W1–W5, O1–O3, O4–O6 |
3 | Gender: Female, Male | Reserve program: TPU, AGR, MilTech, IMA |
4 | Gender: Female, Male | Race/ethnicity: Non-Hispanic White, Total Minority |
5 | Gender: Female, Male | Reserve component: ARNG, ANG, USAR, USNR, USMCR, USAFR | Paygrade groupings: Enlisted, Officers

Table 7 summarizes the distributions of the sampling weights, intermediate weights, final
weights, and corresponding adjustment factors by eligibility status for the primary weights.
Eligible respondents are those individuals who were not only eligible to participate in the survey
but also completed at least one of the critical sexual assault questions. Record ineligible
individuals are those who were not eligible to participate in the survey according to
administrative records; no weights were computed for these cases. Table 7 also indicates the
mean of the sampling weights, intermediate weights, and final weights by eligibility status.
Tables in Appendix C and Appendix D show summaries of the weights by gender.
The sampling weights, which are the reciprocals of the probability of selection into the
sample, take the value 1 for the census of females, who were all selected for the study. The mean
of the sampling weights for males is 2.32. The nonresponse adjustment for eligibility status that
follows next makes the biggest single adjustment to the weights, in terms of increasing both the
mean and the coefficient of variation (c.v.) of the weights. The two remaining adjustments for
nonresponse among the eligible population and the final raking have a modest effect on
increasing the mean weight. The corresponding factors shown in the last two columns of Table 7


have small c.v.’s; in other words, the factors in each column differ from each other by relatively
small amounts.

Table 7.
Distribution of Weights and Adjustment Factors by Eligibility Status

                                                    MIN      MAX    MEAN     STD      CV
Eligible Respondents (N = 87,127)
  Sampling Weight                                  1.00     5.69    1.79    0.96    0.53
  Eligibility Status Adjusted Weight               1.14   121.51    8.55    8.17    0.96
  Complete Eligible Response Adjusted Weight       1.17   125.96    8.74    8.36    0.96
  Final Weight With Nonresponse and Raking         1.18   135.92    9.10    8.95    0.98
  Eligibility Status Factor                        1.07    64.72    5.05    4.64    0.92
  Complete Eligible Response Factor                1.01     1.12    1.02    0.01    0.01
  Raking Factor                                    0.77     1.35    1.03    0.08    0.08
Eligible, Incomplete Response (N = 1,985)
  Sampling Weight                                  1.00     5.69    1.65    0.92    0.56
  Eligibility Status Adjusted Weight               1.18    57.32    8.48    8.35    0.98
  Complete Eligible Response Adjusted Weight       0.00     0.00    0.00    0.00      –
  Final Weight With Nonresponse and Raking         0.00     0.00    0.00    0.00      –
  Eligibility Status Factor                        1.18    30.53    5.31    4.76    0.90
  Complete Eligible Response Factor                0.00     0.00    0.00    0.00      –
  Raking Factor                                       –        –       –       –      –
Self/Proxy Ineligibles (N = 1,541)
  Sampling Weight                                  1.00     5.69    1.74    0.80    0.46
  Eligibility Status Adjusted Weight               1.43   114.73   15.06   14.62    0.97
  Complete Eligible Response Adjusted Weight       1.43   114.73   15.06   14.62    0.97
  Final Weight With Nonresponse and Raking         1.48   123.58   15.89   15.57    0.98
  Eligibility Status Factor                        1.40    64.72    9.07    8.34    0.92
  Complete Eligible Response Factor                1.00     1.00    1.00    0.00    0.00
  Raking Factor                                    0.77     1.34    1.05    0.08    0.08
Nonrespondents (N = 384,491)
  Sampling Weight                                  1.00     5.69    1.66    0.65    0.39
  Eligibility Status Adjusted Weight               0.00     0.00    0.00    0.00      –
  Complete Eligible Response Adjusted Weight       0.00     0.00    0.00    0.00      –
  Final Weight With Nonresponse and Raking         0.00     0.00    0.00    0.00      –
  Eligibility Status Factor                        0.00     0.00    0.00    0.00      –
  Complete Eligible Response Factor                0.00     0.00    0.00    0.00      –
  Raking Factor                                       –        –       –       –      –
Record Ineligibles (N = 10,630)
  Sampling Weight                                  1.00     5.69    1.59    0.58    0.36
  Eligibility Status Adjusted Weight               1.00     5.69    1.59    0.58    0.36
  Complete Eligible Response Adjusted Weight       1.00     5.69    1.59    0.58    0.36
  Final Weight With Nonresponse and Raking         0.00     0.00    0.00    0.00      –
  Eligibility Status Factor                           –        –       –       –      –
  Complete Eligible Response Factor                   –        –       –       –      –
  Raking Factor                                       –        –       –       –      –

Note. A dash indicates the statistic is not applicable for that group at that stage.

Under simplifying assumptions based on Kish (1965), the relative increase in variance due to
weight variation is approximately 1 plus the squared coefficient of variation (c.v.) of the
weights. Because the c.v. of the weights is less than 1, especially when analyzed separately for
females (0.84) and males (0.81), the increase in variance due to weighting is less than a factor
of 2. Given that the task of the weighting adjustments is to compensate for differential
nonresponse and its possible impact on the bias of key outcome variables, the outcomes shown
in Table 7 appear reasonable.
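Applying the Kish approximation to the coefficients of variation just cited gives a rough check on the weighting-induced variance inflation (a back-of-the-envelope calculation, not an exact design effect):

\[
\text{deff}_{\text{wt}} \approx 1 + \mathrm{cv}(w)^{2}:
\qquad 1 + 0.84^{2} \approx 1.71 \ \text{(females)}, \qquad 1 + 0.81^{2} \approx 1.66 \ \text{(males)},
\]

both below the factor of 2 noted above.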
Table 8 exhibits the sum of the weights at different stages of weighting. The weights
adjusted for known eligibility status distribute the sampling weights for nonrespondents with
unknown eligibility status among the remaining dispositions. The eligible response adjusted
weights then compensate for eligible respondents providing incomplete surveys. By design, the
final raking adjustments redistribute record ineligibles and other dispositions excluded from the
final weights to match total number in the original frame.

Table 8.
Sum of Weights by Eligibility Status

Eligibility Category             Sum of        Sum of Eligibility   Sum of Complete Eligible   Sum of Final Weights With
                                 Sampling      Status Adjusted      Response Adjusted          Nonresponse and Raking
                                 Weights       Weights              Weights                    Adjustments
Eligible Respondents             156,111       744,723              761,538                    792,528
Eligible, Incomplete Response      3,273        16,841                    0                          0
Self/Proxy Ineligibles             2,677        23,208               23,208                     24,479
Nonrespondents                   638,042             0                    0                          0
Record Ineligibles                16,904        16,904               16,904                          0
Total                            817,007       801,676              801,650                    817,007

Comparison to the 2014 RAND Military Workplace Study
RAND found that increasing the number of weighting variables and using GBM
improved the 2014 RMWS survey weights; therefore, DMDC decided to also use this approach
for the 2015 WGRR. The description of the 2015 WGRR weighting was set in the context of the
2014 RMWS in the preceding section. The comparison is further elaborated here.
The software used for the 2015 WGRR was built on the approach used in 2014 RMWS.
Both weightings used the statistical computing software R and specifically functions from the
packages gbm (Ridgeway, 2009) and twang (Ridgeway, 2004). RAND researchers provided the
specific R scripts they used for their final production runs of the 2014 RMWS weighting.


The weighting for the 2015 WGRR also differed in some respects from the 2014 RMWS.
The 2015 WGRR weighting incorporated the two nonresponse steps used in the 2012 WGRR,
necessitating use of weights throughout the analysis. Some of the modeling in the 2014 RMWS
had been unweighted. In 2015 WGRR, the nonresponse models were separated by gender at both
the initial stage of creating the combined variables and at the second stage of creating the
nonresponse weights, whereas both genders had been combined in the second stage in the 2014
RMWS.
Variance Estimation
Sampling error is the uncertainty associated with an estimate that is based on data
gathered from a sample of the population rather than the full population. Note that sample-based
estimates will vary depending on the particular sample selected from the population. Measures
of the magnitude of sampling error, such as the variance and the standard error (the square root
of the variance), reflect the variation in the estimates over all possible samples that could have
been selected from the population using the same sampling methodology. Analysis of the 2015
WGRR data required a variance estimation procedure that accounted for the weighting
procedures. The final step of the weighting process was to define strata for variance estimation
by Taylor series linearization. The 2015 WGRR variance estimation strata correspond exactly
with the 128 strata used to select the sample. For each variance stratum, DMDC ensured that
there were at least 30 complete eligible responses with non-zero final weights.
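As an illustration of the variance estimation setup, the R sketch below declares a single-stage stratified design carrying the final weights and obtains a linearized standard error via the survey package (our choice of tool for illustration; the report does not specify the software used for variance estimation). The data and variable names are synthetic, not the 2015 WGRR analysis files.

library(survey)   # Taylor series linearization for complex survey designs

set.seed(2)
d <- data.frame(var_stratum    = rep(1:4, each = 250),        # variance estimation strata
                final_weight   = runif(1000, 1, 10),          # final analysis weights
                sexual_assault = rbinom(1000, 1, 0.02))       # 0/1 key outcome indicator

# Single-stage stratified design carrying the final weights.
des <- svydesign(ids = ~1, strata = ~var_stratum, weights = ~final_weight, data = d)

# Weighted estimate of the rate with its Taylor-linearized standard error.
svymean(~sexual_assault, design = des)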
Multiple Comparison Adjustment
When statistically comparing groups (e.g., Army vs. Navy estimates of the effectiveness
of the training), a hypothesis of no difference (null hypothesis) versus a difference (alternative
hypothesis) is tested. DMDC mainly uses independent two-sample t-tests for its statistical
comparisons. The conclusions are usually based on the p-value associated with the test
statistic. If the p-value is less than the significance level, the null hypothesis is
rejected. Any time a null hypothesis is rejected (a conclusion that estimates are significantly
different), it is possible this conclusion is incorrect. In reality, the null hypothesis may have been
true, and the significant result may have been due to chance. A p-value of 0.05 means there is a
five percent chance of finding a difference as large as the observed result if the null hypothesis
were true.
In survey research there is often interest in conducting multiple comparisons. For
example, 1) testing whether the percentage of sexual assaults among Army Reserve is the same
as the percentage of sexual assaults across all other components, and 2) testing that the
percentage of sexual harassments for Navy Reserve is the same as the percentage of sexual
harassments with all other components and so on. When performing multiple independent
comparisons on the same data the question becomes: “Does the interpretation of the p-value for a
single statistical test hold for multiple comparisons?” If 200 independent statistical
(significance) tests were conducted at the 0.05 significance level, and the null hypothesis were
true for all of them, 10 of the tests would be expected to be significant at the p-value < 0.05
level simply due to chance. These 10 tests would have been incorrectly judged statistically
significant—known as false positives or false discoveries. When a single significance test is
conducted, the error rate—the probability of a false discovery—is the significance level itself.
When more than one significance test is conducted, the probability of false discoveries and the
expected number of false discoveries increase; that is, the more tests that are conducted, the
greater the number of false discoveries.
This is known in statistical hypothesis testing as the multiple comparisons problem.
Therefore, it is important to control the false discoveries when performing multiple independent
tests to reach more accurate conclusions. Numerous techniques have been developed to control
the false positive error rate associated with conducting multiple statistical tests (multiple
comparisons) and there is no universally accepted approach for dealing with it.
The method that DMDC uses to control for false discoveries is known as the False
Discovery Rate correction (FDR) developed by Benjamini and Hochberg (1995). FDR is
defined as the expected percentage of erroneous rejections among all rejections. The goal is to
control the false discovery rate which is the proportion of "discoveries" (significant results) that
are actually false positives. The approach can be summarized as follows:


• Determine the number of comparisons (tests) of interest, call it m;

• Determine the tolerable False Discovery Rate (FDR rate), call it α;

• Calculate the p-value for each statistical test;

• Sort the individual p-values from smallest to largest and rank them, call the rank k;

• For each ranked p-value, calculate the FDR-adjusted alpha (threshold), which is
defined as (k × α)/m;

• Determine the cutoff delineating statistically significant results from non-significant
results in the sorted file as follows: look for the maximum rank k such that the
ordered p-value is less than or equal to the FDR-adjusted alpha (i.e., the maximum k
after which the p-value becomes greater than the threshold), and call this maximum k
the cutoff. Any comparison (p-value) with rank at or below the cutoff is considered
statistically significant. A short sketch of this procedure appears after the list.
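The following is a short R sketch of the procedure on hypothetical p-values; base R's p.adjust(..., method = "BH") yields the same accept/reject decisions and is shown for comparison.

# Benjamini-Hochberg step-up procedure on a vector of p-values (hypothetical example).
fdr_significant <- function(p, alpha = 0.05) {
  m      <- length(p)                      # number of comparisons
  ord    <- order(p)                       # ranks k = 1..m from smallest to largest
  thresh <- seq_len(m) * alpha / m         # FDR-adjusted alpha (k * alpha / m)
  below  <- which(p[ord] <= thresh)        # ranks whose p-value is under its threshold
  cutoff <- if (length(below)) max(below) else 0
  sig <- logical(m)
  sig[ord[seq_len(cutoff)]] <- TRUE        # everything ranked at or below the cutoff
  sig
}

p <- c(0.001, 0.008, 0.039, 0.041, 0.20, 0.74)
fdr_significant(p, alpha = 0.05)           # TRUE for the two smallest p-values only
p.adjust(p, method = "BH") <= 0.05         # same decisions using base R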

DMDC computed the FDR thresholds (FDR adjusted alpha) separately for the two types
of comparisons—current year and trends. For both types of tests, DMDC implemented FDR
Multiple Comparison corrections to control the expected rate of false discoveries (Type I errors)
at ∝ = 0.05. For the current year estimates from the 2015 WGRR, DMDC performed 59,724
separate statistical tests (e.g., testing whether the sexual assault rate among Army Reserve is the
same as the sexual assault rate across all other components). Of the 59,724 current year
statistical tests, 19,165 were statistically significant. In addition, DMDC performed another 180
separate statistical tests for trends to compare estimates from the 2015 WGRR to the 2014
RMWS. For trends, 27 of the 180 statistical tests were significant.


Location, Completion, and Response Rates
Location, completion, and response rates were calculated in accordance with the
recommendations of the American Association for Public Opinion Research (AAPOR, 2015
Standard Definitions), which estimates the proportion of eligible respondents among cases of
unknown eligibility.
The location rate (LR) uses AAPOR standard formula CON2 and is defined as
\[
LR = \frac{(I + P) + (R + NC + O)}{(I + P) + (R + NC + O) + e(UO)}
   = \frac{\text{adjusted located sample } (N_L)}{\text{adjusted eligible sample } (N_E)} .
\]

The completion rate (CR) uses AAPOR standard formula COMR and is defined as

\[
CR = \frac{(I + P)}{(I + P) + (R + NC + O)}
   = \frac{\text{usable responses } (N_R)}{\text{adjusted located sample } (N_L)} .
\]

The response rate (RR) uses AAPOR standard formula RR4 and is defined as

\[
RR = \frac{(I + P)}{(I + P) + (R + NC + O) + e(UO)}
   = \frac{\text{usable responses } (N_R)}{\text{adjusted eligible sample } (N_E)} .
\]

Where
I = Fully complete responses according to RR4 (> 80% complete)
P = Partially complete responses according to RR4 (50 – 80% complete)
R = Refusal and break-off according to RR4 (< 50% complete)
NC = Non-contact
O = Other
e(UO) = Estimated eligibility of cases unknown
NL = Adjusted located sample
NE = Adjusted eligible sample
NR = Usable responses
Table 9 shows the corresponding sample disposition codes associated with the response
categories.


Table 9.
Disposition Codes for Response Rates

Response Category          SAMP_DC Values
Eligible Sample            4, 5, 8, 9, 10, 11
Located Sample             4, 5, 8, 9, 11
Usable Response            4
Not Returned               11
Eligibility Determined     2, 3, 4, 5, 8, 9
Self-Report Ineligible     2, 3

Ineligibility Rate
The ineligibility rate (IR) is defined below and is calculated on both an unweighted and a
weighted basis using the disposition categories in Table 9:
IR = Self Report Ineligible/Eligibility Determined.
Estimated Ineligible Postal Non-Deliverable/Not Located Rate
The estimated ineligible postal non-deliverable or not located (IPNDR) is defined as:
IPNDR = (Eligible Sample - Located Sample) * IR.
Estimated Ineligible Nonresponse
The estimated ineligible nonresponse (EINR) is defined as:
EINR = (Not Returned) * IR.
Adjusted Location Rate
The adjusted location rate (ALR) is defined as:
ALR = (Located Sample - EINR)/(Eligible Sample - IPNDR - EINR).
Adjusted Completion Rate
The adjusted completion rate (ACR) is defined as:
ACR = (Eligible Response)/(Located Sample - EINR).


Adjusted Response Rate
The adjusted response rate (ARR) is defined as:
ARR = (Eligible Response)/(Eligible Sample - IPNDR - EINR).
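The following R lines reproduce the unweighted rates in Table 11 from the unweighted disposition counts in Tables 3 and 10 (a check of the formulas above, not DMDC's production code; the eligibility-determined count is the sum of dispositions 2, 3, 4, 5, 8, and 9 from Table 3):

# Unweighted counts, taken from Tables 3 and 10.
eligible_sample  <- 473603   # SAMP_DC 4, 5, 8, 9, 10, 11
located_sample   <- 427011   # SAMP_DC 4, 5, 8, 9, 11
usable_responses <- 87127    # SAMP_DC 4
not_returned     <- 336112   # SAMP_DC 11
elig_determined  <- 92440    # SAMP_DC 2, 3, 4, 5, 8, 9
self_report_inel <- 1541     # SAMP_DC 2, 3

IR    <- self_report_inel / elig_determined              # ineligibility rate
IPNDR <- (eligible_sample - located_sample) * IR         # est. ineligible, not located (~777)
EINR  <- not_returned * IR                               # est. ineligible nonresponse (~5,603)
ALR   <- (located_sample - EINR) / (eligible_sample - IPNDR - EINR)
ACR   <- usable_responses / (located_sample - EINR)
ARR   <- usable_responses / (eligible_sample - IPNDR - EINR)
round(100 * c(location = ALR, completion = ACR, response = ARR), 1)
# location = 90.2, completion = 20.7, response = 18.6 (matches Table 11, unweighted)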
Table 10 shows the weighted sample counts used to compute the overall response rates.
The final response rate is the product of the location rate and the completion rate. Table
11 shows both weighted and unweighted location, completion, and response rates for the 2015
WGRR.
Finally, Table 12 shows weighted location, completion, and response rates for the full
sample by the stratification variables. The final weighted response rate for the survey was 19.8
percent, which rounds to 20 percent.

Table 10.
Comparison of the Final Weighted Respondents Relative to the Drawn Sample

                                                   Sample Counts            Weighted Estimates
Case Disposition Categories                    Sample Size   Percent   Estimated Total   Percent
Drawn sample and population                        485,774      100%           817,007      100%
Ineligible on master files                         -10,630      2.2%           -16,904      2.1%
Self-reported ineligible                            -1,541      0.3%            -2,677      0.3%
Total: Ineligible                                  -12,171      2.5%           -19,581      2.4%
Eligible sample                                    473,603     97.5%           797,427     97.6%
Not located (estimated ineligible)                    -777      0.2%            -1,201      0.1%
Not located (estimated eligible)                   -45,815      9.4%           -72,978      8.9%
Total not located                                  -46,592      9.6%           -74,179      9.1%
Located sample                                     427,011     87.9%           723,248     88.5%
Requested removal from survey mailings              -1,176      0.2%            -2,248      0.3%
Returned blank                                        -611      0.1%            -1,081      0.1%
Skipped key questions                               -1,985      0.4%            -3,273      0.4%
Did not return a survey (estimated ineligible)      -5,603      1.2%            -9,072      1.1%
Did not return a survey (estimated eligible)      -330,509     68.0%          -551,463     67.5%
Total: Nonresponse                                -339,884     70.0%          -567,137     69.4%
Eligible responses                                  87,127     17.9%           156,111     19.1%


Table 11.
Location, Completion, and Response Rates

Type of Rate   Computation                                         Unweighted   Weighted
Location       Adjusted located sample/Adjusted eligible sample         90.2%      90.7%
Completion     Usable responses/Adjusted located sample                 20.7%      21.9%
Response       Usable responses/Adjusted eligible sample                18.6%      19.8%

Table 12.
Rates for Full Sample and Stratification Level of Variable

Domain Variable      Domain                    Sample    Eligible     Sum of   Location   Completion   Response
                                                 Size   Responses    Weights       Rate         Rate       Rate
Sample               Full Sample              485,774      87,127    817,007      90.7%        21.9%      19.8%
Reserve Component    Army National Guard      186,481      25,172    348,599      89.7%        18.0%      16.1%
(RORG_CD)            US Army Reserve          121,036      18,674    197,698      90.3%        19.1%      17.2%
                     US Navy Reserve           36,245       8,053     58,227      85.9%        28.1%      24.1%
                     US Marine Corps Reserve   36,364       4,002     38,468      88.6%        13.7%      12.2%
                     Air National Guard        61,695      19,195    104,818      96.0%        33.1%      31.8%
                     US Air Force Reserve      43,953      12,031     69,197      94.5%        29.8%      28.2%
Gender               Male                     331,332      52,421    662,565      90.8%        21.0%      19.0%
(RSEX2)              Female                   154,442      34,706    154,442      90.4%        25.8%      23.3%
Paygrade Grouping    E1-E4                    238,102      18,575    352,772      86.6%         9.0%       7.8%
(RPAYGRP9)           E5-E9                    179,140      45,178    336,347      93.2%        28.4%      26.5%
                     W1-W5                      5,773       2,203     12,187      97.0%        40.6%      39.4%
                     O1-O3                     33,684       9,125     59,530      93.2%        29.0%      27.0%
                     O4-O6                     29,075      12,046     56,171      97.5%        44.4%      43.3%
Reserve Program      TPU                      414,431      57,902    666,695      89.7%        16.4%      14.7%
(RPROG1)             AGR                       33,432      13,773     76,747      93.9%        45.5%      42.7%
                     MilTech                   29,273      12,737     61,484      97.1%        44.6%      43.4%
                     IMA                        8,638       2,715     12,082      96.6%        34.5%      33.3%

Nonresponse Bias Analysis
Survey nonresponse has the potential to introduce bias in the estimates of key outcomes.
To the extent that nonrespondents and respondents differ on observed characteristics, DMDC can
use weights to adjust the sample so the weighted respondents match the full population on the
most critical characteristics. This eliminates the portion of nonresponse bias (NRB) associated
with those observed variables if these variables are strongly associated with the behaviors.
When all NRB can be eliminated in this manner, the missingness is called ignorable or missing
at random (Little & Rubin, 2002). The more observable demographic variables that were


incorporated into the weights, the more plausible it is to assume that the weights eliminate any
NRB.
The objective of this research was to assess the extent of NRB for the estimated
percentage of sexual assaults (henceforth referred to as the sexual assault rate) in the
Reserve components. The purpose of the percentage of sexual assaults was to provide the policy
offices and the Department with an overall estimate of Reserve component members who
experienced sexual assault in the last 12 months. The level of nonresponse bias can vary for
every question on the survey, but DMDC focused on the sexual assault rate because this tended
to be one of the more central questions on the survey. Nonresponse bias occurs when survey
respondents are systematically different from nonrespondents. Statistically, the bias in a
respondent mean (e.g., sexual assault rate) is a function of the response rate and the relationship
(covariance) between response propensities and the estimated statistics (i.e., sexual assault rate),
and takes the following form:

Bias(ȳ_r) ≈ σ_yp / p̄

where σ_yp is the covariance between the survey variable y (e.g., the sexual assault indicator)
and the response propensity p, and p̄ is the mean response propensity (approximately the
expected response rate).
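A minimal simulation sketch of this approximation is shown below (Python). The outcome prevalence, propensities, and their correlation are hypothetical and serve only to show how the covariance term and the mean propensity combine.

```python
import numpy as np

# Minimal sketch (simulated data): bias(y_bar_r) ~= cov(y, p) / p_bar.
rng = np.random.default_rng(0)

N = 100_000
y = rng.binomial(1, 0.03, size=N)          # outcome indicator (hypothetical prevalence)
p = np.clip(0.20 + 0.05 * y, 0, 1)         # response propensity, here correlated with y

sigma_yp = np.cov(y, p)[0, 1]              # covariance between outcome and propensity
p_bar = p.mean()                           # mean propensity (expected response rate)
approx_bias = sigma_yp / p_bar

# Compare with the bias of the respondent mean in one simulated realization.
responded = rng.binomial(1, p).astype(bool)
observed_bias = y[responded].mean() - y.mean()
print(f"approximation: {approx_bias:.5f}, simulated: {observed_bias:.5f}")
```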
NRB can occur with high or low survey response rates, but the decline in overall survey
response rates over the past decade, both within the Department and in civilian studies, has
resulted in a greater focus on potential NRB. DMDC investigated the presence of NRB using
several different methods; this report summarizes the following methods and results:
1. Compare known population values with weighted survey estimates,
2. Analyze item missing data for sexual assault questions,
3. Analyze DMDC’s survey of nonrespondents, and
4. Evaluate the sensitivity of different post-survey adjustments (weighting methods) on
survey estimates.
Summary of Findings
NRB is difficult to assess. Most authors recommend averaging across several different
studies to measure NRB (Montaquila & Olson, 2012). DMDC has taken that approach here and
conducted four studies to assess NRB in sexual assault estimates. Based on these four studies,
DMDC does not find evidence of significant NRB in sexual assault estimates from the 2015
WGRR.
We summarize the results from each study below:
1. Compare known population values with weighted survey estimates—DMDC
compared weighted estimates of officially reported sexual assaults from the survey
with actual sexual assaults reported to SAPRO. The survey estimates are higher than
the official reports (but within the margins of error), and a possible conclusion is that
sexual assault victims are more likely to complete the 2015 WGRR. However, this
finding would contradict earlier NRB studies on sexual assault, in which both DMDC
and RAND concluded that sexual assault was underestimated (DMDC, 2013;
Morral, 2014), and should be taken with caution. From this analysis, DMDC
concludes there is little evidence of NRB in sexual assault estimates from the 2015
WGRR.
2. Analyze item missing data for sexual assault questions—Item missing data rates
for the 2015 WGRR sexual assault questions are similar to missing data rates from
other DMDC surveys. In addition, there is no evidence that prior victims of sexual
assault are completing the survey at different rates than members who have not been
previously assaulted. From this analysis, DMDC concludes there is little evidence of
NRB in sexual assault estimates from the 2015 WGRR.
3. Analysis of DMDC’s survey of nonrespondents—Estimates from the 2015 WGRR
nonresponse study (2015 WGRR-N) are comparable to production estimates. The
estimates for three out of four matching questions are within one percentage point of
each other, and none of the four differences is statistically significant. From this
analysis, DMDC concludes there is little evidence of NRB in sexual assault estimates
from the 2015 WGRR.
4. Evaluate the sensitivity of different post-survey adjustments (weighting
methods) on survey estimates—Analysis of estimates using two different weighting
methods show both the weights and key survey estimates are robust to the choice of
weighting methods. From this analysis, DMDC concludes there is little evidence of
NRB in sexual assault estimates from the 2015 WGRR.
Section 1: Compare Known Population Values with Weighted Survey Estimates
To assess total survey error, one common method is to compare a known parameter to a
weighted estimate from the survey. If DMDC’s sampling, measurement, weighting, and analysis
methods performed well, the confidence intervals of the estimates should frequently contain the
true parameters. In this investigation, DMDC examined the number of reported sexual assaults in
the Reserve component. A similar type of analysis was performed by RAND for the 2014 RMWS
(Morral, 2016). It is important to point out that DMDC does not know the true number of sexual
assaults in the US military; many sexual assaults go unreported for many reasons, including
potential retaliation. However, the number of sexual assaults reported to the US military can be compared.
DMDC was able to compare the number of sexual assault reports filed by Reserve and National
Guard members to weighted estimates from survey respondents to assess NRB (and overall total
survey error).
The Sexual Assault Prevention and Response Office (SAPRO) provided DMDC with
summary information of the number of reported sexual assaults (unrestricted and restricted) to
either a Service Sexual Assault Response Coordinator (SARC) or the National Guard SARC.
The report containing the summary information was collected in the Defense Sexual Assault
Incident Database (DSAID). For a record to be entered into DSAID, the survivor needed to
complete a Victim Reporting Preference Statement (DD Form 2910) that indicates whether the
survivor would like to make either a restricted or unrestricted report. The information is then
captured in DSAID. DMDC requested the number of sexual assaults reported from August 1,
2014 through October 31, 2015 (14 months of data). DMDC requested 14 months of data in
order to mirror the 12-month time frame corresponding with the survey administration period,
using the following three time periods:

  •  August 1, 2014 through July 31, 2015
  •  September 1, 2014 through August 31, 2015
  •  October 1, 2014 through September 30, 2015

The 2015 WGRR survey was fielded from August 2015 through October 2015, so the
summaries shown above serve as comparable 12-month periods. The summary files contained
information about the number of sexual assaults by Reserve component as well as whether the
report type was restricted or unrestricted. On the 2015 WGRR survey, sexual assault survivors
were asked follow-up questions to determine 1) whether they filed a formal report of sexual
assault, 2) the type of report filed, and 3) whether the sexual assault occurred within the last 12
months. The 2015 WGRR survey questions regarding these behaviors are displayed in Table 13.


Table 13.
2015 WGRR Reporting Questions

The DSAID summary file provided by SAPRO contained an average of 836 restricted
and unrestricted reports of sexual assault within the Reserve components during the 12 month
periods. Table 14 contains a summary of the average 12 month reports, by component and type
of report.


Table 14.
Summary of Sexual Assault Reports in DSAID by Component

Service/Component   | Restricted | Unrestricted | Total
Army                | 151        | 451          | 602
  National Guard    | 116        | 315          | 431
  Reserve           | 35         | 136          | 171
Navy                | 14         | 29           | 43
  Reserve           | 14         | 29           | 43
Marine Corps        | 3          | 19           | 22
  Reserve           | 3          | 19           | 22
Air Force           | 59         | 109          | 168
  National Guard    | 37         | 58           | 95
  Reserve           | 22         | 51           | 73
Not Available       | 1          | 0            | 1
  National Guard    | 1          | 0            | 1
Total               | 228        | 608          | 836
Note. There is one report that does not have enough information to determine the Service and is categorized in the table as not available.

DMDC used three criteria from the survey to compare the information provided by
respondents on the 2015 WGRR to the summary of actual numbers reported in a 12-month period
in DSAID. The three questions were:

  •  Answered “Yes” to having reported the sexual assault (Q179),
  •  Answered “restricted” or “unrestricted” (Q180), and
  •  Answered in a way that confirms the sexual assault occurred in the past 12 months (Q201).

There were 169 respondents from the survey who indicated in Q179 that they had reported a
sexual assault. Of the 169 respondents, 119 indicated the sexual assault occurred in the last 12
months (Q201) and that they filed either a restricted or unrestricted report. The weighted
estimate based on these 119 respondents is 1,013, compared with the 836 cases from DSAID.
While the number of DSAID cases falls within the confidence interval of the survey estimate, the
survey overestimated the known value by a fairly large amount (21.2%). The survey estimates
for restricted and unrestricted reports both similarly overestimate the true values. There were 40
responding members who indicated they filed a restricted report and 79 members who indicated
they filed an unrestricted report. Table 15 shows a summary of the number of respondents,
estimates from the survey, lower and upper 95-percent confidence bounds, and the actual number
of reports from the DSAID database. Although the true number of reports falls within the
confidence intervals in each case, the weighted estimate is consistently higher. Potential reasons
for this difference could be that responding members are mistakenly saying they filed a report
(measurement error) or that members who filed a report respond at higher rates. Another
potential reason for overestimating the number of DSAID cases was the survey question itself.
For all paper surveys, the sexual assault measure time reference was a static date (August 10,
2014). This differs from the web form, which used a dynamic date. For example, someone filling
out the paper survey could have responded as late in the fielding period as October 9, 2015; for
these respondents, the question could potentially cover more than the intended 12 months. In all
situations described, the actual number falls within the confidence interval, and this provides
some evidence that the estimates are of less concern for NRB. RAND performed a similar
analysis for the active duty survey in the 2014 RMWS and found evidence that the true number
reported was actually more than the estimate. This led RAND to conclude that sexual assault
victims who report are less likely to respond to the survey.
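As an illustration of this type of comparison, the sketch below (Python, with simulated weights and report indicators) computes a weighted total and a normal-approximation 95-percent confidence interval, then compares it with a benchmark count such as the DSAID average. The production report uses the survey's design-based variance estimator, so this simplified with-replacement approximation is only illustrative.

```python
import numpy as np

def total_and_ci(weights, indicator, z=1.96):
    """Weighted total of a 0/1 indicator with a simple with-replacement
    variance approximation (not the production design-based estimator)."""
    w = np.asarray(weights, dtype=float)
    y = np.asarray(indicator, dtype=float)
    n = len(w)
    contrib = w * y
    total = contrib.sum()
    var = n / (n - 1) * ((contrib - contrib.mean()) ** 2).sum()
    se = np.sqrt(var)
    return total, total - z * se, total + z * se

# Hypothetical inputs: final weights for eligible respondents and an indicator
# for "filed a restricted or unrestricted report in the past 12 months".
rng = np.random.default_rng(1)
weights = rng.gamma(shape=2.0, scale=4.5, size=87_127)
reported = rng.binomial(1, 0.0014, size=87_127)

estimate, lo, hi = total_and_ci(weights, reported)
dsaid_benchmark = 836
print(f"estimate {estimate:,.0f} (95% CI {lo:,.0f}-{hi:,.0f}); benchmark {dsaid_benchmark}")
```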

Table 15.
Estimated vs. Actual Number of Reported Sexual Assaults

Type of Sexual Assault | Number of Respondents that Filed a Report | Weighted Total Estimate from Survey | 95% CI Lower Bound | 95% CI Upper Bound | Number of Reports in DSAID
Restricted             | 40                                         | 254                                 | 176                | 332                | 228
Unrestricted           | 79                                         | 759                                 | 566                | 953                | 608
Total                  | 119                                        | 1,013                               | 805                | 1,222              | 836
Note. The number of reports in DSAID is the average of the three 12-month totals within DSAID (August 1, 2014 through July 31, 2015; September 1, 2014 through August 31, 2015; and October 1, 2014 through September 30, 2015).

Likelihood of Victims to Respond to Surveys
DMDC determined NRB could exist if sexual assault survivors who made either an
unrestricted or restricted report were more or less likely to complete the survey. If sexual assault
survivors who made a formal report were more likely to complete the survey, this could provide
some evidence for overestimating questions related to having experienced a sexual assault.
While DMDC can control for many factors in the weighting, adjusting for having experienced a
sexual assault in the past 12 months was not possible. If sexual assault survivors who filed a
report were more or less likely to fill out the survey, this could be evidence for over- or
underestimating questions related to sexual assault. This analysis, however, is limited because it
includes only those sexual assault survivors who made an unrestricted report of an assault. If
sexual assault survivors who do not report an assault, or who make a restricted report, have
different response behaviors than those who make an unrestricted report, the analysis could be
misleading. However, since DMDC does not have a list of all sexual assault survivors, this
subset of the total was used.
The DSAID file provided by SAPRO contained 1,044 records, of which 752 were
unrestricted and 292 were restricted. The restricted reports did not contain names or social
security numbers (SSNs), so only the 752 unrestricted records could potentially be matched.
After removing blank, invalid, and duplicate SSNs, there were 714 potential records to match to
the sample. DMDC matched the 714 SSNs to the 2015 WGRR sample of 485,774 and found that
578 matched. DMDC then computed the response rate for these matches and found a sample-weighted
response rate of 19.6 percent for the sexual assault survivors who reported an incident.
This estimate is not significantly different from the overall response rate of 19.9 percent. Based
on this analysis, there is no evidence to conclude that sexual assault survivors who made an
unrestricted report are more or less likely to respond to the survey.
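The matching and comparison described above can be sketched as follows. The person identifiers, column names, and toy records are hypothetical; the production matching used SSNs on the actual sample file and base weights.

```python
import pandas as pd

# Minimal sketch (hypothetical data): match DSAID unrestricted reports to the drawn
# sample by person identifier and compare weighted response rates.
sample = pd.DataFrame({
    "person_id": [1, 2, 3, 4, 5],
    "base_weight": [1.5, 2.0, 1.2, 3.0, 2.2],
    "responded": [1, 0, 1, 0, 1],            # 1 = eligible complete response
})
dsaid = pd.DataFrame({"person_id": [2, 3]})   # members with an unrestricted report

matched = sample.merge(dsaid, on="person_id", how="inner")

def weighted_response_rate(df):
    return (df["base_weight"] * df["responded"]).sum() / df["base_weight"].sum()

print(f"full sample: {weighted_response_rate(sample):.1%}")
print(f"matched reporters: {weighted_response_rate(matched):.1%}")
```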

Table 16.
Comparison of Response Rates from Reported Survivors to Full Sample

Group                                   | Number in Sample | Response Rate
Full Sample                             | 485,774          | 19.9%
Sexual Assault Survivors that Reported  | 578              | 19.6%
Note. The estimates are based on the 752 unrestricted reports made.

Summary of Comparison of Known Population Values with Weighted Survey
Estimates
The purpose of this section of the NRB analysis was to determine whether there were any
differences between population estimates made from the sample respondents and a source of
trusted data (cases in the DSAID database). Differences between population estimates and
known data would be an indication of possible total survey error caused in part by nonresponse.
A second analysis compared response rates of sexual assault survivors who made an unrestricted
report to response rates for all members of the sample. Differences in response rates would
indicate that sexual assault survivors potentially respond at different rates, and because this
cannot be controlled for in the weighting, it would be a concern for NRB. In both cases, there
was no reason to conclude that NRB existed in the estimates related to sexual assault. The overall
number of sexual assault survivors who made an unrestricted or restricted report fell within the
confidence bounds; this was also true for unrestricted and restricted reports separately. In
addition, the response rates for sexual assault survivors who made an unrestricted report were no
different from those of the overall sample. From this analysis, DMDC concludes there is little or
no evidence of NRB associated with the estimates of sexual assaults.
Section 2: Analyze Item Missing Data and Drop Offs for Sexual Assault
Questions
In this section, DMDC analyzed item missing data for the six key WGRR 1501 sexual
assault (SA) behavior questions from all web returns to investigate whether respondents may
refuse these sensitive questions at different rates. In addition, DMDC conducted an analysis to
investigate whether certain members quit the survey altogether (i.e., drop off) because of the
sensitive and graphic sexual assault questions. If the decision to refuse to answer the questions or
to drop off is not random (i.e., those who avoid the SA questions have different sexual assault
rates than complete respondents), then a source of NRB exists. DMDC cannot directly test this
hypothesis because the sexual assault status of these item missing cases is unknown. However,
DMDC indirectly assessed NRB by examining the missing data patterns and characteristics of
members who dropped off from the survey.


Item Missing Data in Sexual Assault Behavior Questions
The six SA behavior questions are displayed in Table 17.

Table 17.
2015 WGRR Sexual Assault (SA) Questions
2015 WGRR Sexual Assault Questions (SA1 through SA6)a

a The 2015 WGRR incorporated dynamic text in the SA questions to reflect “12 months prior dates” based on when the respondent started the survey. In addition, dynamic text was also used based on the gender of the respondent. DMDC uses [ ] to indicate in questions and descriptions where dynamic text was used.

As described in Table 3 (Case Dispositions for Weighting), 485,774 sample members
were selected for the 2015 WGRR. Most sampled members did not respond to the survey
(337,899 members, 69.5%); this is typical of other military surveys DMDC has conducted, and
these unit nonrespondents provide no information for this analysis. Unit nonrespondents include
those returning a blank survey, survey refusals, and other nonrespondents. DMDC keeps data on
the reasons for refusals, and inspection of the data revealed that 101 sample members (~0.0%)
identified the refusal reason as intrusive survey content. Blank surveys (11 cases by web) are
surveys that are returned with no answered questions; respondents’ motives for failing to start the
survey (and therefore their answers to the SA questions) are unknown, but DMDC suspects some
respondents have learned they can avoid future e-mail follow-ups by submitting a blank survey.
There were 46,592 (9.6%) sampled members who were not located (i.e., e-mail bounce-back or
postal non-deliverable). In addition, 12,171 (2.5%) members were ineligible to complete the
survey (i.e., no longer a member of the Selected Reserve, as determined by record ineligibility or
by survey or proxy responses). The remaining sample count of complete and incomplete
(subsequently referred to as partial) survey respondents is 89,112 (18.3%).
Table 18 shows the breakdown of missingness on the six SA questions for the complete
eligible and partial web and paper respondents. The Total Missing column indicates the amount
of missingness for each of the questions (ranging from 2.4% to 3.3%). The following columns
indicate the number of missing complete respondents (MC) and missing partial respondents
(MP). There is the same number of MP respondents (1,985) for each SA question because
incomplete eligibles (SAMP_DC=5) did not answer any of the sexual assault questions. MC
respondents answered at least one sexual assault question (at least one question from SA1
through SA6) and were missing on one or more of the other sexual assault questions. In
addition, this information is broken down by gender. The percentage of missing data for the SA
questions is slightly higher for females (by about one percentage point for each SA question).
Because women have higher sexual assault rates than men, it is possible that women skip the SA
questions at higher rates because experiencing a sexual assault increases the likelihood that they
avoid these questions. If this were true, it could cause some underreporting of sexual assault in
the 2015 WGRR. It is important to note that the missing data rates for the SA questions are
similar to those for other base questions from the 2015 WGRR survey. The average missing data
percentage for base questions2 prior to the sexual assault questions is 1.5 percent; the average for
base questions after the sexual assault questions is 4.4 percent.
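A minimal sketch of how item missing-data rates of this kind can be tabulated, overall and by gender, is shown below using pandas. The simulated answer matrix and missingness rates are purely illustrative.

```python
import numpy as np
import pandas as pd

# Minimal sketch (simulated data): item missing-data rates for six SA questions,
# overall and by gender, in the spirit of Table 18.
rng = np.random.default_rng(2)
n = 1_000
sa_cols = [f"SA{i}" for i in range(1, 7)]
df = pd.DataFrame({c: rng.choice([1.0, 0.0, np.nan], size=n, p=[0.01, 0.96, 0.03])
                   for c in sa_cols})
df["gender"] = rng.choice(["Male", "Female"], size=n)

missing_overall = df[sa_cols].isna().mean()            # Total Missing rate per question
missing_by_gender = df.groupby("gender")[sa_cols].apply(lambda g: g.isna().mean())
print(missing_overall.round(3))
print(missing_by_gender.round(3))
```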
Questions SA4 and SA5 are likely less sensitive to potential respondents than the other
four SA questions because they avoid the graphic description of body parts and only involve
descriptions of touching. Generally, the missing data rates increase in a relatively linear way as
respondents progress through the survey due to survey drop-offs. Table 18 shows that the
missing completes increase from SA1 through SA3 but decrease at SA4 and SA5 (both 3.1%)
before increasing again to SA6 (up to 3.3%). The lower missing data rates for the less sensitive
questions may provide some evidence that the more graphic SA questions caused respondents to
either skip questions or quit the survey.

2 Base questions were defined as questions that were not embedded in any skip pattern.


Table 18.
Breakdown of Sample Cases to Assess Item Missing Data for Sexual Assault Questions

Sexual Assault Question | Answered Yes | Answered No | Total Missing a (MC+MP) | Missing Complete (MC) (SAMP_DC=4) | Missing Partial (MP) (SAMP_DC=5) | Missing, Male (n=53,421) | Missing, Female (n=35,691)
SA1 (Q67)               | 343          | 86,593      | 2,176 (2.4%)            | 191                               | 1,985                            | 1,102 (2.1%)             | 1,074 (3.0%)
SA2 (Q81)               | 235          | 86,411      | 2,466 (2.8%)            | 481                               | 1,985                            | 1,275 (2.4%)             | 1,191 (3.3%)
SA3 (Q97)               | 98           | 86,122      | 2,892 (3.2%)            | 907                               | 1,985                            | 1,531 (2.9%)             | 1,361 (3.8%)
SA4 (Q113)              | 1,033        | 85,288      | 2,791 (3.1%)            | 806                               | 1,985                            | 1,467 (2.7%)             | 1,324 (3.7%)
SA5 (Q129)              | 286          | 86,038      | 2,788 (3.1%)            | 803                               | 1,985                            | 1,489 (2.8%)             | 1,299 (3.6%)
SA6 (Q145)              | 198          | 85,931      | 2,983 (3.3%)            | 998                               | 1,985                            | 1,587 (3.0%)             | 1,396 (3.9%)
a The Total Missing denominator is 89,112 members. For example, in the SA1 row, 343 victims (Yes) + 86,593 No’s + 2,176 Total Missing equals 89,112.

DMDC explored the missing data patterns for “yes” and “no” answers to each SA
question. Table 19 shows that the overwhelming majority of respondents who answered “Yes” to
an SA question completed all other SA questions (ranging from 95.0% to 98.3%). This analysis
shows that SA survivors generally answer the full set of SA questions. Respondents who selected
“No” for SA1-SA6 answered the full sexual assault set at even higher rates (ranging from 97.1%
to 98.0%). One possibility is that respondents who have experienced an SA behavior may be less
likely to answer additional sensitive questions because doing so may provoke negative emotions.
A second possibility is that the survey is simply longer for SA victims. For each “yes” to
SA1-SA6, the respondent is presented the legal criteria item bank (11 to 13 items) until the legal
requirement is met. For example, if a respondent answered “yes” to SA1, they see 11 legal
criteria items. If the behavior met the legal criteria and the respondent experienced additional
behaviors (SA2-SA6), they are not presented the legal criteria item bank again. Table 19 shows
limited evidence that some SA behaviors may be slightly underreported because SA survivors
skip SA questions at a higher rate than non-survivors.


Table 19.
Missing Data Analysis for Answers to SA1-SA6

Sexual Assault Question | Answered “Yes” | Completed all other SA questions | Percent completed | Answered “No” | Completed all other SA questions | Percent completed
SA1 (Q67)               | 343            | 326                              | 95.0%             | 86,593        | 84,043                           | 97.1%
SA2 (Q81)               | 235            | 228                              | 97.0%             | 86,411        | 84,141                           | 97.4%
SA3 (Q97)               | 98             | 94                               | 95.9%             | 86,112        | 84,275                           | 97.9%
SA4 (Q113)              | 1,033          | 993                              | 96.1%             | 85,288        | 83,376                           | 97.8%
SA5 (Q129)              | 286            | 281                              | 98.3%             | 86,038        | 84,088                           | 97.7%
SA6 (Q145)              | 198            | 194                              | 98.0%             | 85,931        | 84,175                           | 98.0%

DMDC examined item missing data from the 2014 Status of Forces Survey of Reserve
Component Members (2014 SOFR) to compare with 2015 WGRR missing data. Both surveys
have the same target population of Reserve and National Guard members, but the 2014 SOFR
questions are less sensitive. Table 20 shows the average missing rate for 2014 SOFR base
questions Q1-Q71 is 3.4 percent. DMDC used base questions for this analysis because they are
seen by all respondents (i.e., unaffected by skip patterns). There are a total of 49 base questions
from Q1-Q71. The set Q1-Q71 are also similar in location within the questionnaire to the 18
base questions within the 2015 WGRR (Q1-Q21). The average missing data rate for the 2015
WGRR (Q1-Q21) is 1.5 percent, which is lower than the 2014 SOFR base questions.
In addition, the analysis compares key questions on each survey that are presented as
single items and are not embedded in any skip logic. DMDC has observed from prior surveys
across different questionnaire content that grouped items and questions within deep skip logic
have higher rates of missing data. Therefore, if the 2015 WGRR has higher missing data rates, it
may be due to question sensitivity. The most common questions of interest on the 2014 SOFR are Q40,
Q42, and Q55. DMDC finds the less sensitive 2014 SOFR questions have similar missing data
rates to the sensitive 2015 WGRR questions. For example, the 2014 SOFR missing data rates for
questions on satisfaction, retention, and stress were 1.6, 1.7, and 3.1 percent, respectively. The
average 2015 WGRR missing rate for SA questions was 2.9 percent, very similar to the 2014
SOFR questions. If SA questions from 2015 WGRR had substantially higher missing data rates
than the 2014 SOFR, it may provide evidence that the sensitive questions negatively impact
respondents, and perhaps disproportionately SA victims. However, this analysis provides no
evidence that this is occurring.


Table 20.
Comparison Between 2015 WGRR and 2014 SOFR on Missingness for Non-Sensitive Items

Survey     | Question                                                                                                                                                  | Average Missing Data Rate
2014 SOFR  | Q1-Q71. Base items - Average missing rate                                                                                                                 | 3.4%
2015 WGRR  | Q1-Q21. Base items - Average missing rate                                                                                                                 | 1.5%
2014 SOFR  | Q40. Overall, how satisfied are you with the military way of life?                                                                                       | 1.6%
2014 SOFR  | Q42. Suppose that you have to decide whether to continue to participate in the National Guard/Reserve. Assuming you could stay, how likely is it that you would choose to do so? | 1.7%
2014 SOFR  | Q55. Overall, how would you rate the current level of stress in your military life?                                                                      | 3.1%
2015 WGRR  | SA1-SA6 - Average missing rate                                                                                                                            | 2.9%

Drop-off Analysis for Sexual Assault Questions
As mentioned previously, partial respondents are members who started the survey but did
not answer the sexual assault questions. DMDC wanted to understand why the partial
respondents did not complete the survey. Specifically, did partial respondents avoid answering
the SA questions due to their sensitive nature, or did they quit the survey prior to the SA
questions for another reason? To answer this question, DMDC conducted a drop-off analysis.
The drop-off analysis identifies the last question that a web survey respondent answered
with a valid response. The drop-off analysis is limited strictly to web respondents because they
cannot advance to see further questions without hitting the forward button, whereas paper
respondents can see all of the questions. For example, if a respondent answered Q1-Q10 and
quit, the drop-off analysis would indicate that the last question the respondent answered was Q10
and the first question they saw but did not answer was Q11. This drop-off analysis does not
account for “standard item missing data,” for instance when a respondent skips one question
(accidentally or on purpose) but returns to answer further questions. For instance, if a member
answered Q1-Q10, skipped Q11, answered Q12-Q20, and then answered no further questions,
the drop-off analysis would include the member in the count where Q20 was last answered.
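The logic of the drop-off analysis can be illustrated with the short sketch below. The answer matrix, question labels, and module mapping are hypothetical and far smaller than the actual instrument.

```python
import numpy as np
import pandas as pd

# Minimal sketch (hypothetical answer matrix): find the last question each web
# respondent answered and assign the drop-off to a content module.
answers = pd.DataFrame({          # NaN = not answered; columns in questionnaire order
    "Q1": [1, 1, 1],
    "Q2": [2, 1, np.nan],
    "Q3": [np.nan, 1, np.nan],
    "Q4": [np.nan, 2, np.nan],
})

def last_answered(row):
    answered = row.dropna()
    return answered.index[-1] if not answered.empty else None

last_q = answers.apply(last_answered, axis=1)

modules = {"Q1": "Demographics", "Q2": "Demographics",
           "Q3": "Time Reference", "Q4": "Time Reference"}
dropoff_module = last_q.map(modules)
print(dropoff_module.value_counts())
```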
Analysis of drop-offs for partial respondents is limited to the 1,958 web respondents
because it is impossible to determine which question a paper respondent was viewing when they
quit the survey. Due to the complexity of the survey instrument and the defined
sections that surround certain questions, DMDC grouped survey items based on their content for
questions preceding SA1 (Q67). Four content modules were identified by DMDC as follows:
1. Demographics—Reserve component member status and gender. (Q1, Q2)


2. Time Reference—Important key events to provide frame of reference for respondents
on the time frame of “12 months prior to taking the survey.” (Q3-Q5)
3. Gender-Related MEO Violations—Experiences of MEO violations (gender
discrimination and sexual harassment) in the 12 months prior to the survey. (Q6-Q21)
4. The Gender-Related MEO Violation with Greatest Effect—Circumstances pertaining
to the experience of MEO violation(s) in the past 12 months that had the greatest
impact on the respondent. (Q22-Q66)
Table 22 shows the drop-off analysis based on the content modules defined for the 1,958
partial web respondents. It is important to note that drop-offs represent a small minority of all
respondents who started the survey. For the modules identified, between 97.5 and 99.9 percent of
members who saw those specific questions answered them. In addition, these rates are consistent
across other DMDC surveys regardless of the nature of the content. The first three modules
(Demographics, Time Reference, and MEO Questions) are all base items, and respondents see
these questions in a linear order. As respondents progress through the survey, the level of
sensitive information increases. For instance, the Demographics module asks whether a member
was part of the Reserve component (Q1) and their gender (Q2). This information is not
considered sensitive, and 99.9 percent of all of those who saw the questions answered them. The
first time reference question, “Do you currently live in the same house or building that you did [a
year ago]?” (Q3), could be interpreted as a sensitive question by some, and this module makes up
8.3 percent of the drop-offs.
The gender-related MEO violation module is composed of sixteen questions asking the
member about gender-related experiences in the military. These questions are sensitive and
include a question that could potentially be classified as a sexual assault, “Since [a year ago], did
someone from work intentionally touch you in a sexual way when you did not want them to?”
(Q16). While the vast majority of respondents answered these questions (98.5% of all
respondents who saw Q6 through Q21 answered them), this section also comprised the largest
source of drop-offs on the entire survey (47.4% of drop-offs). Another large share of drop-offs
exited the survey in the MEO Violation with Greatest Effect module (27.6% of drop-offs). The
drop-off pattern showed relatively consistent attrition throughout the MEO section, and no single
question had an extreme spike in drop-offs.
The last content module displayed in Table 22 is the first sexual assault question, SA1
(Q67). There are five distinct paths respondents can take to view the first sexual assault question
(SA1) based on skip logic. DMDC determined that the total number of partial respondents who
dropped off “most likely” while viewing the first sexual assault question (SA1) was 221
members (11.3% of the drop-offs). This question caused the single largest number of drop-offs
for an individual question on the survey. However, it is important to note that while the 221
members represent the largest number of drop-offs for an individual question, similar numbers of
members dropped off at the Time Reference section (8.3% of the drop-offs), which has less
sensitive questions. Interestingly, Table 22 shows that males tended to drop off at much higher
rates during the first five questions, while females started to drop off at higher rates through the
MEO section; this effect did not carry through to the SA1 question. Because the SA1 question
creates the largest number of drop-offs, DMDC interprets this as some concern that the sensitive
questions have higher rates of missing data; however, the missing data rates for all WGRR
questions are still low.

Table 22.
2015 WGRR Drop-Off Analysis

Module                                                                      | Definition                                          | Number of Drop-offs (n=1,958) | Drop-offs: Male | Drop-offs: Female | Percent of all Drop-offs | Percent of Total Who Answered Web Question b (n=76,577)
Demographics (Q1-Q2)                                                        | Last question was Q1                                | 65                            | 42              | 23                | 3.3%                     | 99.9%
Time Reference (Q3-Q5)                                                      | Last question was between Q2 and Q4                 | 163                           | 107             | 56                | 8.3%                     | 99.7%
MEO Questions (Q6-Q21)                                                      | Last question was between Q5 and Q20                | 928                           | 519             | 409               | 47.4%                    | 98.5%
MEO Legal Criteria (Q22-Q52), MEO Violation with Greatest Effect (Q54-Q66)  | Last question was between Q21 and Q64, did not see Q67 | 540                        | 151             | 389               | 27.6%                    | 97.5%
Sexual Assault (Q67)                                                        | Last question was directly before Q67 a             | 221                           | 135             | 86                | 11.3%                    | 97.5%
All other questions                                                         | Last question was between Q208 and Q236             | 41                            | 25              | 16                | 1.9%                     | 91.5%
Total                                                                       |                                                     | 1,958                         | 979             | 979               | 100%                     |
a Respondents take different paths to the Sexual Assault question depending on their answers to the MEO questions.
b Percent who completed the last question in the module.

Summary of Item Missing Data and Drop-off for Sexual Assault
DMDC assessed the possible effects of item missing data on NRB through an analysis of
item missing data across the six SA questions on the 2015 WGRR, as well as an analysis of the
members who dropped off entirely from the survey. The item missing data rates on the 2015
WGRR are comparable to those of SOFR surveys, and therefore it does not appear that the
graphic and sensitive WGRR questions deter large numbers of respondents. Most members who
saw the sexual assault questions answered them (between 97.5% and 99.5%), but women were
slightly more likely than men to skip the SA questions (~3% compared to ~2%). In addition, the
pattern of drop-offs throughout the survey does not show that members are offended by the
sensitivity of the survey. DMDC assumes that some NRB may be introduced because the largest
single drop-off for partial respondents was directly preceding the SA1 question, and women
skipped this question at higher rates than men. However, DMDC also believes that the impact is
likely small because other non-sensitive questions have similar drop-off rates.


Section 3: Analysis of DMDC’s Survey of Nonrespondents
If survey respondents and nonrespondents have different sexual assault propensities that
cannot be accounted for during survey weighting, it would result in biased survey estimates of
sexual assault. DMDC conducted a nonresponse study (2015 WGRR-N) based on a sample of
nonrespondents from the original survey as another method to assess NRB. The 2015 WGRR-N
was a web questionnaire with only e-mail notifications (the 2015 WGRR had postal notifications
and paper form), and the questionnaire was much shorter than the 2015 WGRR. One purpose of
the survey was to evaluate NRB by comparing the responses from the follow-up survey to the
original survey. In particular, there were four questions that were asked on both the 2015 WGRR
and 2015 WGRR-N that could be used to assess NRB (see Table 21).

Table 21.
2015 WGRR-N Comparison Questions

If estimates from these matching questions were significantly different, this could be
evidence of NRB in these questions (and potentially other correlated questions). As described
earlier in the report, a sample of 485,774 was selected from the population of 817,007. There
were 336,112 nonrespondents (SAMP_DC=11) from the 2015 WGRR and a sample of 59,973
was selected for the 2015 WGRR-N to assess both objectives described earlier. The sample was
selected using the same sampling strata as the 2015 WGRR, since there were enough
nonrespondents per stratum, and the sampling tool was used to determine the sample allocation.


DMDC created sampling disposition codes using the criteria shown in Table 22. The table
shows that 1,330 of the sampled members were considered complete eligibles based on these criteria.

Table 22.
Sample Disposition Codes for 2015 WGRR-N

Sample Disposition Code (SAMP_DC) | Sample Disposition Code Description                                                                                                    | Number of Members
2                                 | Self-Report Ineligibles: The respondent answered “No” to Question 1 on the survey, “Were you a member of a Reserve component on November 30, 2015?” | 25
4                                 | Complete Eligibles: The respondent was eligible and completed 50% or more of the questions.                                           | 1,330
5                                 | Incomplete Eligibles: The respondent was eligible but failed to complete 50% or more of the questions.                                | 97
11                                | Nonrespondents: All others                                                                                                             | 58,521
Total                             |                                                                                                                                        | 59,973

Table 23 shows population, sample size, respondents, and response rate by key domains
(e.g., gender) for both the 2015 WGRR and 2015 WGRR-N. The weighted response rates for both
surveys have similar patterns (although rates are much lower for the 2015 WGRR-N). For
example, Air National Guard and Air Force Reserve were the highest responders for both the
2015 WGRR (31.1% and 27.4%) and 2015 WGRR-N (6.4% and 4.6%). Warrant Officers
responded at the highest rate for both surveys, although for 2015 WGRR-N the number of
respondents is relatively small (less than 30).


Table 23.
Comparison of 2015 WGRR Sample with Nonresponse Sample 2015 WGRR-N

Domain Variable             | Domain                  | Population | 2015 WGRR Sample Size | 2015 WGRR Eligible Responses | 2015 WGRR Weighted Response Rate | WGRR-N Sample Size | WGRR-N Eligible Responses | WGRR-N Weighted Response Rate
Sample                      | Full Sample             | 817,007    | 485,774               | 87,127                       | 19.8%                            | 59,973             | 1,330                     | 2.2%
Reserve Component (RORG_CD) | Army National Guard     | 348,599    | 186,481               | 25,172                       | 16.1%                            | 13,608             | 153                       | 1.1%
                            | US Army Reserve         | 197,698    | 121,036               | 18,674                       | 17.2%                            | 11,746             | 206                       | 1.8%
                            | US Navy Reserve         | 58,227     | 36,245                | 8,053                        | 24.1%                            | 8,016              | 218                       | 2.7%
                            | US Marine Corps Reserve | 38,468     | 36,364                | 4,002                        | 12.2%                            | 13,852             | 56                        | 0.4%
                            | Air National Guard      | 104,818    | 61,695                | 19,195                       | 31.8%                            | 6,085              | 392                       | 6.4%
                            | US Air Force Reserve    | 69,197     | 43,953                | 12,031                       | 28.2%                            | 6,666              | 305                       | 4.6%
Gender (RSEX2)              | Male                    | 662,565    | 331,332               | 52,421                       | 19.0%                            | 36,525             | 654                       | 2.9%
                            | Female                  | 154,442    | 154,442               | 34,706                       | 23.3%                            | 23,448             | 676                       | 2.9%
Paygrade Grouping (RPAYGRP9)| E1–E4                   | 352,772    | 238,102               | 18,575                       | 7.8%                             | 38,215             | 380                       | 1.0%
                            | E5–E9                   | 336,347    | 179,140               | 45,178                       | 26.5%                            | 16,074             | 644                       | 4.0%
                            | W1–W5                   | 12,187     | 5,773                 | 2,203                        | 39.4%                            | 269                | 18                        | 6.7%
                            | O1–O3                   | 59,530     | 33,684                | 9,125                        | 27.0%                            | 3,005              | 135                       | 4.5%
                            | O4–O6                   | 56,171     | 29,075                | 12,046                       | 43.3%                            | 2,410              | 153                       | 6.3%
Reserve Program (RPROG1)    | TPU                     | 666,695    | 414,431               | 57,902                       | 14.7%                            | 54,245             | 966                       | 1.8%
                            | AGR                     | 76,747     | 33,432                | 13,773                       | 42.7%                            | 2,692              | 164                       | 6.1%
                            | MilTech                 | 61,484     | 29,273                | 12,737                       | 43.4%                            | 1,737              | 151                       | 8.7%
                            | IMA                     | 12,082     | 8,638                 | 2,715                        | 33.3%                            | 1,299              | 49                        | 3.8%

Weighting the 2015 WGRR-N
Because some members were not eligible for the 2015 WGRR-N sample, it is not a
probability sample of all Reserve members and there is no method to create base weights.
However, weights can be constructed that approximately represent the Reserve component
population by direct post-stratification. DMDC weighted the 2015 WGRR-N using the following
process:


  •  Complete eligibles were placed into post-stratification cells based on a cross-classification of gender (male/female), paygrade (E1–E4, E5–E9, W1–O3, O4–O6), and component (Army Reserve, Army National Guard, Navy Reserve, Marine Corps Reserve, Air Force Reserve, and Air National Guard).
  •  Population totals for these initial post-stratification cells were determined based on the total sampling population for the 2015 WGRR survey.
  •  Post-stratification cells were combined if any had fewer than 20 complete eligibles, and cells were collapsed only across Reserve component levels. The rationale for combining at the component level was that the experiences of, for example, males who are E1–E4 are more likely to be similar across components than across any other potential collapsing dimension. One example is that the male/E1–E4 cells for Army Reserve, Army National Guard, and Marine Corps Reserve were combined, yielding 44 complete eligible respondents. After collapsing, DMDC created 26 post-stratification cells3.
  •  DMDC computed final weights by dividing the total population size by the number of complete eligibles within the collapsed strata (Nh/nh).

3 Details of the weighting and collapsing of post-strata are available upon request.
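A minimal sketch of the direct post-stratification computation (Nh/nh) is shown below. The cells, population totals, and respondent records are hypothetical, and the cell-collapsing rules described above are only noted in a comment rather than implemented.

```python
import pandas as pd

# Minimal sketch (hypothetical inputs): direct post-stratification weights N_h / n_h.
# Collapsing of small cells (< 20 complete eligibles) across component is omitted here.
resp = pd.DataFrame({           # complete eligibles
    "gender": ["M", "M", "F", "F", "F"],
    "paygrade": ["E1-E4", "E1-E4", "E5-E9", "E5-E9", "E5-E9"],
    "component": ["USAR", "ARNG", "USNR", "USNR", "USAFR"],
})
pop = pd.DataFrame({            # population totals per post-stratification cell
    "gender": ["M", "M", "F", "F"],
    "paygrade": ["E1-E4", "E1-E4", "E5-E9", "E5-E9"],
    "component": ["USAR", "ARNG", "USNR", "USAFR"],
    "N_h": [120_000, 150_000, 9_000, 7_000],
})

cells = ["gender", "paygrade", "component"]
n_h = resp.groupby(cells).size().rename("n_h").reset_index()
weights = pop.merge(n_h, on=cells, how="inner")
weights["final_weight"] = weights["N_h"] / weights["n_h"]   # N_h / n_h per cell
resp = resp.merge(weights[cells + ["final_weight"]], on=cells, how="left")
print(resp)
```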

Table 24 shows the estimates and corresponding margins of error for the four questions
that overlapped between the two surveys. For example, for Retention, DMDC estimated that
78.0 percent of Selected Reserve members indicated they were likely or very likely to stay in the
National Guard/Reserve from the 2015 WGRR and 77.7 percent indicated this on the 2015
WGRR-N. The margins of error for the 2015 WGRR are small, whereas they are larger for the
WGRR-N because of the small number of respondents. Estimates for all questions are extremely
close except for the Health question, and the confidence intervals overlap even for the Health
question.
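A rough way to check such comparisons is to combine the two margins of error for independent samples, as in the sketch below. The report's formal comparisons rely on design-based standard errors, so this is only an approximation.

```python
# Minimal sketch: treat the two surveys as independent and combine their margins of
# error; a difference larger than the combined MOE suggests a significant difference.
def differ_significantly(est1, moe1, est2, moe2):
    moe_diff = (moe1 ** 2 + moe2 ** 2) ** 0.5   # MOE of the difference
    return abs(est1 - est2) > moe_diff

# Health question from Table 24: 77.0% +/- 1.0 (WGRR) vs. 72.3% +/- 5.2 (WGRR-N)
print(differ_significantly(77.0, 1.0, 72.3, 5.2))  # False: within the combined MOE
```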

Table 24.
Comparison of WGRR Survey with Nonresponse Study Control Questions

Question | WGRR Survey Weighted Estimate (n=87,127) | Nonresponse Survey Weighted Estimate (n=1,330)
RETENTION: Assuming you could stay [in the National Guard/Reserve], how likely is it you would choose to do so? (% Saying Likely or Very Likely) | 78.0% ± 1.0 | 77.7% ± 5.1
HEALTH: In general, would you say your health is…? (% Saying Good or Excellent) | 77.0% ± 1.0 | 72.3% ± 5.2
SAFETY AT HOME: To what extent do/would you feel safe from being sexually assaulted at your home duty station? (% Saying Safe or Very Safe) | 96.0% ± 1.0 | 96.7% ± 1.5
SAFETY AWAY: To what extent do/would you feel safe from being sexually assaulted during military operations, training, or exercises away from your home duty station? | 94.0% ± 1.0 | 93.7% ± 2.4

The similar estimates from the 2015 WGRR and the nonresponse study fail to detect any
evidence of NRB in production estimates for these four questions. DMDC further researched
whether any inferences could be made about the sexual assault questions. Two of the four
questions relate to safety from sexual assault at home and away, so DMDC hypothesized a
relationship with the sexual assault questions. To investigate this, DMDC examined correlations
between the sexual assault questions and these four questions from the 2015 WGRR. DMDC
developed a sexual assault measure (SA_R_ADJ) based on answers to the six sexual assault
questions and their corresponding legal definitions. There were 965 sexual assault survivors
identified from the survey, and 648 answered the safety-at-home question. Using the Phi
coefficient (binary correlation), the unweighted estimated correlation was -0.1568 and the
weighted correlation was -0.1791 (unweighted, -0.15 for females and -0.14 for males). The
negative correlation was expected because SA victims should feel less safe. For safety away, the
correlations were also negative and nearly identical (-0.14 unweighted and -0.17 weighted).
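Because both measures are binary, the phi coefficient is simply the Pearson correlation of two 0/1 indicators, as the following sketch with simulated data illustrates; the prevalence and safety probabilities are hypothetical.

```python
import numpy as np

# Minimal sketch (simulated data): phi coefficient between two binary indicators.
rng = np.random.default_rng(3)
sa_adj = rng.binomial(1, 0.03, size=5_000)                        # 1 = met SA criteria
feels_safe = rng.binomial(1, np.where(sa_adj == 1, 0.75, 0.96))   # 1 = safe/very safe

phi = np.corrcoef(sa_adj, feels_safe)[0, 1]
print(f"phi coefficient: {phi:.3f}")   # expected to be negative in this setup
```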
Summary of Analysis of DMDC’s Survey of Nonrespondents
The purpose of this NRB analysis was to compare estimates from four identical questions
asked on the 2015 WGRR-N and the 2015 WGRR to assess NRB. If estimates were substantively
and statistically different, this would be evidence of NRB for these estimates in the 2015 WGRR.
Despite the small number of respondents and larger margins of error (MOEs) in the 2015
WGRR-N, the four estimates are very similar across the two studies. Estimates from the two
surveys are within one percentage point of each other for three of the four questions (Health has
a 4.7 percentage point difference), but the MOE is large for the 2015 WGRR-N, and all estimates
have overlapping confidence intervals. This study fails to detect any evidence of NRB in
production estimates for these four questions. Additionally, there is some indication that
measures correlated with these questions may also have low levels of NRB. While the SA
questions are correlated with two measures from the WGRR-N, the correlations are fairly small,
and DMDC advises against drawing firm conclusions for the SA questions.

Section 4: Evaluate the Sensitivity of Different Post-Survey Adjustments
(Weighting Methods) on Survey Estimates
Production weights for the 2015 WGRR were produced by Westat by first developing
models that account for each member’s propensity of experiencing unwanted sexual behaviors,
and then using those estimated propensities throughout the weighting process. This method is
consistent with RAND’s approach for the 2014 RMWS, but represents a change from how
DMDC previously conducted survey weighting. DMDC independently developed a set of weights
using its typical methods to assess the effects of different weighting approaches on survey
estimates. This section uses the DMDC weights as a validity check to determine whether large
differences in the weights exist, and whether these potential differences lead to more or less NRB
in survey estimates.
DMDC Weighting Methodology
DMDC’s standard weighting procedures have many similarities to the methods recently
used by RAND and Westat. Both methods estimate response propensities and make weighting
adjustments based on the inverse of those propensities. However, there are two key differences.
First, RAND and Westat used machine learning programs called Generalized Boosted Models
(GBM) to estimate the propensities, while DMDC uses logistic models with single classification
trees (CHAID). Second, RAND and Westat first estimate propensities for several sexual assault
characteristics and then use those estimated propensities to predict survey response; DMDC
skips this step and directly models survey eligibility and response propensities. DMDC weighted
the 2015 WGRR using three main steps:


  •  Step 1: Adjust weights for nonresponse based on eligibility as follows:
     –  Transfer the weight of the 384,491 nonrespondents (SAMP_DC = 8, 9, 10, 11) to the 88,668 cases with known eligibility (SAMP_DC = 2, 3, 4, 5). CHAID (Chi-squared Automatic Interaction Detection), a decision tree technique based on Chi-square tests, was used to determine the best predictors for the logistic model. A logistic regression model was used to predict the probability of eligibility for the survey (known eligibility vs. unknown eligibility). Weighting adjustment factors for eligibility were computed as the inverse of the logistic model-predicted probabilities. The model was weighted using the sampling weight (base weight). Predictors in the eligibility model were the same variables used by Westat, with the exception of the model-predicted probabilities for unwanted sexual behaviors.
  •  Step 2: Adjust weights for survey completion as follows:
     –  Transfer the eligibility weight (created in Step 1) of the 1,985 incomplete survey responses (SAMP_DC = 5) to the 87,127 complete-eligible respondents (SAMP_DC = 4). Weighting adjustments for completion use the same methodology as Step 1 (CHAID and logistic model).
  •  Step 3: Create final weights
     –  The weights were poststratified to match population totals and to reduce variance and bias unaccounted for by the previous weighting adjustments. DMDC calculated the final weight as the product of the adjustment factors in Steps 1, 2, and 3. Poststratification cells were defined by the cross-classification of Service, gender, paygrade, and reserve program. Many of the crossings were collapsed because the goal was to create poststratification cells with more than 30 respondents. Within each post-stratification cell, the nonresponse-adjusted weights for eligible respondents and self-reported ineligibles (SAMP_DC = 2, 3, 4) were adjusted to match population counts. There were 121 poststratification cells.
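A simplified sketch of the Step 1 adjustment is shown below: a base-weight-weighted logistic regression predicts known eligibility, and the inverse of the predicted probability becomes the adjustment factor. The predictors, simulated data, and use of statsmodels are illustrative assumptions; the production models used CHAID-screened predictors and were run in SAS/SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Minimal sketch (hypothetical predictors): base-weighted logistic model for known
# eligibility, with inverse predicted probability as the adjustment factor.
rng = np.random.default_rng(4)
n = 5_000
frame = pd.DataFrame({
    "base_weight": rng.uniform(1.0, 4.0, size=n),
    "female": rng.binomial(1, 0.2, size=n),
    "officer": rng.binomial(1, 0.15, size=n),
})
# 1 = eligibility known (SAMP_DC 2-5), 0 = unknown (nonrespondents); simulated outcome
frame["known_elig"] = rng.binomial(1, 0.15 + 0.1 * frame["female"] + 0.1 * frame["officer"])

X = sm.add_constant(frame[["female", "officer"]])
model = sm.GLM(frame["known_elig"], X,
               family=sm.families.Binomial(),
               freq_weights=frame["base_weight"]).fit()   # base weights as pseudo-likelihood weights

p_hat = model.predict(X)
frame["elig_adj_factor"] = 1.0 / p_hat                    # inverse-propensity adjustment
frame["elig_weight"] = frame["base_weight"] * frame["elig_adj_factor"]
print(frame[["elig_adj_factor", "elig_weight"]].describe().round(2))
```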

Comparison of Adjustment Stages and Final Weights
Table 25 compares the DMDC and Westat methods for each of the weight adjustments
discussed in Steps 1 through 3: eligibility, completion, and poststratification. The comparison
shows the univariate distribution of each weighting adjustment factor. The results indicate that
some edge cases, such as the maximum or minimum adjustments, do differ in both the eligibility
model and the poststratification adjustments. The maximum cell in the DMDC method had a
much larger eligibility adjustment than in the Westat method (239.00 vs. 45.41)4. In addition, the
DMDC method also differs in the minimum poststratification adjustment (0.47 vs. 0.77).
However, while these differences exist for extreme cases, the distributions in aggregate are very
similar, even when considering the one percent and 99 percent quantile values. Furthermore, the
mean value for each adjustment is nearly identical. Although the DMDC method carries slightly
more variance, the adjustments outside of the tails are essentially the same for both methods.

Table 25.
Comparison between DMDC and Westat Weighting Methods for Eligibility, Completion and Poststratification Adjustments

Statistic          | DMDC Eligibility | DMDC Completion | DMDC Poststratification | Westat Eligibility | Westat Completion | Westat Poststratification
Mean               | 5.15             | 1.02            | 1.01                    | 5.14               | 1.02              | 1.03
Standard Deviation | 6.18             | 0.01            | 0.05                    | 4.69               | 0.01              | 0.08
100% Max           | 239.00           | 1.19            | 1.37                    | 45.41              | 1.18              | 1.36
99%                | 27.74            | 1.07            | 1.12                    | 22.00              | 1.05              | 1.29
95%                | 17.44            | 1.04            | 1.08                    | 16.00              | 1.04              | 1.14
90%                | 10.87            | 1.04            | 1.07                    | 11.80              | 1.03              | 1.12
75% Q3             | 5.24             | 1.03            | 1.05                    | 6.03               | 1.03              | 1.09
50% Median         | 2.98             | 1.02            | 1.02                    | 3.13               | 1.02              | 1.02
25% Q1             | 2.12             | 1.01            | 0.98                    | 2.21               | 1.02              | 0.96
10%                | 1.72             | 1.01            | 0.96                    | 1.82               | 1.02              | 0.91
5%                 | 1.55             | 1.01            | 0.94                    | 1.67               | 1.02              | 0.89
1%                 | 1.44             | 1.00            | 0.86                    | 1.46               | 1.00              | 0.88
0% Min             | 1.20             | 1.00            | 0.47                    | 1.09               | 1.00              | 0.77

Table 26 extends the comparison of the DMDC and Westat methods by showing the
distribution of final weights. The final weight takes into account all of the previous weighting
adjustments. Again, DMDC sees somewhat erratic behavior at the tails for the maximum weight
value, but the distributions are very close in most of the other quantiles. DMDC concludes that
overall both methods produce similar distributions of weights.

4 DMDC completed this set of weights for the purposes of comparing and contrasting with the Westat method. In a production method, DMDC would have adjusted the larger eligibility adjustment to be closer to Westat’s value. The large cell was caused by an extremely large number of nonrespondents detected in a particular combination of input variables, and the large weight was only in that specific cell (as indicated by the 99% quantile).


Table 26.
Comparison between DMDC Method Final Weights and Westat Method Final Weights

Statistic          | DMDC Method | Westat Method
Mean               | 9.1         | 9.1
Standard Deviation | 11.3        | 8.8
100% Max           | 483.9       | 86.2
99%                | 51.1        | 43.8
95%                | 30.8        | 29.0
90%                | 18.7        | 19.2
75% Q3             | 10.1        | 11.5
50% Median         | 5.7         | 6.1
25% Q1             | 3.2         | 3.4
10%                | 2.0         | 2.1
5%                 | 1.7         | 1.8
1%                 | 1.4         | 1.5
0% Min             | 1.1         | 1.1

Comparison of Key Estimates
Finally, DMDC compared differences in weighted survey estimates based on the DMDC
and Westat sets of weights. The final comparison between weighting methods is based on the
key survey estimates regarding the sexual assault questions. Table 27 and Table 28 show seven
estimates associated with sexual assault and sexual harassment for females and males,
respectively. For example, the estimate of any sexual assault for females using DMDC’s
standard methods was 3.5 percent, and the estimate was 3.4 percent using the GBM weighting
approach. All comparisons are nearly identical for both weighting approaches, and the largest
difference in the female estimates is for the Gender Discrimination results, which differ by 0.2
percentage points.


Table 27.
Comparison of DMDC and Westat Key Survey Estimates (Female Only)

Question                                        | Variable  | DMDC Estimate | Westat Estimate
Sexual Quid Pro Quo                             | QPQ       | 1.4% ± 0.1%   | 1.4% ± 0.1%
Sexual Assault-Penetrative                      | SA_PEN    | 1.4% ± 0.1%   | 1.4% ± 0.1%
Sexual Assault-Any Type                         | SA_RATE   | 3.5% ± 0.2%   | 3.4% ± 0.1%
Sexual Assault-Attempted Touch                  | SA_TOUCH  | 1.9% ± 0.1%   | 1.8% ± 0.1%
Gender Discrimination                           | SDISC     | 11.1% ± 0.3%  | 10.9% ± 0.2%
Sexual Harassment                               | SEXHAR    | 18.5% ± 0.4%  | 18.6% ± 0.3%
Sexual Assault Rate–Adjusted for telescoping    | SA_R_ADJ  | 3.2% ± 0.2%   | 3.2% ± 0.1%

Similarly, the largest difference in the male estimates is for the Sexual Harassment question,
which also differs by 0.2 percentage points. Overall, all of the other estimates are nearly
identical. In addition, all of the confidence intervals for the two weighting methods overlap.

Table 28.
Comparison of DMDC and Westat Key Survey Estimates (Male Only)

Question                                        | Variable  | DMDC Estimate  | Westat Estimate
Sexual Quid Pro Quo                             | QPQ       | 0.2% ± 0.03%   | 0.2% ± 0.03%
Sexual Assault-Penetrative                      | SA_PEN    | 0.2% ± 0.03%   | 0.2% ± 0.03%
Sexual Assault-Any Type                         | SA_RATE   | 0.6% ± 0.05%   | 0.7% ± 0.06%
Sexual Assault-Attempted Touch                  | SA_TOUCH  | 0.4% ± 0.04%   | 0.4% ± 0.04%
Gender Discrimination                           | SDISC     | 1.5% ± 0.07%   | 1.6% ± 0.07%
Sexual Harassment                               | SEXHAR    | 4.2% ± 0.15%   | 4.4% ± 0.14%
Sexual Assault Rate–Adjusted for telescoping    | SA_R_ADJ  | 0.5% ± 0.05%   | 0.5% ± 0.05%

Summary of Evaluate the Sensitivity of Different Post-Survey Adjustments
(Weighting Methods) on Survey Estimates
The DMDC and Westat weighting methods were applied independently using different
software (DMDC used SAS and SPSS; Westat used R and SAS) and methodology, but the
overall results are strikingly similar across all of the intermediate weighting steps, the final
weights, and the comparison of key estimates. In addition, all of the confidence intervals for the
two weighting methods overlap. In conclusion, DMDC sees very little potential for bias in the
final weights and in the resulting estimates that these weights were used to produce.


References
American Association for Public Opinion Research. (2015). Standard definitions: Final
dispositions of case codes and outcome rates for surveys (8th Ed.). AAPOR. Retrieved from
https://www.aapor.org/AAPOR_Main/media/publications/StandardDefinitions2015_8theditionwithchanges_April2015_logo.pdf
Benjamini, Y. & Hochberg, Y. (1995). Controlling the false discovery rate: a practical and
powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B
(Methodological), 57. 289–300. Retrieved from http://www.jstor.org/stable/2346101
Chromy, J.R. (1987). Design optimization with multiple objectives. In 1987 proceedings of the
Section on Survey Research Methods : papers presented at the Annual Meeting of the
American Statistical Association, San Francisco, California, August 17–20, 1987 (pp. 194–
199). Alexandria, VA: American Statistical Association. Retrieved from
http://www.amstat.org/sections/srms/Proceedings/papers/1987_029.pdf
Dever, J.A., & Mason, R.E. (2003). DMDC sample planning tool (Version 2.1) [Computer
program and software]. Arlington, VA: DMDC.
DMDC. (2013). 2012 Workplace and gender relations survey of active duty members:
Nonresponse bias analysis report (Report No. 2013-059). Alexandria, VA: Author.
DMDC. (2016a). 2015 Workplace and gender relations survey of reserve component members:
Tabulation of responses (Report No. 2016-003). Alexandria, VA: Author.
DMDC. (2016b). 2015 Workplace and gender relations survey of reserve component members:
Administration, datasets, and codebook (Report No. 2016-005). Alexandria, VA: Author.
Groves, R.M., & Couper, M.P. (1998). Nonresponse in household interview survey. New York:
John Wiley & Sons, Inc. doi: 10.1002/9781118490082
Groves, R.M., & Peytcheva, E. (2008). The impact of nonresponse rates on nonresponse bias: A
Meta-Analysis. Public Opinion Quarterly, 72, 167–189. doi:10.1093/poq/nfn011
Kish, L. (1965). Weighting problems. Survey Sampling (pp. 424–433). New York: John Wiley
& Sons, Inc. doi:10.1002/sim.1513
Little, R.J., & Rubin, D.B. (2002). Statistical analysis with missing data (2nd ed.). New York:
John Wiley & Sons, Inc. doi: 10.1002/9781119013563
Little, R.J., & Vartivarian, S. (2004). Does weighting for nonresponse increase the variance of
survey means? (Working Paper 35). Unpublished manuscript, Department of Biostatistics,
University of Michigan, Ann Arbor, MI. Retrieved from
http://biostats.bepress.com/umichbiostat/paper35/


Mason, R.E., Wheeless, S.C., George, B.J., Dever, J.A., Riemer, R.A., & Elig, T.W. (1995).
Sample allocation for the status of the armed forces surveys. In Proceedings of the Section on
Survey Research Methods (Vol. II, pp. 769–774). Alexandria, VA: American Statistical
Association. Retrieved from
http://www.amstat.org/sections/srms/Proceedings/papers/1995_133.pdf
Morral, A.R., Gore, K.L., & Schell, T.L. (Eds.). (2014). Sexual assault and sexual harassment
in the U.S. military: Volume 1. Design of the 2014 RAND military workplace study (No. RR870/1-OSD). Santa Monica, CA: RAND Corporation.
Morral, A.R., Gore, K.L., & Schell, T.L. (Eds.). (2015). Sexual assault and sexual harassment
in the U.S. military: Volume 2. Estimates for Department of Defense service members from
the 2014 RAND military workplace study (No. RR-870/2-OSD). Santa Monica, CA: RAND
Corporation.
Morral, A.R., Gore, K.L., & Schell, T.L. (Eds.). (2015). Sexual assault and sexual harassment
in the U.S. military: Volume 4. Investigations of potential bias in estimates from the 2014
RAND military workplace study (No. RR-870/2-OSD). Santa Monica, CA: RAND
Corporation.
Ridgeway, G. (2009). Generalized boosted models: A guide to the GBM package (Version
2.11) [Computer software]. Retrieved from http://lib.stat.cmu.edu/R/CRAN/
Ridgeway, G. (2009). Toolkit for weighting and analysis of nonequivalent groups (Version 1.49.3) [Computer software]. Retrieved from http://lib.stat.cmu.edu/R/CRAN/


Appendix A.
Domain Based Sampling Size and Expected
Response

Domain | Population | Approximate Sample Size (1) | Expected Responses | Percent Sampled
All Domains | 817,007 | 486,119 | 131,959 | 59.5
National Guard | 453,417 | 248,019 | 66,370 | 54.7
Army National Guard | 348,599 | 186,500 | 41,676 | 53.5
Air National Guard | 104,818 | 61,738 | 24,694 | 58.9
Reserve | 363,590 | 237,424 | 65,589 | 65.3
US Army Reserve | 197,698 | 120,991 | 32,651 | 61.2
US Navy Reserve | 58,227 | 36,217 | 12,492 | 62.2
US Marine Corps Reserve | 38,468 | 36,352 | 3,710 | 94.5
US Air Force Reserve | 69,197 | 43,940 | 16,736 | 63.5
Enlisted | 689,119 | 416,917 | 101,433 | 60.5
E1-E4 | 352,772 | 238,121 | 36,049 | 67.5
E1-E3 | 158,071 | 110,650 | 16,211 | 70.0
E4 | 194,701 | 127,529 | 19,838 | 65.5
E5-E9 | 336,347 | 179,273 | 65,384 | 53.3
E5-E7 | 307,320 | 164,724 | 59,639 | 53.6
E8-E9 | 29,027 | 14,339 | 5,745 | 49.4
Officers | 127,888 | 68,548 | 30,526 | 53.6
O1-O3 | 59,524 | 33,691 | 12,908 | 56.6
O4-O6 | 56,171 | 29,097 | 15,077 | 51.8
W1-W5 | 12,193 | 5,779 | 2,541 | 47.4
TPU | 666,703 | 414,689 | 96,828 | 62.2
AGR | 76,746 | 33,461 | 16,606 | 43.6
IMA | 12,048 | 8,614 | 3,709 | 71.5
Deployed Last 12 Months | 33,507 | 18,697 | 5,596 | 55.8
Not Deployed Last 12 Months | 783,500 | 466,966 | 126,363 | 59.6
Non-Hispanic White | 549,466 | 313,196 | 85,082 | 57.0
Total Minority | 267,541 | 172,564 | 46,877 | 64.5
Non-Hispanic Black | 129,665 | 85,838 | 24,939 | 66.2
Hispanic | 88,750 | 55,824 | 13,347 | 62.9
Females | 154,442 | 154,442 | 71,370 | 100.0
Females*Enlisted | 130,132 | 130,132 | 56,289 | 100.0
Females*E1-E4 | 71,779 | 71,779 | 22,670 | 100.0
Females*E5-E9 | 58,353 | 58,353 | 33,619 | 100.0
Females*Officers | 24,310 | 24,310 | 15,081 | 100.0
Females*O1-O3 | 13,165 | 13,165 | 7,164 | 100.0
Females*O4-O6 | 9,784 | 9,784 | 6,983 | 100.0
Females*TPU | 125,127 | 125,127 | 52,575 | 100.0
Females*AGR | 15,725 | 15,725 | 9,868 | 100.0
Females*IMA | 3,145 | 3,145 | 1,906 | 100.0
Females*Non-Hispanic White | 84,666 | 84,666 | 40,699 | 100.0
Females*Minority | 69,776 | 69,776 | 30,671 | 100.0
Females*National Guard | 76,588 | 76,588 | 34,282 | 100.0
Females*Army National Guard | 55,983 | 55,983 | 22,194 | 100.0
Females*Army National Guard*Enlisted | 50,044 | 50,044 | 18,680 | 100.0
Females*Army National Guard*Officers | 5,939 | 5,939 | 3,514 | 100.0
Females*Air National Guard | 20,605 | 20,605 | 12,088 | 100.0
Females*Air National Guard*Enlisted | 17,784 | 17,784 | 10,336 | 100.0
Females*Air National Guard*Officers | 2,821 | 2,821 | 1,752 | 100.0
Females*Reserve | 77,854 | 77,854 | 37,088 | 100.0
Females*US Army Reserve | 45,138 | 45,138 | 19,938 | 100.0
Females*US Army Reserve*Enlisted | 36,365 | 36,365 | 14,561 | 100.0
Females*US Army Reserve*Officers | 8,773 | 8,773 | 5,377 | 100.0
Females*US Navy Reserve | 12,839 | 12,839 | 6,693 | 100.0
Females*US Navy Reserve*Enlisted | 10,105 | 10,105 | 4,732 | 100.0
Females*US Navy Reserve*Officers | 2,734 | 2,734 | 1,961 | 100.0
Females*US Marine Corps Reserve | 1,614 | 1,614 | 521 | 100.0
Females*US Marine Corps Reserve*Enlisted | 1,309 | 1,309 | 376 | 100.0
Females*US Marine Corps Reserve*Officers | 305 | 305 | 145 | 100.0
Females*US Air Force Reserve | 18,263 | 18,263 | 9,936 | 100.0
Females*US Air Force Reserve*Enlisted | 14,525 | 14,525 | 7,604 | 100.0
Females*US Air Force Reserve*Officers | 3,738 | 3,738 | 2,332 | 100.0
Males | 662,565 | 331,283 | 60,589 | 50.0
Males*Enlisted | 558,987 | 287,319 | 45,144 | 51.4
Males*E1-E4 | 280,993 | 166,348 | 13,379 | 59.2
Males*E5-E9 | 277,994 | 120,649 | 31,765 | 43.4
Males*Officers | 103,578 | 44,228 | 15,445 | 42.7
Males*O1-O3 | 46,359 | 20,537 | 5,744 | 44.3
Males*O4-O6 | 46,387 | 19,297 | 8,094 | 41.6
Males*TPU | 541,576 | 289,202 | 44,253 | 53.4
Males*AGR | 61,021 | 17,696 | 6,738 | 29.0
Males*IMA | 8,903 | 5,475 | 1,803 | 61.5
Males*Non-Hispanic White | 464,800 | 228,682 | 44,383 | 49.2
Males*Total Minority | 197,765 | 102,838 | 16,206 | 52.0
Males*National Guard | 376,829 | 171,457 | 32,088 | 45.5
Males*Army National Guard | 292,616 | 130,507 | 19,482 | 44.6
Males*Army National Guard*Enlisted | 253,791 | 117,251 | 15,201 | 46.2
Males*Army National Guard*Officers | 38,825 | 13,317 | 4,281 | 34.3
Males*Air National Guard | 84,213 | 41,096 | 12,606 | 48.8
Males*Air National Guard*Enlisted | 72,107 | 34,611 | 10,296 | 48.0
Males*Air National Guard*Officers | 12,106 | 6,477 | 2,310 | 53.5
Males*Reserve | 285,736 | 159,726 | 28,501 | 55.9
Males*US Army Reserve | 152,560 | 75,822 | 12,713 | 49.7
Males*US Army Reserve*Enlisted | 125,700 | 65,615 | 9,140 | 52.2
Males*US Army Reserve*Officers | 26,860 | 10,314 | 3,573 | 38.4
Males*US Navy Reserve | 45,388 | 23,420 | 5,799 | 51.6
Males*US Navy Reserve*Enlisted | 33,601 | 18,313 | 3,579 | 54.5
Males*US Navy Reserve*Officers | 11,787 | 5,104 | 2,220 | 43.3
Males*US Marine Corps Reserve | 36,854 | 34,753 | 3,189 | 94.3
Males*US Marine Corps Reserve*Enlisted | 32,894 | 30,789 | 1,909 | 93.6
Males*US Marine Corps Reserve*Officers | 3,960 | 3,960 | 1,280 | 100.0
Males*US Air Force Reserve | 50,934 | 25,671 | 6,800 | 50.4
Males*US Air Force Reserve*Enlisted | 40,894 | 20,611 | 5,019 | 50.4
Males*US Air Force Reserve*Officers | 10,040 | 5,060 | 1,781 | 50.4

(1) This is an approximate sample size that comes from the sampling tool, since not all domains are explicitly defined by the stratum definitions. For example, since IMAs are not used as a stratifier, the actual number of IMAs is not known until the sample is actually selected.
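As an illustration of this footnote, the short sketch below shows how an approximate sample size can be computed for a domain that is not a stratifier by applying each stratum's sampling rate to the domain's frame count within that stratum. The stratum labels, counts, and rates are hypothetical and are not taken from the DMDC Sampling Tool.

```python
# Illustrative sketch only: approximating the expected sample size for a
# domain (e.g., IMA members) that is not a stratification variable, by
# applying each stratum's sampling rate to the domain's frame count in
# that stratum. Stratum labels, counts, and rates are hypothetical.
frame_counts = {        # members of the domain falling in each stratum
    "stratum_A": 4_000,
    "stratum_B": 6_500,
    "stratum_C": 1_500,
}
sampling_rates = {      # n_h / N_h for each stratum from the design
    "stratum_A": 0.45,
    "stratum_B": 0.80,
    "stratum_C": 1.00,  # certainty stratum
}

approx_sample = sum(frame_counts[h] * sampling_rates[h] for h in frame_counts)
print(f"Approximate domain sample size: {approx_sample:,.0f}")
```

Because the domain's actual membership by stratum is only known from frame variables, the figure produced this way remains approximate until the sample is drawn and the domain members in it are counted.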


Appendix B.
Categorical Variables Used for the Eligibility and Completion Adjustments

Demographic Factors
  Family Status: Married With Child(ren); Married Without Child(ren); Single With Child(ren); Single Without Child(ren); Unknown
  Education: No College or DK; Some College; 4-year Degree; Grad/Professional Degree
  Race/Ethnic Category: Hispanic; Non-Hispanic Black; Non-minority; Other Race; Unknown
  US Citizenship Origin Code: Born outside the US, GU, PR, or VI to at least one citizen parent; Born within the US, GU, PR, or VI; Unknown or NA; US citizen by naturalization; US citizen, parent became a citizen by naturalization
  US Citizenship Status Code: Born outside the US, GU, PR, or VI to at least one citizen parent; Born within the US, GU, PR, or VI; Non-US citizen or national; Unknown or NA

Military Career Factors
  Military Accession Program: Air National Guard Academy of Military Sciences; Aviation Cadet program; Aviation training program other than OCS, AOCS, OTS, or PLC; Direct appointment authority, Commissioned Off, all other; Direct appointment authority, Commissioned Off, professional; Direct appointment authority, commissioned warrant officer; Induction; National Guard state OCS; OCS, AOCS, OTS, or PLC; Other; ROTC/NROTC non-scholarship program; ROTC/NROTC scholarship program; S; US Air Force Academy; US Coast Guard Academy; US Merchant Marine Academy; US Military Academy; US Naval Academy; Unknown or Not Applicable; Vol enlist - Rsv Comp for Reg DEP - 10 USC 12103/10 USC 513; Voluntary enlistment - Rsv Comp, Sec 511, ref(b). Excl DEP; Voluntary enlistment in a Regular Component; Voluntary enlistment in a Regular component under the NCSp; Warrant Officer Aviation Training Program
  Active Duty Status: Not on Active Duty; On Active Duty
  Active Duty and Special Operations Status: Active; Active Special Operations; Unknown
  Active Guard & Reserve, or Full Time National Guard Duty Status ID: AGR: 10 USC 10211; AGR: 10 USC 12310; AGR: 32; AGR: Other; Military Technician; Unknown or NA
  Combat Occupation Flag: N; Y
  Current Deployment Status: Currently Deployed; Never deployed; Not currently deployed
  Deployment Flag in the Last 12 Months: Deployed in the last 12 months or currently deployed; Never deployed (as of March 2015); Not deployed in the last 12 months
  Deployment Flag in the Last 24 Months: Deployed in the last 24 months or currently deployed; Never deployed (as of March 2015); Not deployed in the last 24 months
  Eligibility Status as of August 2015: Eligible; Ineligible
  Paygrade: E01; E02; E03; E04; E05; E06; E07; E08; E09; W01; W02; W03; W04; W05; O01; O02; O03; O04; O05; O06
  Primary Regular Component Service Indicator: Yes; No; Unknown or NA
  Reserve Category Programs: AGR/TAR; IMA; Military Technicians; TPU; Unknown
  Reserve Organization Code: Air Force Reserve; Air National Guard; Army National Guard; Army Reserve; Marine Corps Reserve; Navy Reserve
  Reserve Category Group Code: Active Guard/Reserve (AGR); Military Technician (MILTECH); Selected Reserve (not including AGR or MILTECH)
  Reserve Subcategory Code: Active Guard Reserve; Awaiting Second Part of IADT; Drilling Unit Member; FT members performing AD on FTNGD for >180, but exempt from; Individual Mobilization Augmentees (IMA); On Initial Active Duty For Training (IADT); Person awaiting IADT; SEL RES-Other Training Program; Simultaneous Membership Program (SMP)
  Reserve Category Code: Selected Reserve - Trained in Units; Selected Reserve - Trained individuals (non-unit); Selected Reserve - Training Pipeline
  Reserve Component Category Code: Unknown or NA; Inactive National Guard, RAPIDS entry; Inactive National Guard, individual; Ready Reserve training, individual in officer training program; Ready Reserve training, individual in Health Professional Scholarship program; Reserve Officer Training Corps (ROTC); Individual Ready Reserve, RAPIDS entry; Individual Ready Reserve, trained; Individual Ready Reserve, awaiting IADT, not authorized to perform IDT; Selected Reserve, Unknown Cat.; Selected Reserve, trained individual in unit, 48 or more IDT periods; Selected Reserve, trained individual in unit, Active Guard or Reserve; Full-Time Members (Special Category): Trained Selected Reserve members who are performing AD for more than 180 days in a fiscal year, but who are exempted from counting against the AD strengths; Selected Reserve, trained individual not in unit, Individual Mobilization Augmentee; Selected Reserve, individual in training pipeline, on IADT; Selected Reserve, individual in training pipeline, awaiting IADT, authorized to perform IDT; Selected Reserve, individual in training pipeline, awaiting second part of IADT; Selected Reserve, individual in training pipeline, Simultaneous Membership Program; Selected Reserve, individual in training pipeline, other training program; Standby Reserve (Y9); Standby Reserve, individual on Active Status list; Standby Reserve, individual on Inactive Status List, 20 or more years Reserve service and less than 30% disability; Standby Reserve, individual on Inactive Status List, other; Reserve Category Unknown

Military Environment Factors
  Assigned UIC Change of Station Flag as of March 2014: N; Y
  Assigned UIC Change of Station Flag as of August 2015: N; Y
  Duty UIC Change of Station Flag as of March 2014: N; Y
  Duty UIC Change of Station Flag as of August 2015: N; Y

Survey Fielding Factors
  First Letter Returned as PND: N; Y
  Invalid Army E-mail Address Flag: @Army.mil address; Not @Army.mil Address
  E-mail Address Purchase Flag: Do Not Purchase e-mail; Purchase E-mail
  E-mail Address Flag: At least one e-mail address; No e-mail address
  Home Address Flag: Address available; No Home Address
  Change in Mailing Address since Sample Frame Development: N; Y
  Mail Address Flag: N; Y
  Number of E-mail Addresses: No e-mail address; One e-mail address; Two e-mail addresses
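The variables above feed the eligibility and completion (response propensity) adjustments. As a rough illustration of how categorical predictors like these can be used, the sketch below fits a boosted propensity model in Python with pandas and scikit-learn. The report's weighting relied on the R gbm/twang tools cited in the references, so this is only an analogous example, and the input file and column names are hypothetical.

```python
# Minimal sketch (not DMDC production code): fitting a boosted response-
# propensity model with categorical predictors like those listed above,
# using scikit-learn in place of the R gbm/twang packages cited in the
# references. The file name and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

frame = pd.read_csv("sample_frame.csv")          # one row per sampled member
predictors = ["family_status", "education", "race_ethnic_category",
              "paygrade", "reserve_organization", "deployment_flag_12mo"]

X = pd.get_dummies(frame[predictors], dummy_na=True)  # one-hot encode categories
y = frame["complete_eligible_response"]               # 1 = usable response, 0 = not

model = GradientBoostingClassifier(n_estimators=500, max_depth=3,
                                   learning_rate=0.05)
model.fit(X, y)

# Predicted response propensities; a nonresponse adjustment factor is
# commonly formed from the inverse of propensities averaged within cells.
frame["propensity"] = model.predict_proba(X)[:, 1]
```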


Appendix C.
Distribution of Weights and Adjustment Factors by Eligibility Status for Females

Statistics shown for each weight or factor: MIN, MAX, MEAN, STD, CV. Entries marked (blank) are cells left empty in the table.

Eligible Respondents (N = 34,706)
  Sampling Weight: MIN 1.00, MAX 1.00, MEAN 1.00, STD 0.00, CV 0.00
  Eligibility Status Factor: MIN 1.14, MAX 38.18, MEAN 4.04, STD 3.24, CV 0.80
  Eligibility Status Adjusted Weight: MIN 1.14, MAX 38.18, MEAN 4.04, STD 3.24, CV 0.80
  Complete Eligible Response Factor: MIN 1.01, MAX 1.07, MEAN 1.03, STD 0.01, CV 0.01
  Complete Eligible Response Adjusted Weight: MIN 1.17, MAX 39.19, MEAN 4.16, STD 3.34, CV 0.80
  Raking Factor: MIN 0.77, MAX 1.21, MEAN 1.03, STD 0.06, CV 0.06
  Final Weight With Nonresponse and Raking Factors: MIN 1.18, MAX 41.23, MEAN 4.34, STD 3.64, CV 0.84

Eligible, Incomplete Response (N = 987)
  Sampling Weight: MIN 1.00, MAX 1.00, MEAN 1.00, STD 0.00, CV 0.00
  Eligibility Status Factor: MIN 1.18, MAX 27.27, MEAN 4.23, STD 3.42, CV 0.81
  Eligibility Status Adjusted Weight: MIN 1.18, MAX 27.27, MEAN 4.23, STD 3.42, CV 0.81
  Complete Eligible Response Factor: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00
  Complete Eligible Response Adjusted Weight: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00
  Raking Factor: (blank)
  Final Weight With Nonresponse and Raking Factors: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00

Self/Proxy Ineligibles (N = 523)
  Sampling Weight: MIN 1.00, MAX 1.00, MEAN 1.00, STD 0.00, CV 0.00
  Eligibility Status Factor: MIN 1.43, MAX 43.06, MEAN 6.60, STD 5.51, CV 0.84
  Eligibility Status Adjusted Weight: MIN 1.43, MAX 43.06, MEAN 6.60, STD 5.51, CV 0.84
  Complete Eligible Response Factor: MIN 1.00, MAX 1.00, MEAN 1.00, STD 0.00, CV 0.00
  Complete Eligible Response Adjusted Weight: MIN 1.43, MAX 43.06, MEAN 6.60, STD 5.51, CV 0.84
  Raking Factor: MIN 0.77, MAX 1.18, MEAN 1.04, STD 0.06, CV 0.05
  Final Weight With Nonresponse and Raking Factors: MIN 1.48, MAX 44.83, MEAN 6.99, STD 5.99, CV 0.86

Nonrespondents (N = 115,035)
  Sampling Weight: MIN 1.00, MAX 1.00, MEAN 1.00, STD 0.00, CV 0.00
  Eligibility Status Factor: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00
  Eligibility Status Adjusted Weight: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00
  Complete Eligible Response Factor: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00
  Complete Eligible Response Adjusted Weight: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00
  Raking Factor: (blank)
  Final Weight With Nonresponse and Raking Factors: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00

Record Ineligibles (N = 3,191)
  Sampling Weight: MIN 1.00, MAX 1.00, MEAN 1.00, STD 0.00, CV 0.00
  Eligibility Status Factor: (blank)
  Eligibility Status Adjusted Weight: MIN 1.00, MAX 1.00, MEAN 1.00, STD 0.00, CV 0.00
  Complete Eligible Response Factor: (blank)
  Complete Eligible Response Adjusted Weight: MIN 1.00, MAX 1.00, MEAN 1.00, STD 0.00, CV 0.00
  Raking Factor: (blank)
  Final Weight With Nonresponse and Raking Factors: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00
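Read per sampled member, the weight columns in Appendix C (and in Appendix D, which follows) compose multiplicatively: each adjusted weight equals the preceding weight times the corresponding factor. The identity below is an interpretive summary implied by the column names rather than a formula quoted from the report.

$$
w_i^{\text{final}}
  = w_i^{\text{samp}}
  \times f_i^{\text{eligibility}}
  \times f_i^{\text{complete}}
  \times f_i^{\text{raking}}
$$

Here $w_i^{\text{samp}}$ is the sampling weight, $f_i^{\text{eligibility}}$ the eligibility status factor, $f_i^{\text{complete}}$ the complete eligible response factor, and $f_i^{\text{raking}}$ the raking factor for member $i$. As a rough check, combining the female eligible-respondent means (1.00 × 4.04 × 1.03 × 1.03) gives roughly 4.29, close to but not equal to the reported mean final weight of 4.34, because a mean of products is not a product of means.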

Appendix D.
Distribution of Weights and Adjustment Factors by Eligibility Status for Males

Statistics shown for each weight or factor: MIN, MAX, MEAN, STD, CV. Entries marked (blank) are cells left empty in the table.

Eligible Respondents (N = 52,421)
  Sampling Weight: MIN 1.00, MAX 5.69, MEAN 2.32, STD 0.91, CV 0.39
  Eligibility Status Factor: MIN 1.07, MAX 64.72, MEAN 5.72, STD 5.26, CV 0.92
  Eligibility Status Adjusted Weight: MIN 1.19, MAX 121.51, MEAN 11.53, STD 9.03, CV 0.78
  Complete Eligible Response Factor: MIN 1.02, MAX 1.12, MEAN 1.02, STD 0.00, CV 0.00
  Complete Eligible Response Adjusted Weight: MIN 1.21, MAX 125.96, MEAN 11.77, STD 9.26, CV 0.79
  Raking Factor: MIN 0.87, MAX 1.35, MEAN 1.03, STD 0.09, CV 0.09
  Final Weight With Nonresponse and Raking Factors: MIN 1.58, MAX 135.92, MEAN 12.24, STD 9.98, CV 0.81

Eligible, Incomplete Response (N = 998)
  Sampling Weight: MIN 1.00, MAX 5.69, MEAN 2.29, STD 0.93, CV 0.41
  Eligibility Status Factor: MIN 1.32, MAX 30.53, MEAN 6.38, STD 5.59, CV 0.88
  Eligibility Status Adjusted Weight: MIN 1.79, MAX 57.32, MEAN 12.69, STD 9.57, CV 0.75
  Complete Eligible Response Factor: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00
  Complete Eligible Response Adjusted Weight: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00
  Raking Factor: (blank)
  Final Weight With Nonresponse and Raking Factors: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00

Self/Proxy Ineligibles (N = 1,018)
  Sampling Weight: MIN 1.00, MAX 5.69, MEAN 2.12, STD 0.75, CV 0.35
  Eligibility Status Factor: MIN 1.40, MAX 64.72, MEAN 10.33, STD 9.22, CV 0.89
  Eligibility Status Adjusted Weight: MIN 2.53, MAX 114.73, MEAN 19.41, STD 15.88, CV 0.82
  Complete Eligible Response Factor: MIN 1.00, MAX 1.00, MEAN 1.00, STD 0.00, CV 0.00
  Complete Eligible Response Adjusted Weight: MIN 2.53, MAX 114.73, MEAN 19.41, STD 15.88, CV 0.82
  Raking Factor: MIN 0.88, MAX 1.34, MEAN 1.05, STD 0.09, CV 0.09
  Final Weight With Nonresponse and Raking Factors: MIN 3.08, MAX 123.58, MEAN 20.45, STD 16.95, CV 0.83

Nonrespondents (N = 269,456)
  Sampling Weight: MIN 1.00, MAX 5.69, MEAN 1.94, STD 0.59, CV 0.30
  Eligibility Status Factor: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00
  Eligibility Status Adjusted Weight: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00
  Complete Eligible Response Factor: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00
  Complete Eligible Response Adjusted Weight: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00
  Raking Factor: (blank)
  Final Weight With Nonresponse and Raking Factors: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00

Record Ineligibles (N = 7,439)
  Sampling Weight: MIN 1.00, MAX 5.69, MEAN 1.84, STD 0.52, CV 0.28
  Eligibility Status Factor: (blank)
  Eligibility Status Adjusted Weight: MIN 1.00, MAX 5.69, MEAN 1.84, STD 0.52, CV 0.28
  Complete Eligible Response Factor: (blank)
  Complete Eligible Response Adjusted Weight: MIN 1.00, MAX 5.69, MEAN 1.84, STD 0.52, CV 0.28
  Raking Factor: (blank)
  Final Weight With Nonresponse and Raking Factors: MIN 0.00, MAX 0.00, MEAN 0.00, STD 0.00

REPORT DOCUMENTATION PAGE

Form Approved
OMB No. 0704-0188

The public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing the burden, to Department of Defense, Washington Headquarters Services, Directorate for Information Operations and Reports (0704-0188), 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any penalty for failing to comply with a collection of information if it does not display a currently valid OMB control number.

PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.

1. REPORT DATE (DD-MM-YYYY): 17-03-2016
2. REPORT TYPE: Final Report
3. DATES COVERED (From - To): August-October 2015
4. TITLE AND SUBTITLE: 2015 Workplace and Gender Relations Survey of Reserve Component Members: Statistical Methodology Report
5a. CONTRACT NUMBER:
5b. GRANT NUMBER:
5c. PROGRAM ELEMENT NUMBER:
5d. PROJECT NUMBER:
5e. TASK NUMBER:
5f. WORK UNIT NUMBER:
6. AUTHOR(S): Defense Research, Surveys, and Statistics Center (RSSC)
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Defense Manpower Data Center/RSSC, 4800 Mark Center Drive, Suite 04E25-01, Alexandria, VA 22350-4011
8. PERFORMING ORGANIZATION REPORT NUMBER: DMDC Report No. 2016-003
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES): Sexual Assault Prevention and Response Office (SAPRO), 4800 Mark Center Drive, Suite 07G21, Alexandria, VA 22311
10. SPONSOR/MONITOR'S ACRONYM(S):
11. SPONSOR/MONITOR'S REPORT NUMBER(S):
12. DISTRIBUTION/AVAILABILITY STATEMENT: Available for public release; distribution unlimited
13. SUPPLEMENTARY NOTES:
14. ABSTRACT: This report describes the statistical methodologies for the 2015 Workplace and Gender Relations Survey of Reserve Component Members (2015 WGRR). The first section describes the sample design and selection of the sample. The second section describes weighting and variance estimation, as well as a comparison to the 2014 RAND Military Workplace Study. The third section describes the statistical tests used for the 2015 WGRR. The fourth section describes the calculation of location, completion, and response rates for the full sample and population subgroups. The final section contains the nonresponse bias (NRB) analysis.
15. SUBJECT TERMS: Sexual assault, sexual harassment, statistical methodology
16. SECURITY CLASSIFICATION OF: a. REPORT: UU  b. ABSTRACT: UU  c. THIS PAGE: UU
17. LIMITATION OF ABSTRACT: SAR
18. NUMBER OF PAGES: 74
19a. NAME OF RESPONSIBLE PERSON: Eric Falk
19b. TELEPHONE NUMBER (Include area code): 571-372-1098

Standard Form 298 (Rev. 8/98)

Prescribed by ANSI Std. Z39.18

INSTRUCTIONS FOR COMPLETING SF 298
1. REPORT DATE. Full publication date, including
day, month, if available. Must cite at least the year
and be Year 2000 compliant, e.g. 30-06-1998;
xx-06-1998; xx-xx-1998.
2. REPORT TYPE. State the type of report, such as
final, technical, interim, memorandum, master's
thesis, progress, quarterly, research, special, group
study, etc.
3. DATES COVERED. Indicate the time during
which the work was performed and the report was
written, e.g., Jun 1997 - Jun 1998; 1-10 Jun 1996;
May - Nov 1998; Nov 1998.
4. TITLE. Enter title and subtitle with volume
number and part number, if applicable. On classified
documents, enter the title classification in
parentheses.
5a. CONTRACT NUMBER. Enter all contract
numbers as they appear in the report, e.g.
F33615-86-C-5169.
5b. GRANT NUMBER. Enter all grant numbers as
they appear in the report, e.g. AFOSR-82-1234.
5c. PROGRAM ELEMENT NUMBER. Enter all
program element numbers as they appear in the
report, e.g. 61101A.
5d. PROJECT NUMBER. Enter all project numbers
as they appear in the report, e.g. 1F665702D1257;
ILIR.
5e. TASK NUMBER. Enter all task numbers as they
appear in the report, e.g. 05; RF0330201; T4112.
5f. WORK UNIT NUMBER. Enter all work unit
numbers as they appear in the report, e.g. 001;
AFAPL30480105.
6. AUTHOR(S). Enter name(s) of person(s)
responsible for writing the report, performing the
research, or credited with the content of the report.
The form of entry is the last name, first name, middle
initial, and additional qualifiers separated by commas,
e.g. Smith, Richard, J, Jr.
7. PERFORMING ORGANIZATION NAME(S) AND
ADDRESS(ES). Self-explanatory.

8. PERFORMING ORGANIZATION REPORT NUMBER.
Enter all unique alphanumeric report numbers assigned
by the performing organization, e.g. BRL-1234;
AFWL-TR-85-4017-Vol-21-PT-2.
9. SPONSORING/MONITORING AGENCY NAME(S)
AND ADDRESS(ES). Enter the name and address of the
organization(s) financially responsible for and
monitoring the work.
10. SPONSOR/MONITOR'S ACRONYM(S). Enter, if
available, e.g. BRL, ARDEC, NADC.
11. SPONSOR/MONITOR'S REPORT NUMBER(S).
Enter report number as assigned by the sponsoring/
monitoring agency, if available, e.g. BRL-TR-829; -215.
12. DISTRIBUTION/AVAILABILITY STATEMENT. Use
agency-mandated availability statements to indicate the
public availability or distribution limitations of the
report. If additional limitations/ restrictions or special
markings are indicated, follow agency authorization
procedures, e.g. RD/FRD, PROPIN, ITAR, etc. Include
copyright information.
13. SUPPLEMENTARY NOTES. Enter information not
included elsewhere such as: prepared in cooperation
with; translation of; report supersedes; old edition
number, etc.
14. ABSTRACT. A brief (approximately 200 words)
factual summary of the most significant information.
15. SUBJECT TERMS. Key words or phrases
identifying major concepts in the report.
16. SECURITY CLASSIFICATION. Enter security
classification in accordance with security classification
regulations, e.g. U, C, S, etc. If this form contains
classified information, stamp classification level on the
top and bottom of this page.
17. LIMITATION OF ABSTRACT. This block must be
completed to assign a distribution limitation to the
abstract. Enter UU (Unclassified Unlimited) or SAR
(Same as Report). An entry in this block is necessary if
the abstract is to be limited.

Standard Form 298 Back (Rev. 8/98)

