SEXUAL ASSAULT AND
SEXUAL HARASSMENT
IN THE U.S. MILITARY
Volume 4. Investigations of Potential Bias
in Estimates from the 2014 RAND
Military Workplace Study
Andrew R. Morral, Kristie L. Gore, Terry L. Schell, editors

CORPORATION

For more information on this publication, visit www.rand.org/t/RR870z6

Library of Congress Cataloging-in-Publication Data is available for this publication.
ISBN: 978-0-8330-9279-3

Published by the RAND Corporation, Santa Monica, Calif.
© Copyright 2016 RAND Corporation

R® is a registered trademark.

Limited Print and Electronic Distribution Rights
This document and trademark(s) contained herein are protected by law. This representation of RAND
intellectual property is provided for noncommercial use only. Unauthorized posting of this publication
online is prohibited. Permission is given to duplicate this document for personal use only, as long as it is
unaltered and complete. Permission is required from RAND to reproduce, or reuse in another form, any of
its research documents for commercial use. For information on reprint and linking permissions, please visit
www.rand.org/pubs/permissions.html.
The RAND Corporation is a research organization that develops solutions to public policy challenges to help
make communities throughout the world safer and more secure, healthier and more prosperous. RAND is
nonprofit, nonpartisan, and committed to the public interest.
RAND’s publications do not necessarily reflect the opinions of its research clients and sponsors.
Support RAND
Make a tax-deductible charitable contribution at
www.rand.org/giving/contribute

www.rand.org

The 2014 RAND Military Workplace Study Team

Principal Investigators
Andrew R. Morral, Ph.D.
Kristie L. Gore, Ph.D.
Instrument Design
Lisa Jaycox, Ph.D., team lead
Terry Schell, Ph.D.
Coreen Farris, Ph.D.
Dean Kilpatrick, Ph.D.*
Amy Street, Ph.D.*
Terri Tanielian, M.A.*

Study Design and Analysis
Terry Schell, Ph.D., team lead
Bonnie Ghosh-Dastidar, Ph.D.
Craig Martin, M.A.
Q Burkhart, M.S.
Robin Beckman, M.P.H.
Megan Mathews, M.A.
Marc Elliott, Ph.D.

Project Management
Kayla M. Williams, M.A.
Caroline Epley, M.P.A.
Amy Grace Donohue, M.P.P.

Survey Coordination
Jennifer Hawes-Dawson

Westat Survey Group
Shelley Perry, Ph.D., team lead
Wayne Hintze, M.S.
John Rauch
Bryan Davis
Lena Watkins
Richard Sigman, M.S.
Michael Hornbostel, M.S.

Project Communications
Steve Kistler
Jeffrey Hiday
Barbara Bicksler, M.P.P.

Scientific Advisory Board
Major General John Altenburg, Esq. (USA, ret.)
Captain Thomas A. Grieger, M.D. (USN, ret.)
Dean Kilpatrick, Ph.D.
Laura Miller, Ph.D.
Amy Street, Ph.D.
Roger Tourangeau, Ph.D.

David Cantor, Ph.D.
Colonel Dawn Hankins, USAF
Roderick Little, Ph.D.
Sharon Smith, Ph.D.
Terri Tanielian, M.A.
Veronica Venture, J.D.

* Three members of the Scientific Advisory Board were so extensively involved in the
development of the survey instrument that we list them here as full Instrument Design
team members.


Preface

The Sexual Assault Prevention and Response Office within the Office of the Secretary
of Defense selected the RAND Corporation to provide a new and independent evaluation of sexual assault, sexual harassment, and gender discrimination across the U.S.
military. To that end, the Department of Defense (DoD) asked the RAND research team
to redesign the approach used in previous DoD surveys, if changes would improve
the accuracy and validity of the survey results for estimating the prevalence of sexual
crimes and violations. In the summer of 2014, RAND fielded a new survey as part of
the RAND Military Workplace Study.
This report, Volume 4 in our series, discusses methodological studies
of nonresponse bias, total survey error, the effects of our sample weighting methods,
analyses of survey breakoff, and other studies of the precision and validity of the survey
and its results. The complete series that collectively describes the study methodology
and its findings includes the following reports:
•	 Sexual Assault and Sexual Harassment in the U.S. Military: Top-Line Estimates for
Active-Duty Service Members from the 2014 RAND Military Workplace Study
•	 Sexual Assault and Sexual Harassment in the U.S. Military: Top-Line Estimates for
Active-Duty Coast Guard Members from the 2014 RAND Military Workplace Study
•	 Sexual Assault and Sexual Harassment in the U.S. Military: Volume 1. Design of the
2014 RAND Military Workplace Study
•	 Sexual Assault and Sexual Harassment in the U.S. Military: Volume 2. Estimates for
Department of Defense Service Members from the 2014 RAND Military Workplace
Study
•	 Sexual Assault and Sexual Harassment in the U.S. Military: Annex to Volume 2.
Tabular Results from the 2014 RAND Military Workplace Study for Department of
Defense Service Members
•	 Sexual Assault and Sexual Harassment in the U.S. Military: Volume 3. Estimates
for Coast Guard Service Members from the 2014 RAND Military Workplace Study
•	 Sexual Assault and Sexual Harassment in the U.S. Military: Annex to Volume 3.
Tabular Results from the 2014 RAND Military Workplace Study for Coast Guard
Service Members

•	 Sexual Assault and Sexual Harassment in the U.S. Military: Volume 4. Investigations
of Potential Bias in Estimates from the 2014 RAND Military Workplace Study.
These reports are available online at: www.rand.org/surveys/rmws.
This research was conducted within the Forces and Resources Policy Center of the
RAND National Defense Research Institute, a federally funded research and development center sponsored by the Office of the Secretary of Defense, the Joint Staff, the
Unified Combatant Commands, the Navy, the Marine Corps, the defense agencies,
and the defense Intelligence Community.
For more information on the Forces and Resources Policy Center, see www.rand.org/nsrd/ndri/centers/frp or contact the director (contact information is provided on the web page).

Contents

Preface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Figures and Tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Acknowledgments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii
CHAPTER ONE

Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
About the 2014 Survey. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Organization of the Report. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Statistical Analysis and Reporting Conventions Used in This Report.. . . . . . . . . . . . . . . . . . . . . . . . . . . 4
CHAPTER TWO

Follow-Up Studies of Survey Nonrespondents. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Study Procedures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Analysis of Nonresponse Bias.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Results. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Discussion and Conclusions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
CHAPTER THREE

The Efficacy of Sampling Weights for Correcting Nonresponse Bias. . . . . . . . . . . . . . . . . . . . . . . 21
Participant Characteristics Associated with Survey Nonresponse.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Association of Participant Characteristics with Survey Outcomes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Characteristics That Could Lead to Nonresponse Bias.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
The Development and Performance of RMWS Weights.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Conclusion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
CHAPTER FOUR

Investigation of Total Survey Error Using Official Records of Reported Sexual
Assaults. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71


CHAPTER FIVE

Performance of the Sexual Assault Survey Module. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Intentionality. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Offender Behavior/Lack of Consent. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Confirming Past-Year Time Frame. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Conclusions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
CHAPTER SIX

Undercounting and Overcounting of Service Members Exposed to Sexual Assault. . . . . 93
Inclusion of Preservice Sexual Assaults. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Exclusion of Assaults Against Members With Fewer Than Six Months of Service. . . . . . . . . . . . 95
Exclusion of Members Who Recently Left the Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Inclusion or Exclusion of Alcohol Blackouts and Fear Responses That Immobilize. . . . . . . . . . 97
Inclusion of Nonpenile Oral Penetration in the Penetration Counts. . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Possible Exclusion of Civilian Sexual Assaults Among Reserve Component Members. . . . . . 98
Conclusions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
CHAPTER SEVEN

Performance of the Sexual Harassment and Gender Discrimination Module. . . . . . . . . 103
Sexual Harassment and Gender Discrimination Screening Items. . . . . . . . . . . . . . . . . . . . . 103
Classification of Sexual Harassment of the Sexually Hostile Work Environment Type. . . 105
Classification of Sexual Harassment of the Quid Pro Quo Type. . . . . . . . . . . . . . . . . . . . . . 113
Classification of Gender Discrimination. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Error in Categorizing Hostile Workplace Experiences. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Conclusion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

CHAPTER EIGHT

Comparison of Events Identified by the Prior Form and RAND Forms. . . . . . . . . . . . . . . . . 123
Some Past-Year Unwanted Sexual Contacts Counted with the Prior Form Occurred
More Than a Year Ago.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
The Prior Form Identifies Fewer Penetrative Sexual Assaults Than the RAND Form. . . . . . 126
Unwanted Sexual Contacts on the Prior Form May Include Events That Are Not
UCMJ Crimes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Differences Between the WGRA and RAND Sexual Harassment Definitions. . . . . . . . . . . . . . 129
Conclusions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
CHAPTER NINE

Analysis of Survey Nonconsent and Breakoff. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Survey Nonconsent Rate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Survey Breakoff Rates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Effect of Survey Breakoff on Sample Characteristics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Conclusions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150


CHAPTER TEN

Service Member Tolerance of the RAND Form. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Complaint Rates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Harm to Victims. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Benefits of the New RAND Survey Using Explicit Questions to Measure Sexual Assault. . . 157
Conclusions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

CHAPTER ELEVEN

Conclusions and Recommendations for Future Administrations of the WGRA. . . . . . 159
Measurement Approach. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Sample Frame. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Sampling Plan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Sample Weighting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Improving Response Rates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Further Study of Nonresponse Bias and Survey Error. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Frequency of WGRA Administration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167

APPENDIXES

A. Phone Survey Script. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
B. Mail Survey (Male and Female Respondent Versions). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
C. Supplementary Tables for Chapter Three. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
D. Supplementary Tables for Chapter Seven. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Abbreviations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237

Figures and Tables

Figures

	 2.1.	 Diagram of RMWS and Nonresponse Follow-Up Studies. . . . . . . . . . . . . . . . . . . . . . . . . . . 7
	 3.1.	 Adjusted Risk Ratios for Factors Significantly Associated with Survey
Response and Sexual Assault. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
	 5.1.	 Flowchart of Survey Logic Underlying Categorization of Past-Year Sexual
Assault.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
	 7.1.	 Flowchart of the Assessment of Sexually Hostile Workplace Harassment,
Quid Pro Quo Sexual Harassment, and Gender Discrimination. . . . . . . . . . . . . . . . . 106
Tables

	 2.1.	 Characteristics of Respondents in Each Follow-up Sample, Relative to Main
Study Respondents (Ratios). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
	 2.2.	 Risk of Sexual Assault, Sexual Harassment, and Gender Discrimination for
Nonrespondent Follow-Up Studies Relative to Their Matched Respondents
from the Main Study. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
	 3.1.	 Predictors in Outcome-Optimized RMWS Weights. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
	 3.2.	 Characteristics Associated with Nonresponse Among Women. . . . . . . . . . . . . . . . . . . . 24
	 3.3.	 Characteristics Associated with Nonresponse Among Men. . . . . . . . . . . . . . . . . . . . . . . . 30
	 3.4.	 Association of Participant Characteristics with Survey Outcomes Among
Women. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
	 3.5.	 Association of Participant Characteristics with Survey Outcomes for Men. . . . . . . 45
	 3.6.	 Comparison of Predicted Risk in Full Sample and Nonresponse-Adjusted
Respondents. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
	 3.7.	 Association of Participant Characteristics with the Difference Between
RMWS and WGRA Weights. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
	 3.8.	 Design Effect of Components of RMWS Weights. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
	 3.9.	 Design Effect of Components of WGRA Weights. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
	 3.10.	 Evaluation of Survey Estimates with RMWS Weights Compared to WGRA
Weights.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
	 4.1.	 Comparison of Survey-Estimated Counts of Reported Sexual Assaults to
Official Reports of Sexual Assault. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
	 5.1.	 Classification of Past-Year Sexual Assault Among Female and Male
Respondents. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82


	 5.2.	 Affirmative Responses to Questions About Assault Intent, Among Those
Presented with the Question. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
	 6.1.	 Proportion of Sexual Assaults Linked to Military Service, All Sexually
Assaulted Active-Duty Respondents with Six to 12 Months of Service.. . . . . . . . . . 94
	 6.2.	 Summary of Possible Biases in the Estimated Number of Active-Component Members Who Experienced a Sexual Assault Due to Sample
Frame and Specification Errors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
	 7.1.	 Fifteen Inappropriate Workplace Behaviors and the Percentage of Men and
Women Who Indicated They Experienced Each Behavior in the Past Year. . . . . 104
	 7.2.	 For Women, Questionnaire Flow from Experiencing an Inappropriate
Workplace Behavior to Being Categorized as Having Experienced Sexual
Harassment of the Sexually Hostile Work Environment Type.. . . . . . . . . . . . . . . . . . . 109
	 7.3.	 For Men, Questionnaire Flow from Experiencing an Inappropriate
Workplace Behavior to Being Categorized as Having Experienced Sexual
Harassment of the Sexually Hostile Work Environment Type.. . . . . . . . . . . . . . . . . . . 111
	 7.4.	 Follow-Up Items Assessing the Level of Evidence for a Possible Quid Pro
Quo Offer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
	 7.5.	 For Women, Questionnaire Flow from Experiencing an Inappropriate
Workplace Behavior to Being Categorized as Having Experienced Sexual
Harassment of the Quid Pro Quo Type. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
	 7.6.	 For Men, Questionnaire Flow from Experiencing an Inappropriate
Workplace Behavior to Being Categorized as Having Experienced Sexual
Harassment of the Quid Pro Quo Type. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
	 7.7.	 For Women, Questionnaire Flow from Experiencing an Inappropriate
Workplace Behavior to Being Categorized as Having Experienced Gender
Discrimination. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
	 7.8.	 For Men, Questionnaire Flow from Experiencing an Inappropriate
Workplace Behavior to Being Categorized as Having Experienced Gender
Discrimination. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
	 7.9.	 Number of Active-Component Respondents Who Were Mischaracterized
as “Missing” When They Should Have Been Coded as Experiencing a
Hostile Work Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
	 7.10.	 Changes in Top-Line MEO Violation Estimates as a Result of
Programming Error.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
	 8.1.	 Estimated Percentage of Active-Component Service Members Who
Experienced Sexual Harassment in the Past Year, as Assessed with the
Prior Form and RAND Form. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
	 9.1.	 Final Participant Response by Survey Item, All RAND Form Types. . . . . . . . . . . . 138
	 9.2.	 Final Participant Response by Survey Item, RAND Short Form.. . . . . . . . . . . . . . . . 142
	 9.3.	 Survey Breakoff by Module, RAND Combined and RAND Short Form. . . . . . 144
	 9.4.	 Final Participant Response by Survey Item, Prior Form. . . . . . . . . . . . . . . . . . . . . . . . . . . 146
	 9.5.	 Predicted Risk of Sexual Assault by the Type of Nonresponse or Breakoff.. . . . . 149
	 10.1.	 Complaints Received About Survey Language in the RAND Survey, by
Respondent and Survey Characteristics (When Known). . . . . . . . . . . . . . . . . . . . . . . . . . 154


	 C.1.	 Adjusted and Unadjusted Associations of Respondent Characteristics with
Response for Active-Duty DoD Women. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
	 C.2.	 Adjusted and Unadjusted Associations of Respondent Characteristics with
Response for Active-Duty DoD Men. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
	 C.3.	 Design Effect of RMWS Weights for Key Reporting Categories.. . . . . . . . . . . . . . . . 226
	 C.4.	 Design Effect of WGRA Weights for Key Reporting Categories. . . . . . . . . . . . . . . . . 227
	 C.5.	 Balance of Weighted Respondents Relative to the DoD Active-Duty
Population Mean of Proxy Variables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
	 D.1.	 Changes in Top-Line MEO Violation Estimates as a Result of
Programming Error, for Men by Service.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
	 D.2.	 Changes in Top-Line MEO Violation Estimates as a Result of
Programming Error, for Women by Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
	 D.3.	 Changes in Top-Line MEO Violation Estimates as a Result of
Programming Error, for Men by Pay Grade. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
	 D.4.	 Changes in Top-Line MEO Violation Estimates as a Result of
Programming Error, for Women by Pay Grade. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
	 D.5.	 Changes in Top-Line MEO Violation Estimates as a Result of
Programming Error, for Reserve-Component Service Members by Gender. . . . 233

Summary

In the summer of 2014, the RAND Military Workplace Study (RMWS) survey was
fielded to more than half a million members of the U.S. military, including active and
reserve component members from each Department of Defense (DoD) service and the
U.S. Coast Guard. The primary objective of the survey was to establish valid estimates
for the prevalence of sexual assault and sexual harassment in the U.S. military, and to
characterize the nature and circumstances of these crimes and violations of military
equal opportunity regulations. The survey questionnaire and many of the methods
used in the RMWS were substantially revised from those used in the Workplace and
Gender Relations Survey of Active Duty Members (WGRA) and the Workplace and
Gender Relations Survey of Reserve Component Members (WGRR) that were previously used by DoD to assess sexual assault and harassment.
The 2014 RMWS was the largest such study ever conducted. With more than
145,300 survey responses from active-component members, the prevalence estimates
generated from the study frequently had 95-percent confidence intervals that spanned
less than one-half a percentage point, suggesting extraordinary precision. However,
these confidence intervals, which only assess the uncertainty due to random sampling
variability, could be misleading. Sampling variability is unlikely to be the primary
source of error for our estimates. Instead, larger errors could result from several factors,
including specification errors, if, for instance, our sexual assault screening module misclassifies individuals; coverage errors, due to the inclusion criteria used in the sample
frame; and survey nonresponse, if our sample weights fail to fully adjust for important
differences between those who chose to participate in the study and those who did not.
The goal of this volume, the fourth volume of the Sexual Assault and Sexual
Harassment in the U.S. Military series, is to examine the influence and magnitude of
these less-easily quantified sources of error that may affect the study’s results (see Volumes 1, 2, and 3 for details of the study design and results).
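To make concrete why sampling variability is the smallest of these concerns, the following sketch computes a design-effect-adjusted 95-percent confidence interval using the standard textbook approximation. The sample size, design effect, and 1.5-percent prevalence are figures cited elsewhere in this report; the formula is a generic approximation, not the report's actual variance estimator.

```python
# Sketch: design-effect-adjusted 95% confidence interval for a prevalence
# estimate. Inputs are figures cited in this report; the formula is the
# standard approximation, not the report's actual variance estimator.
import math

n = 145_300   # active-component survey responses
p = 0.015     # estimated past-year sexual assault prevalence (1.5 percent)
deff = 3.69   # design effect of the RMWS weights (see Chapter Three)

se = math.sqrt(deff * p * (1 - p) / n)  # weight-adjusted standard error
half_width = 1.96 * se

# The CI spans roughly 0.24 percentage points, under half a point as noted above.
print(f"95% CI: {100 * p:.2f} +/- {100 * half_width:.2f} percentage points")
```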


The Sexual Assault and Sexual Harassment Experiences of Survey
Nonrespondents
If survey nonrespondents have different sexual assault or sexual harassment experiences than our weighted sample of respondents, the survey estimates will be biased. This possibility is a particular concern for surveys like the WGRA and most other surveys of military populations, which have, for many years, had response rates below 50 percent. To investigate how nonrespondents differed from respondents, we conducted follow-up studies of the sexual harassment and sexual assault experiences of three samples of service members who were nonrespondents in the main study. All nonrespondents were randomly assigned to one of three follow-up methods: (1) recruitment by phone for a phone interview; (2) recruitment by mail for a self-administered paper survey; or (3) additional time to complete the survey on the web, with no recruitment efforts beyond those of the main study. Across these three follow-up studies of RMWS nonrespondents, more than 6,500 members were assessed for sexual assault, sexual harassment, and gender discrimination experiences. In each follow-up study, we examined differences in the sexual assault and harassment experiences of RMWS nonrespondents compared with matched samples of respondents from the main RMWS survey, and we evaluated whether our nonresponse weighting methods properly accounted for these differences.
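The core comparison in each follow-up study reduces to a weighted risk ratio: the prevalence of an outcome among follow-up nonrespondents relative to their matched main-study respondents. A minimal sketch of that computation follows; the data are simulated and the matching is simplified, and Chapter Two describes the actual estimation.

```python
# Sketch of the follow-up comparison: weighted outcome prevalence among
# nonrespondents in a follow-up sample versus matched main-study respondents.
# Data here are simulated; see Chapter Two for the actual matching/estimation.
import numpy as np

def weighted_prevalence(outcome: np.ndarray, weights: np.ndarray) -> float:
    """Weighted proportion with the outcome (coded 1 = yes, 0 = no)."""
    return float(np.sum(outcome * weights) / np.sum(weights))

def risk_ratio(fu_y, fu_w, matched_y, matched_w) -> float:
    """Follow-up prevalence relative to matched main-study respondents."""
    return weighted_prevalence(fu_y, fu_w) / weighted_prevalence(matched_y, matched_w)

rng = np.random.default_rng(seed=1)
rr = risk_ratio(rng.binomial(1, 0.05, 600), np.ones(600),   # follow-up sample
                rng.binomial(1, 0.04, 600), np.ones(600))   # matched respondents
print(f"risk ratio: {rr:.2f}  (1.0 = no evidence of nonresponse bias)")
```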
The results of these studies identified no consistent pattern of nonresponse bias.
We did find that nonrespondents who were asked about their sexual assault and harassment histories over the phone acknowledged sexual assault and harassment experiences
less often than RMWS survey respondents, even after weighting adjustments. However, this apparent difference may result from well-documented survey mode effects,
in which respondents speaking with live interviewers tend to underreport sensitive or
stigmatized experiences.
Because of this response bias, results from the sample of nonrespondents randomly assigned to the mailed survey condition likely offer a more valid estimate of
the experiences of nonrespondents. This study found no significant evidence of nonresponse bias on sexual assault and harassment outcomes after sample weighting. For
gender discrimination, a small possible bias was detected after weighting, suggesting
that the RMWS prevalence estimates may be too low. The third study, of late web
respondents, suggested that if there is a bias in the RMWS estimates, it is a downward
bias. That is, the RMWS estimates of all three primary outcomes may be lower than
the true population rates.


Efficacy of RMWS Nonresponse Weights
The nonresponse weighting methods used for the RMWS are novel; they allowed us to account for differences in a wider range of known characteristics of respondents and nonrespondents than has previously been possible without unacceptably elevating the variance of survey estimates. In addition to the key service and demographic characteristics included in past WGRA sample weights, we incorporated many more demographic, military service, environment, and fieldwork metadata characteristics.
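One standard way to operationalize this kind of weighting is inverse-propensity adjustment: model each sampled member's probability of responding from frame covariates, then weight respondents by the inverse of that probability. The sketch below illustrates the general technique only, with hypothetical variable names; the actual RMWS weighting procedure is documented in Chapter Three and Volume 1.

```python
# Sketch of inverse-propensity nonresponse weighting (general technique only;
# the RMWS procedure is described in Chapter Three and Volume 1).
import numpy as np
from sklearn.linear_model import LogisticRegression

def nonresponse_weights(X_frame: np.ndarray, responded: np.ndarray) -> np.ndarray:
    """X_frame: covariates known for everyone in the sampled frame.
    responded: 1 if the sampled member completed the survey, else 0.
    Returns inverse-propensity weights for the respondents."""
    model = LogisticRegression(max_iter=1000).fit(X_frame, responded)
    p_respond = model.predict_proba(X_frame)[:, 1]  # estimated response propensity
    return 1.0 / p_respond[responded == 1]
```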
In analyses comparing sample weights generated using our new method with the
earlier WGRA weighting approach, we found that the new weights reduced differences
between the analytic sample and the population on a wide range of factors associated
with both nonresponse and key outcomes that were not satisfactorily addressed using
the earlier methods. Moreover, nearly all of these factors drove bias in the same direction: service members with characteristics associated with a higher risk of sexual assault also had the lowest likelihood of responding to the survey. Therefore, we can be confident that the differences between survey estimates based on the RMWS weights and those based on the traditional WGRA weighting method reflect a reduction in nonresponse bias: the RMWS weights adjust the prevalence estimates upward, providing more-accurate estimates of actual prevalence.
Finally, these reductions in bias were achieved with only modest inflation of variance in the survey estimates. Whereas the overall design effect associated with the traditional WGRA weights was 2.62, the RMWS weights produced a design effect about
40 percent larger (3.69). An assessment of the trade-off between bias and variance suggested that any increase in variance was offset by reduction in bias. Furthermore, we
would argue that for a survey with such a large sample size and extraordinary precision,
a small loss of precision associated with the increased variance due to weighting is justified by the bias reduction that we achieved.
Assessment of Total Survey Error Using an Administrative Records
Benchmark
If our sampling, measurement, weighting, and analysis methods performed correctly,
the true rates of sexual assault and sexual harassment in the military should lie within
the confidence interval of our estimated rates. In reality, we cannot observe the true
values for these rates. However, we included one question on the survey that does have
an objective, observable benchmark value maintained in the administrative records of
the Sexual Assault Prevention and Response Office (SAPRO). Specifically, we asked
those who qualified as having experienced a sexual assault in the past year whether they
completed a DD2910 Victim Reporting Preference form. All victims of sexual assault
who come to the attention of a mandated reporter at DoD are asked to complete this
form to indicate how they want their case handled. Thus, the comparison of our survey’s estimate of the number of DD2910 forms completed in the past year with the
actual number found in SAPRO’s records offers a test of total survey error, not only
nonresponse bias, but also error due to sampling variability, sample coverage, specification errors associated with misclassifying a respondent's past-year sexual assault experiences, computational errors, and other types of error.
Our weighted survey estimate of 2,435 forms filed during the year underestimates
the 2,997 forms actually recorded by SAPRO. Thus, the estimated value is 81 percent
of the true value, which we view as a relatively small error in light of the wide range of
factors that could contribute to this discrepancy. As with the results from our studies
of nonrespondents, the conclusion about total survey error from this analysis suggests
that bias in our estimates is relatively small and it is in a direction that implies the
RMWS estimates of prevalence are lower than the true rates in the population.
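The benchmark comparison itself is simple arithmetic on the two counts reported in Chapter Four:

```python
# The Chapter Four benchmark in one computation.
survey_estimate = 2_435  # weighted survey estimate of DD2910 forms filed
sapro_count = 2_997      # forms actually recorded by SAPRO

print(f"estimate / benchmark = {survey_estimate / sapro_count:.0%}")  # about 81%
print(f"shortfall: {sapro_count - survey_estimate} forms")            # 562 forms
```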
The precise causes of the discrepancy cannot be fully identified. However, one contributing source is not actually survey error. The SAPRO count of DD2910
forms assesses the number of incidents that were reported, while the survey estimate
counts the prevalence, i.e., the number of individuals reporting one or more incidents.
A more accurate assessment of survey error would exclude this double counting from the
official records. In addition, one known source of survey error is the sample frame exclusion criteria, which are discussed below. Specifically, the survey excludes service members who left the military prior to sampling, as well as those who joined in the prior six
months. Any Victim Reporting Preference statements filed by such individuals will not
be counted in the survey estimate, which partially explains the observed survey error.
Undercounting and Overcounting of Sexual Assault in the RMWS
Our sampling plan, survey questionnaire, and analysis plan all required judgments
that affected how we counted incidents and the prevalence of sexual assault. We examined several such key decisions to evaluate how much effect they may have had on our
survey estimates. For instance, because we asked about past-year sexual assault experiences and included service members with just six months of service, it is possible that
some of their past-year sexual assaults occurred before they joined the military. We
found, however, that 98 percent of the assaults against service members with fewer
than 12 months of active-duty service were committed in a military setting, during
training, or by another member of the service. If we assume that the remaining 2 percent of the assaults occurred before the service member joined the military, the effect
of excluding these cases from the overall estimates is small, suggesting our population
estimate of 20,300 members experiencing sexual assaults in the past year could be an
overestimate by approximately ten members.


On the other hand, the fact that we excluded service members with fewer than
six months of service means we failed to count any assaults against them. Using the
experiences of those members with six to 12 months of service to set bounds on the
likely magnitude of underreporting attributable to the exclusion of those with fewer
than six months of service, we estimated that this exclusion could cause our estimate of 20,300 active-duty members exposed to sexual assaults in the past year to understate the true number by 25 to 190 people.
A more significant underestimate may result from the exclusion of members who
experienced sexual assaults in the past year but separated from the service before our
sample frame was drawn. Using reasonable assumptions about rates of sexual assault
among those who separate from the military, we found that correcting for the omission
of these cases would cause our estimate of 20,300 active-duty members experiencing
sexual assaults in the past year to increase by 900 to 2,800 members or more.
We counted as sexual assaults some events about which the expert legal opinions
we received were strongly divided. For instance, if a person said they had an unwanted
sexual experience, but could not recall the details of the event because of the effects of
too much alcohol, we counted such events as having occurred without consent. Similarly, if none of the offender coercion behaviors occurred, but the respondent indicated
they could not consent because they were frozen in fear, we counted this as meeting
the criteria for sexual assault as well. The net effect of these decisions was very small.
Had we excluded all such cases from our estimates of the prevalence of past-year sexual
assaults, our estimate that 20,300 service members were sexually assaulted in the past
year would have been reduced by about 50 members.
As specified in Article 120 of the Uniform Code of Military Justice (UCMJ), we
included the mouth as one of the orifices that, when violated, would be counted as a
penetrative assault. Some reviewers suggested that such offenses may involve unwanted
kissing with the tongue. While this is clearly the intent of the law, we assessed how our
estimates of sexual assault would change if all penetrations of the mouth that involved
something other than a penis were excluded; under that exclusion, our population estimate of 20,300 service members experiencing sexual assaults in the past year would decline by about 250 members.
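To see how these sensitivity analyses combine, the sketch below tallies the Chapter Six frame-error adjustments around the 20,300-member estimate. The point values and ranges are the report's; combining them additively is our simplification of Table 6.2, and the alcohol-blackout (about 50 members) and nonpenile oral penetration (about 250 members) cases are definitional choices the report retains, so they are not subtracted here.

```python
# Sketch: combining the Chapter Six frame-error bounds around the estimate
# that 20,300 active-component members were sexually assaulted in the past
# year. Additive combination is our simplification of Table 6.2.
base = 20_300

low  = base - 10 + 25 + 900     # preservice events out; low-end undercounts in
high = base - 10 + 190 + 2_800  # high-end undercounts in ("or more" per the text)

print(f"range after frame corrections: {low:,} to {high:,} or more")
```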
Differences in the Events Counted Using RAND and WGRA Questions
Whereas most survey respondents received a version of the new RAND survey form
(the RAND form), 29,541 respondents were randomly assigned to a questionnaire that
included the sexual harassment and unwanted sexual contact questions used in earlier
administrations of the WGRA survey (the prior form). This survey design allowed us
to compare estimates derived from the RAND form with those from the prior form,
providing a direct comparison of the types of events counted and not counted by each
measurement approach.


Sexual Assault

The RAND form and the prior form used different approaches to establishing that
counted sexual assaults occurred in the past year. Both forms, however, included an
item at the end of the sexual assault (or unwanted sexual contact) module asking how
confident respondents were that the event occurred in the past 12 months. Of those
who received the RAND form, 6.8 percent responded by saying they were sure the
event actually occurred more than a year ago (i.e., should not be counted as a pastyear event), compared with 23.0 percent of those receiving the prior form. All who
indicated they were sure that no sexual assault occurred in the past year were excluded
from the prevalence estimates generated from the RAND form, but—consistent with
prior methods—these cases were included in the prior form prevalence estimates. We
present analyses demonstrating that the past-year prevalence estimate generated by the
prior form is likely overestimated by 20 percent due to the inclusion of service members
whose most recent unwanted sexual contact occurred more than a year earlier.
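The basic arithmetic of the out-of-window adjustment is shown below: if a fraction f of a count is known to fall outside the past-year window, removing it scales the count by (1 - f). The report's 20-percent overestimate figure for the prior form reflects its fuller analysis, so this sketch illustrates the direction and rough size of the effect rather than reproducing that number.

```python
# Sketch: removing counted cases known to fall outside the past-year window.
# The report's 20-percent figure for the prior form reflects a fuller
# analysis; this shows only the basic arithmetic of the adjustment.
def remove_out_of_window(count: float, f: float) -> float:
    """Count after excluding the fraction f of cases that are out of window."""
    return count * (1.0 - f)

count = 1_000                                  # illustrative counted cases
adjusted = remove_out_of_window(count, 0.23)   # 23.0% sure event was >1 year ago
print(f"{count} counted -> {adjusted:.0f} after exclusion")
```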
Similarly, we demonstrate that the prior form identifies only about one-half as
many service members who experienced penetrative assaults as the RAND form (4,200
versus 7,800), an effect particularly acute for male service members, among whom the
RAND form identifies three times as many experiencing penetrative assaults (1,200
versus 3,700). We interpret these differences as likely attributable to the RAND form
identifying more sexual assaults that occur in the context of hazing or that are not perceived as sexual by the service member.
On the other hand, the prevalence of unwanted sexual contacts that are not penetrative (assessed on the prior form) is substantially higher than the prevalence of nonpenetrative sexual assault (assessed on the RAND form). Indeed, the prior form counts
5,600 more individuals in this category than the RAND form. If the experiences of these individuals do not, in fact, meet the criteria for a UCMJ sexual assault, as suggested by the RAND form's failure to identify a similar proportion of nonpenetrative assault victims, then 25 percent of all unwanted sexual contacts counted using the prior form were not crimes. Many may not even have met the criteria for unwanted sexual contact: we find that 18 percent of those reporting "one event" on the prior form that was not penetrative affirm that their unwanted sexual contact experience met none of the behavioral descriptions defining unwanted sexual contact.
Sexual Harassment

The RAND sexual harassment module, unlike the WGRA sexual harassment module,
does not require respondents to know the definition of sexual harassment and correctly
apply it to their experiences in order to have those experiences counted as sexual harassment. This WGRA "labeling" requirement substantially reduces estimates of
sexual harassment prevalence. Indeed, if the RAND form required correct labeling to
count instances of sexual harassment, our overall prevalence rate for past-year sexual
harassment would have fallen by 30 percent and rates for men would have fallen by
50 percent.
Another difference between the forms is that the RAND form, unlike the prior
form, includes unwanted sexual touching by a coworker as one type of event that could
be classified as sexual harassment (even if it also qualifies as a sexual assault). In practice, however, we find that this difference does not lead to meaningful differences in
prevalence estimates.
When we adjusted the RAND form past-year sexual harassment prevalence rates
to match the criteria used in the prior form (implementing the “labeling” requirement,
excluding sexual touching, and making other adjustments), we found that the RAND form
identified fewer cases of sexual harassment against women and comparable numbers
for men compared to the prior form.
The fact that the over- and undercounts described here for the prior form approximately cancel one another out should not be taken as evidence that the prior form and
RAND form provide equivalent results or are equally satisfactory measures of sexual
offenses. For purposes of tracking the effectiveness of DoD policies or for estimating
the total number of offenses occurring against service men and women, measures must
accurately and precisely count people or events that are the target of training, prevention, or other policies or programs.
Sample Attrition and Breakoff Across the Survey Instrument
When respondents stop answering questions, or “break off,” before reaching the last
question in the survey, it reduces the precision with which questions later in the survey
are measured. It also can introduce nonresponse bias on later survey questions if
respondents who break off differ systematically from those who do not in ways that are
not addressed by sample weighting. We examined patterns of survey breakoff on the
RAND form and provide an innovative analysis of the sample characteristics of those
who make it through each step in the survey—from accessing the web form, to completing the informed consent, to answering each question presented to them.
Results of this analysis show that fewer than 4 percent of those who began the RAND forms broke off before the sexual assault module (which had to be completed for the survey to count as complete). This compares favorably with the 2012 WGRA instrument, in which 13.9 percent broke off prior to the mandatory item assessing unwanted sexual contact, and with the prior form used in the current study, which had 6.5 percent breakoff before the unwanted sexual contact item.
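The item-by-item tabulation behind these figures can be sketched as follows: a respondent has broken off at a given item if they answered nothing from that item onward. The response matrix here is hypothetical; Chapter Nine reports the actual retention by item.

```python
# Sketch: cumulative breakoff rate by item. A respondent counts as broken off
# at item j if they answered nothing from item j onward (skipping an item but
# answering later ones is item nonresponse, not breakoff). Data hypothetical.
import numpy as np

def breakoff_by_item(answered: np.ndarray) -> np.ndarray:
    """answered: (respondents x items) 0/1 matrix, 1 = item answered.
    Returns the breakoff rate at each item."""
    # count of answered items at or after each position, per respondent
    remaining = np.cumsum(answered[:, ::-1], axis=1)[:, ::-1]
    return 1.0 - (remaining > 0).mean(axis=0)

answered = np.array([[1, 1, 1, 1],    # completed the survey
                     [1, 1, 0, 0],    # broke off at item 3
                     [1, 0, 1, 1]])   # skipped item 2, did not break off
print(breakoff_by_item(answered))     # [0.0, 0.0, 0.333..., 0.333...]
```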
We found little evidence that overall nonresponse was a reaction to the survey
content. Only a small proportion of study nonrespondents dropped out after being
directly informed about the survey content or seeing the survey questions; those who
dropped out after that point did not, on average, have characteristics that put them at
higher or lower predicted risk for sexual assault than those who completed the survey.
The primary difference in predicted risk between respondents and the intended sample
was due to those who never answered a single survey question, rather than those who
broke off after beginning the survey.
Survey Complaints and the Costs and Benefits of Using Survey
Language Some Respondents Find Offensive or Distressing
Our use of behaviorally and anatomically specific language in the sexual assault module
offended some service members, was distressing for some victims of sexual assault, and
raised questions among DoD leadership about whether the problems the language
created (including stories in the press questioning DoD’s approval of the survey) were
justified by the benefits of using precise language.
We examine data we collected on complaints about survey language as a proxy to
understand who was offended and the likely harms associated with participating in the
survey. We found that the RAND form, with 122 complaints per 100,000 completed
surveys, was far more likely to trigger survey language complaints than was the prior
form. The offense experienced was not sufficiently severe or widespread, however, to
cause a surge of breakoffs during the sexual assault screening module, where the language drawing complaints occurred. Indeed, more respondents broke off in the uncontroversial sexual harassment screening module than in the sexual assault screening
module, and breakoffs overall were similar for the combined RAND forms and prior
form (which drew few language complaints).
Participants with characteristics putting them at the lowest risk for sexual assault
tended to object to the survey language at the highest rates. These include men, officers,
and those with more-senior pay grades. Possibly, the risk of assault feels so remote to
these respondents that the inconvenience of being asked questions about whether they
themselves have experienced such violations outweighs any benefits they can imagine
the survey producing. Alternatively, perhaps these groups are more likely to express
their complaints about any topic, possibly because they are representing the views of,
or complaints from, their subordinates. In that case, their higher complaint rates would
have nothing to do with their lower risk of sexual assault.
Obtaining accurate data on the proportion of service members who are sexually assaulted each year is critical for sound policy on sexual assault prevention and
response. Sexual assault is a technical legal construct that is defined in the UCMJ
using anatomically and behaviorally specific language. To accurately identify events
that meet the UCMJ definitions, similarly specific language must be used in surveys.
This approach is widely used in sexual assault research in civilian populations, and it
is the approach recommended by the National Research Council for surveying sexual
assault experiences (National Research Council, 2014).


Conclusions
Our investigations of a range of possible sources of error found no conclusive evidence
of substantial bias or error in the previously reported RMWS estimates. However, there
was a general pattern across these investigations, suggesting that our primary RMWS
study estimates of sexual assault, sexual harassment, and gender discrimination are
more likely to be underestimates than overestimates of true population values. In particular, three types of evidence suggest that the survey estimates could underestimate
the true values: (1) the nonresponse follow-up studies, (2) the analysis of individuals
excluded from the sample frame, and (3) the comparison of survey estimates of officially reported sexual assaults with the number of actual reports.
In contrast, we found little evidence that the study was overcounting these outcomes. For example, although we concluded that a small number of pre-service sexual
assaults may be captured in our estimates, this number is almost certainly lower than
the larger number of assaults that go uncounted because we excluded members with
fewer than six months of service and those who left the military shortly before the
survey fielded. Similarly, our analyses of the performance of the sexual assault and
sexual harassment modules provide no indication that more incidents were counted as
crimes or violations than should have been.
Our conclusion that the study is more likely to underestimate than overestimate
the true values is stronger for the estimated counts of individuals who experienced
these violations (e.g., 20,300 service members experienced a sexual assault in the past
year) than for the estimated prevalence of these crimes (e.g., 1.5 percent of service
members experienced a sexual assault in the past year). This is because the strongest
evidence for bias comes from the fact that the survey sample frame clearly excluded
some individuals who served in the military in the past year and who may have experienced these outcomes (e.g., members who separated before the sample was drawn).
Therefore, even if our rate estimates are unbiased, when we multiply them by the
total number of members represented by the sample frame, we know the product will
underestimate the total number of service members sexually assaulted in the past year
because the sample frame excludes some members who were exposed to this risk. This
source of bias may explain a substantial proportion of the total survey error identified
by our comparison of survey-estimated counts of reported sexual assault to official
reports of sexual assault (see Chapter Four). In contrast, evidence of bias in the estimated prevalence of sexual assault, sexual harassment, and gender discrimination is
weaker; the incomplete coverage of the sample frame necessarily has smaller effects on
prevalence rates than on population counts. The three nonresponse follow-up studies
(Chapter Two) provide some limited evidence that the reported prevalence underestimates the true value. However, those effects were descriptively small, and were not
consistent across follow-up methods.
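The distinction between rate bias and count bias can be made explicit: the count is the rate multiplied by the frame population, so members at risk but outside the frame contribute nothing even when the rate itself is unbiased. In the sketch below, the 1.5-percent rate and 20,300 count come from this report; the number of excluded members is hypothetical.

```python
# Sketch: an unbiased rate still yields an undercounted total when at-risk
# members fall outside the sample frame. Excluded count is hypothetical.
rate = 0.015                  # estimated past-year prevalence (1.5 percent)
frame_n = 20_300 / rate       # population represented by the sample frame
excluded_n = 60_000           # hypothetical at-risk members outside the frame

count_frame = rate * frame_n                # 20,300 (the reported count)
count_full = rate * (frame_n + excluded_n)  # if the same rate applies to all
print(f"undercount: {count_full - count_frame:,.0f} members")  # 900
```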


In addition to the previously reported RMWS estimates, the RAND study also
replicated WGRA methods to produce time-trend data using the same measurement
and weighting methods as WGRA surveys conducted in past years. The current investigations of bias provide stronger evidence that the WGRA methods underestimated
the true rate of sexual assault. In particular, the analysis of nonresponse weights found
that the WGRA system of weights resulted in the underrepresentation of a number
of groups of service members who have a high risk for sexual assault and harassment.
In addition, the prior form identified substantially fewer penetrative sexual assaults
than the RMWS, particularly among men. However, this classification error was partially offset by telescoping errors, or the tendency of respondents to report events as
having occurred more recently than they actually did. These errors result in a substantial proportion (23 percent) of respondents being counted as experiencing unwanted
sexual contact in the past year, even though they later indicated that their last such experience
occurred more than 12 months before the survey.
Recommendations
The findings described in this report and our experiences conducting the 2014 RMWS
support several recommendations for future WGRA surveys and the analysis of the
data they collect.
•	 Measurement approach. Future WGRA surveys should use the RMWS measurement approach, or comparable survey questions that use behaviorally and anatomically specific language to clearly define criminal sexual assault and violations
of equal opportunity law and policy. In future WGRA surveys, the Defense Manpower Data Center (DMDC) should consider supplementing the RMWS measure of gender discrimination with additional questions to establish (a) whether
the discrimination was legally mandated by the service (e.g., the exclusion of
women from combat occupations), (b) the specific nature of the career harm suffered, and (c) the evidence that a coworker’s gender biases harmed a service member’s career.
•	 Sample frame. Omitting service members who recently separated from the military could lead to significant bias in estimates of past-year sexual assaults, sexual
harassment, and gender discrimination. As a result, we recommend including
past-year separations in the sample frame of future WGRA surveys, or developing
analytic approaches for estimating the number of crimes and violations those who
separated experienced in the past year. Minimally, separations that occur after the
WGRA sample frame is drawn should not be counted as ineligible, as has been
the practice in earlier versions of the WGRA. In addition, because recent separations appear to have elevated risk of past-year sexual assaults, sexual harassment,
and gender discrimination, the Office of the Secretary of Defense (OSD) should
evaluate (1) what effect such violations have on military careers and retention and
(2) whether making an official report or receipt of available services reduces the
separation rates of service members who have been sexually assaulted, harassed,
or discriminated against.
Sampling plan. DMDC should design future surveys to include sufficient numbers of men in the sample to ensure ongoing assessment of the nature of sexual
assaults against them. In practice, this means large sample surveys that may not
oversample women at rates as great as in the RMWS or previous WGRA studies. This can be done without reducing the precision of women’s estimates below
those of men.
Sample weighting. DMDC should build on approaches developed for the RMWS
to include a wider set of factors in future nonresponse weighting models than has
previously been possible for military surveys like the WGRA.
Improving response rates. OSD, the Defense Information Systems Agency, and
the services should collaborate to improve the coverage and reliability of email
contact information in the personnel systems used for survey recruitment. Also,
DMDC should investigate additional modes of recruitment (phone or text message) that improve outreach to members who do not routinely use email as part
of their military duties.
Further study of nonresponse bias in future surveys. In future administrations of
the WGRA, DMDC should continue to compare survey estimates with actual
numbers of filed Victim Reporting Preference forms as a measure of nonresponse
bias and total survey error more generally. The procedure we used could be further refined to better match the survey’s sample frame with the victim preference
statements counted in the SAPRO database.
Survey frequency. OSD should conduct the survey no more frequently than once
every two to four years.

Acknowledgments

For this methodological volume of the Sexual Assault and Sexual Harassment in the
U.S. Military series, we wish to highlight the invaluable methodological guidance and
support we received from the members of our scientific advisory board. We are also
grateful for the expert advice provided to us by the Defense Manpower Data Center
(DMDC), especially Elizabeth Van Winkle, who shared DMDC experience from
prior administrations of the Workplace and Gender Relations surveys and who served
as a liaison between RAND and other parts of DMDC.
We have benefited from a strong and critical set of internal and external reviewers,
including Greg Ridgeway, Layla Parast, Robert Fay, and Roderick Little.
Finally, we again want to acknowledge the service men and women who took the
time to complete the RAND Military Workplace Study survey and share their experiences, even when those experiences were painful to recount.


CHAPTER ONE

Introduction
Andrew R. Morral, Kristie L. Gore, and Terry L. Schell

In the spring of 2014, RAND was asked by the Sexual Assault Prevention and Response
Office (SAPRO) in the Office of the Secretary of Defense (OSD) to conduct the 2014
Workplace and Gender Relations Survey of Active Duty Members (WGRA) and the
Workplace and Gender Relations Survey of Reserve Component Members (WGRR),
biennial surveys of the state of gender relations in the military required by Congress.
The terms of the project required that RAND make any necessary changes to the measurement approach, sampling plan, and analytic plan to ensure that the survey results
would represent the best available information on the prevalence of criminal sexual
assault and military equal opportunity (MEO) sexual harassment and gender discrimination violations in the U.S. military.
In consultation with experts at RAND and other institutions, a scientific advisory board, the Defense Manpower Data Center (DMDC), and Sexual Assault Prevention and Response (SAPR) program officials from each service, RAND completely
redesigned the survey questions used to assess each of the principal outcomes, developed a new approach to sample weighting designed to reduce nonresponse bias, and
designed a follow-up study of survey nonrespondents to examine whether their exposure to sexual assault and sexual harassment differs systematically from the weighted
sample of respondents.
Because results from the 2014 WGRA were required for a report to the President
on Department of Defense (DoD) progress addressing sexual assault—to be delivered
no later than December 1, 2014—RAND had to make these changes and field, analyze, and report results on the survey in a span of eight months. This left little time for
pretesting many of the changes we introduced to the survey design beyond some basic
assessments of whether the target population correctly understood and was able to tolerate the new survey questions.
Instead of pretesting, in several cases we were able to design the study to include
experiments and substudies that could be examined after survey fielding to evaluate
the performance of the new survey instrument and other aspects of the study design.
This fourth volume of the Sexual Assault and Sexual Harassment in the U.S. Military
series presents the results of these experiments and additional analyses we conducted
to evaluate the quality and credibility of the findings that the new survey design produced (see Volumes 1, 2, and 3, as well as their annexes, for details of the study design
and results).

About the 2014 Survey
DoD, in consultation with the White House National Security Staff, stipulated that
the sample for the new study—which became known as the RAND Military Workplace Study (RMWS)—was to include a census of all women and 25 percent of men in
the active components of the Army, Navy, Air Force, and Marine Corps. In addition,
we were asked to include a smaller sample of National Guard and other reserve-component members sufficient to support comparisons of sexual assault, sexual harassment,
and gender discrimination between the active and reserve components. The U.S. Coast
Guard also asked that RAND include a sample of its active- and reserve-component
members. In total, therefore, RAND invited close to 560,000 service members to participate in the study, making it the largest study of sexual assault and harassment ever
conducted in the U.S. military.
Active-component respondents were randomly assigned to one of four different
survey instruments:
1.	 A “long form,” consisting of a sexual assault module; a sex-based MEO violation module, which assessed sexual harassment and gender discrimination; and
questions on respondent demographics, psychological state, command climate,
attitudes and beliefs about sexual assault in the military and the nation, and
other related issues.
2.	 A “medium form,” consisting of the sexual assault module, the sex-based MEO
violation module, and demographic questions.
3.	 A “short form,” consisting of the sexual assault module, the screening items
from the sex-based MEO violation module, and demographic questions. Thus,
these respondents did not complete the full, sex-based MEO violation assessment.
4.	 The “prior form,” consisting of the unwanted sexual contact, sexual harassment,
and gender discrimination assessments from the 2012 WGRA.
The long, medium, and short forms included the new questions developed at
RAND to more reliably measure criminal sexual assault experiences as defined in the
Uniform Code of Military Justice (UCMJ), and MEO violations of sexual harassment
and gender discrimination as defined in DoD Directive 1350.2 (Under Secretary of
Defense for Personnel and Readiness, 1995). Respondents who received the prior form
saw questions nearly identical to those used in the 2012 WGRA survey, including
questions on “unwanted sexual contact,” the construct used to measure sexual assaults,
and sexual harassment. Reserve-component members were randomly assigned to the
medium or short forms, and members of the Coast Guard active component received
only the long, medium, or short forms.
A total of 477,513 members of the DoD active component were randomly selected
from a population of 1,317,561 active-component DoD service members who met
the study inclusion criteria, which required that they be age 18 or older, below the rank
of a general or flag officer, and in service for at least six months. These are the same
inclusion criteria used in prior WGRA surveys. The sample included 197,491 women
and 280,022 men.
The web-based survey was fielded by Westat, a commercial research firm, in the
summer of 2014. Of the 477,513 DoD active-component members invited to take the
RMWS survey, 145,300 individuals participated, or just over 30 percent. The respondents included 34 percent of the women sampled (67,187) and 27.9 percent of the men
(78,113). Service members in the Air Force had the highest response rate (43.5 percent), followed by Army (29.4 percent), Navy (23.3 percent), and Marine Corps (20.6
percent).
Organization of the Report
The focus of this volume is the technical performance of the RMWS study design,
survey instrument, and sample weights, focusing on multiple sources of bias that could
have undermined the accuracy of our survey results. Chapter Two examines nonresponse bias found in samples of survey nonrespondents whose sexual assault and
harassment experiences we were later able to assess as part of a follow-up study of
nonrespondents. Chapter Three examines the characteristics of the sample weights we
developed using a new approach to survey weighting that permits adjustment on many
more differences between respondents and nonrespondents than earlier methods. We
then compared the survey estimates and variances using the new and older approaches
to developing sample weights. Chapter Four assesses total survey error by comparing
weighted survey estimates for the number of Victim Reporting Preference forms signed
in the past year by active-component members with the true value for that number as
revealed in SAPRO tracking data.
Chapter Five provides details on how men and women progressed through the
sexual assault module in the RAND form, and on how follow-up items assessing the
UCMJ criteria for such assaults filtered the identified unwanted experiences down to just
those qualifying as a sexual assault. Chapter Six examines possible sources of overcounting or undercounting of sexual assault attributable to sample frame inclusion
criteria and scoring decision rules we implemented. Chapter Seven provides similar
descriptive analyses for the sexual harassment and gender discrimination module.


Chapter Eight compares the people and events counted by the prior form as
experiencing either unwanted sexual contact or sexual harassment with those counted
by the RAND form as experiencing sexual assaults or sexual harassment. Chapter
Nine describes patterns of survey breakoff found on the RAND forms and the prior
form, and provides an assessment of whether breakoff differentially occurred among
those with higher or lower risk of sexual assault. Chapter Ten examines the complaints
RAND or others received about the language used in the sexual assault module.
Finally, Chapter Eleven draws crosscutting conclusions and makes a series of recommendations for future administrations of the WGRA.
Statistical Analysis and Reporting Conventions Used in This Report
The statistical analyses presented in this report and its appendixes employ statistical
procedures designed to reduce the likelihood of drawing inappropriate conclusions or
compromising the privacy of respondents.
First, we assured respondents in the survey Privacy Statement (part of the informed
consent) that our reports would not include analyses conducted within subsets smaller
than 15 respondents. Thus, to maintain participant privacy, the report and its appendixes do not include sample statistics (including confidence intervals) computed within
groups smaller than 15 unweighted respondents. If such a cell appears in a table, the
point estimate and its confidence interval are replaced with “not reportable” (NR).
Second, the report contains estimated population percentages that vary dramatically in their statistical precision. Some estimates have a 95-percent confidence interval with a width of 0.3 percentage points, while others have a width of 30 percentage
points. This occurs because some percentages are estimated using more than 100,000
respondents, while others are estimated on small subsamples (e.g., male airmen who
experienced a sexual assault). To reduce the likelihood of misinterpretations, percentages with very low precision are not reported. Specifically, percentages estimated with
a margin of error greater than 15 percentage points are replaced with NR (where the
margin of error is defined as the larger half-width of the confidence interval). In such
cases, the confidence intervals are still presented to communicate the range of percentages that is consistent with the data. Such imprecise estimates are better thought of
as ranges than as points.
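
These two suppression rules are mechanical; as a concrete illustration, they could be applied as in the following sketch (a hypothetical helper, not the report's production code; Python):

def format_percentage(pct, ci_low, ci_high, n_unweighted):
    """Apply the suppression rules described above."""
    if n_unweighted < 15:
        return "NR"  # neither the point estimate nor its CI is reported
    # Margin of error: the larger half-width of the confidence interval.
    margin = max(pct - ci_low, ci_high - pct)
    if margin > 15:
        # Point estimate suppressed, but the CI is still presented.
        return f"NR ({ci_low:.1f}, {ci_high:.1f})"
    return f"{pct:.1f} ({ci_low:.1f}, {ci_high:.1f})"

print(format_percentage(42.0, 22.0, 64.0, n_unweighted=120))  # NR (22.0, 64.0)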
The text and tables in this report do not use a constant level of numerical precision. Because the statistical precision of the estimates varies by more than two orders of
magnitude, and because the purposes of the numbers presented in the text and tables
differ, we have tried to select a level of numerical precision that is appropriate for each
situation.

CHAPTER TWO

Follow-Up Studies of Survey Nonrespondents
Terry L. Schell, Andrew R. Morral, Lisa H. Jaycox,
Coreen Farris, and Bonnie Ghosh-Dastidar

Survey nonresponse has the potential to introduce bias in the estimates of key outcomes. To the extent that nonrespondents and respondents differ on observed characteristics, we can use weights to adjust the sample so that the weighted respondents
match the full population on those observed characteristics. This can eliminate the
portion of nonresponse bias associated with those observed variables. When all nonresponse bias can be eliminated in this manner, the “missingness” is called ignorable
or missing at random (Little and Rubin, 2002). The more variables that are observed
for the nonrespondents and incorporated into the weights, the more plausible it is to
assume that the weights eliminate any nonresponse bias. This study had an unusually
large amount of data on survey nonrespondents because DMDC provided RAND
with a broad range of socio-demographic and workplace-related measures from sampled members’ personnel files. The survey weights incorporated this information in an
attempt to make the weighted analytic sample representative of the full population in
terms of their risk for sexual assault, sexual harassment, and gender discrimination.
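The weighting model itself is documented in Volume 1. Purely to illustrate the general idea of adjusting on observed characteristics, a minimal propensity-based sketch (all file and column names hypothetical; Python) might look like this:

import pandas as pd
import statsmodels.formula.api as smf

# One row per sampled member: personnel-file covariates plus a 0/1
# `responded` indicator (hypothetical file and column names).
frame = pd.read_csv("sample_frame.csv")

# Model each member's propensity to respond from observed characteristics.
fit = smf.logit(
    "responded ~ C(sex) + C(service) + C(pay_grade) + age", data=frame
).fit()

# Weight respondents by the inverse of their estimated response propensity,
# so that underrepresented groups count for more in weighted estimates.
frame["p_hat"] = fit.predict(frame)
respondents = frame[frame["responded"] == 1].copy()
respondents["weight"] = 1.0 / respondents["p_hat"]

The actual RMWS weights were built from a far richer predictor set; the point of the sketch is only that observed characteristics of nonrespondents drive the adjustment.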
When the propensity to respond (or participate) in a survey is related to survey outcomes even after adjusting for all of the individual-level characteristics available, nonresponse weights will not fully eliminate nonresponse bias. Such an association could
occur for several reasons. For example, individuals may choose to respond to the study
precisely because they have experienced one of these violations and are more motivated to “tell their story” on the survey than unaffected service members. In that case,
the respondents to the survey could have higher rates of sexual assault or harassment
than nonrespondents, even when these groups are otherwise matched on demographic
and military factors. Alternatively, individual respondents who have experienced these
violations may avoid participating in the study. For example, post-traumatic cognitive avoidance is widely observed in traumatized samples and is part of the definition
of posttraumatic stress disorder (American Psychiatric Association, 2013). Similarly,
some victims may not trust the researchers to keep experiences reported on the survey
private, potentially putting them at risk for embarrassment or retaliation. To the extent
that those who experience such outcomes participate at lower rates than the rest of the
sample, survey estimates may be lower than the underlying “true” population estimate.


These hypothesized mechanisms could each produce non-ignorable missingness
(Little and Rubin, 2002), which refers to missing data that is uniquely associated with
the survey measurements of interest, and thus its effects are not eliminated through
standard methods to minimize nonresponse bias. We are primarily concerned about
the net effect of all of these mechanisms. For example, if the number of people who
experienced sexual assault and avoid the study because of the assault is the same as the
number who participate specifically because of their assault, the aggregate pattern of
nonresponse would not be associated with sexual assault, and these mechanisms would
not produce a nonresponse bias in our estimates.
Assessing the existence of non-ignorable nonresponse is difficult or impossible
within the primary study data set. Nor can it be determined by assessing the plausibility of various theories of how an individual’s propensity for responding depends on
their experiences with sexual assault or harassment; all of these theories could be correct and yet there may be no overall response bias.
The only direct way to assess this type of bias is to measure the association
between the study outcomes and propensity for responding; however, that requires
observing the study outcomes among survey nonrespondents. Because we considered
nonresponse bias to be the largest potential source of survey error in the RMWS, the
study was designed to include additional data collection on survey nonrespondents to
directly assess this potential bias.
The purpose of our follow-up studies of nonrespondents is to go back into the
field and empirically assess the primary study outcomes in groups of individuals who
failed to respond to the main study. These data can then be used to determine if the
respondents to the main survey have higher or lower rates of sexual assault, sexual
harassment, or gender discrimination than the observed nonrespondents, even when
these groups are otherwise matched on demographic and military factors. This comparison may provide information about the direction and approximate magnitude of
any nonresponse bias.
Study Procedures
Overview

We conducted three follow-up studies of survey nonresponse, each using slightly different procedures to collect outcome data on sampled members who failed to respond
within the survey field period. Although the sample was randomized into these three
groups prior to fielding the survey, group assignment had no effect on any procedures
used in the main RMWS survey. Assignment only affected study procedures for those
members who were nonrespondents at the conclusion of the main study field period.
The three follow-up groups correspond to (1) nonrespondents who were subsequently
recruited by phone and mail for a phone interview (phone follow-up sample); (2) non-

Follow-Up Studies of Survey Nonrespondents

7

respondents who were subsequently recruited by mail for a self-administered paper
survey (mail follow-up sample); and (3) nonrespondents who were given additional
time to complete the survey on the web, but were not subject to additional recruitment
efforts beyond those of the main study (late web sample).
Participants

For the main RMWS, 391,680 active-component members were randomly assigned to
receive the RAND form as a web-based survey. As shown in Figure 2.1, these service
members were randomized into one of three mutually exclusive nonresponse follow-up
groups: phone follow-up (N = 12,000), mail follow-up (N = 12,000), and late web
(N = 367,680). The assignment to these groups was implemented as simple random
samples of active-component DoD members who had been previously randomized
to the short form (which itself is an equal-probability stratified random sample of the
overall sample). However, active-component Coast Guard members (N = 14,167) were
not randomized into either the phone or mail follow-up groups; these Coast Guard
members are included in the late web group.
Active-component sample members who completed the survey during the
RMWS study field period were counted as respondents, and their data were analyzed
to produce the main RMWS survey results for active-component members, regardless
of whether they had been assigned to the phone, mail, or late web samples. For all groups,
approximately 30 percent of the active-component sample responded to the main
survey during its field period.
Figure 2.1
Diagram of RMWS and Nonresponse Follow-Up Studies

[Flow diagram: the sample randomized to the RAND survey form (N = 391,680) was
split by RMWS sampling into mail follow-up (N = 12,000), phone follow-up
(N = 12,000), and late web (N = 367,680) groups. In the web-only RMWS study, the
mail group yielded 3,917 respondents and 8,083 nonrespondents; the phone group,
3,940 respondents and 8,060 nonrespondents; and the late web group, 114,939
respondents and 252,741 nonrespondents. In the nonresponse follow-up studies, the
mail follow-up yielded 994 respondents and 7,089 nonrespondents; the phone
follow-up, 1,656 respondents and 6,404 nonrespondents; and the late web follow-up,
3,908 respondents and 248,833 nonrespondents.]

Among those who had been assigned to the phone and mail follow-up samples,
8,060 and 8,083, respectively, did not participate in the RMWS study, and so became
the samples for the high-intensity nonresponse follow-up studies. An additional
252,741 active-component members who were nonrespondents to the main RMWS
survey and who had not been assigned to the mail or phone contingency samples
became the sample for late web follow-up study. Although no additional efforts were
made to recruit this sample into the follow-up study (beyond those that had already
taken place for the main study), these service members were allowed to take the survey
after the RMWS field period was closed. Their responses were not included in the
main RMWS analytic sample and findings because they did not complete it before the
end of the fielding period, but were instead treated as a third sample of RMWS main
study nonrespondents who were subsequently followed up.
High-Intensity Phone Follow-Up Study: Measures and Procedures

The sample of service members randomized to phone follow-up who were nonrespondents in the main study was sent a prenotification letter approximately ten days after
the close of the main survey’s field period. This letter served several functions: (1) it
alerted recipients to the fact that we would be calling them to request a phone interview for the RMWS; (2) it provided the consent form for the study so that participants
would have a copy for their records and so the interviewer could forgo reading the full
consent form in cases when the respondent had read it in advance; (3) it contained
$4 in cash as a preincentive to improve response rates; and (4) it provided a toll-free
number to call if the member wished to conduct the survey at a time of their convenience rather than waiting for our interviewers to call them. When a prenotification
letter was returned as postal nondeliverable, another letter was mailed to the next best
address available.
In most cases, these letters were sent to the same address used for mail notifications in the main study (see Volume 1). However, we also used recent mailing addresses
from a LexisNexis search and EU Services (which had U.S. Postal Service change of
address information) to attempt to identify the best available mailing address. Similarly, recruitment telephone calls were made to a phone number provided by DMDC
when it was available. When such a number was not available, not in service, or when
interviewers were informed it was a wrong number, subsequent calls were made to
additional numbers. These other phone numbers included some provided by DMDC,
as well as those identified from a LexisNexis search and manual tracing through publicly available records.
Service members in the sample could be called up to nine times during the survey
field period (October 20–November 25, 2014), and one message could be left on the
answering machine at a given number alerting the sampled member to the study.
Approximately 50 percent of all calls went to voicemail or an answering machine.
Members sampled into the phone follow-up who requested removal from further communications received no further contacts. Those who were contacted but gave a “soft
refusal” (for example, they hung up on the interviewer or said they did not have time
to complete the survey at that time) were recontacted if the study team determined that
such a contact was not against the stated wishes of that member. Twenty-five percent
of such cases eventually completed the survey.
The phone survey instrument was implemented as a computer-assisted telephone
interview, using a trained, live interviewer reading questions from a computer. Thus the
instrument could implement a complex skip pattern similar to the web-based survey
used in the main study (see Appendix A). However, to maximize the response rate,
the instrument was simplified substantially from the full web instrument. It included
(1) the five questions designed to remind respondents of the past–12-month time frame
to minimize response telescoping, or the tendency to regard past events as occurring more
recently than they did; (2) the behavioral screening items from the assessment of sexual
harassment; (3) the behavioral screening items from the assessment of gender discrimination; and (4) both the screening items and the follow-up items for the assessment of
sexual assault in the past year. Relative to the full instrument, it excluded (1) the follow-up items designed to assess persistence or severity of each potentially harassing behavior;
(2) the follow-up items designed to assess harm to career from potential gender discrimination; (3) the follow-up items that provide descriptive details about incidents of sexual
assault, sexual harassment, and gender discrimination; and (4) the lifetime assessment
of sexual assault and the general attitudes and beliefs questions. Sampled members were
advised that this shortened survey would take between seven and 12 minutes to complete; it took respondents an average of nine minutes.
High-Intensity Mail Follow-up Study: Measures and Procedures

The full sample of service members randomized to mail follow-up who were nonrespondents in the main RMWS study was sent a mailing by first-class mail approximately ten days after the close of the fielding period for the RMWS survey. This mailing contained several documents: (1) a booklet containing the informed consent and
a self-administered paper version of the survey; (2) $4 in cash as a preincentive to
improve response rates; (3) a postage-paid return envelope for the survey booklet; and
(4) a cover letter that explained the survey, answered frequently asked questions, and
provided directions for how to get any questions answered by phone or on the web.
If this mailing was returned as postal nondeliverable, another letter was mailed to the
next best address available.
In most cases, these letters were sent to the same address used for mail notifications in the main study. However, we also used recent mailing addresses from a LexisNexis search and EU Services (which had U.S. Postal Service change of address information) to attempt to identify the best available mailing address.
A second survey was mailed using Federal Express (or USPS Priority Mail if the
address was a post office box) on October 30, 2014, to 6,911 members who had not
responded to the first mailing. Unlike the first mailing, which was sent via USPS first-class mail, the second mailing involved an attempt to hand-deliver the mailing to the
service member. The content of the second mailing was identical to the initial mailing,
except that it did not include an additional monetary incentive. Surveys returned to
Westat and entered before December 1, 2014, were included in the final data set.
The mail survey was implemented as a teleform instrument, i.e., one in which the
respondent marks the appropriate box so that responses can be machine-read. It is difficult to implement a complex skip pattern in a paper instrument, because paper instruments rely on respondents carefully reading question instructions. Such skips can be
facilitated with the careful design of the questionnaire’s layout and formatting, but we
determined that it would be best to limit the instrument to only two levels of question
skips. This required some subtle changes in the way in which respondents navigated
the sexual assault assessment module relative to both the main web survey and the
phone survey (see Appendix B). In addition, the paper version could not be customized
so that the date that represented one year ago would be exactly one year prior to when
the respondent filled out the survey. Surveys mailed on October 10, 2014, used a date
of “10/1/2013” when indicating the beginning of the period the survey questions concerned; surveys mailed on October 30, 2014, used a date of “11/3/2013” when indicating the beginning of the period the survey questions concerned.
Similar to the phone survey, the paper version of the instrument was substantially
shortened relative to the main web survey to provide only the important information
necessary for assessing key outcomes. The paper survey included (1) the five questions
designed to remind respondents of the past–12-month time frame; (2) the behavioral
screening items from the assessment of sexual harassment; (3) the behavioral screening
items from the assessment of gender discrimination; and (4) both the screening items
and the follow-up items for the assessment of sexual assault in the past year, but with a
slightly simplified skip pattern for the follow-up questions. Relative to the full instrument, the paper survey excluded (1) the follow-up items designed to assess persistence
or severity of each potentially harassing behavior; (2) the follow-up items designed to
assess harm to career from potential gender discrimination; (3) the follow-up items
that provide descriptive details about incidents of sexual assault, sexual harassment,
and gender discrimination; and (4) the lifetime assessment of sexual assault and the
general attitudes and beliefs questions. This shortened survey was advertised to sample
members as taking five minutes.
Late Web Study: Measures and Procedures

In addition to the phone and mail high-intensity recruitment efforts, we also observed
study outcomes in a sample of late responders through the main web portal. These
participants received no additional outreach or incentives beyond the recruitment
messages provided to all members sampled into the main study, and they completed
the same web-based survey instrument as the main study respondents. The only difference
in procedures was that the survey field period was extended to November 25, 2014.
These respondents completed the survey after the official survey close date.


Neither late web respondents nor those in the phone or mail intensive follow-up
studies were included in the analytic samples used to provide our prior published estimates of sexual offenses in the military, including those found in our top-line reports
and our Volume 2 and 3 reports.
Response Rates

The primary purpose of this follow-up study of nonrespondents was to observe key
outcomes among individuals who were nonrespondents for the main study, and thus
were omitted from the main study estimates. A key measure of the success of the study
is the extent to which these procedures succeeded in recruiting the nonrespondents.
This goal of achieving a high response rate is tempered somewhat by the acknowledgement that the sample of individuals subject to these procedures had already demonstrated they were difficult to recruit. They had all failed to respond within the main
study field period after receiving three mail invitations and nine email invitations.
For the purposes of sample accounting and outcome analyses, respondents were
classified by the condition to which they had been randomized, rather than their mode
of survey response. Specifically, a small number of individuals who received recruitment phone calls (N = 66) or were mailed a survey packet (N = 128) subsequently completed the survey on the web using the login credentials that had been emailed during
the main study. These respondents are treated as part of the phone or mail follow-up
groups because they were assigned to those groups and subject to those recruitment
procedures, regardless of the mode of survey administration they ultimately selected.
Similar to the main study, sampled members were counted as respondents if they
completed the sexual assault assessment module (see Volume 1 for details). There were
1,656 respondents in the phone follow-up sample, 994 in the mail follow-up sample,
and 3,908 in the late web sample. This gives response rates of 20.5 percent, 12.3 percent, and 1.5 percent, for the phone, mail, and late web samples, respectively, using
American Association of Public Opinion Research “RR1” definitions for response rates
(American Association of Public Opinion Research, 2011). Thus, the high-intensity
phone follow-up was most effective at recruiting the main study nonrespondents, with
the mail recruitment method only about half as effective. However, both of these high-intensity follow-up methods were an order of magnitude more effective at converting nonrespondents relative to the late web group, which merely lengthened the field
period with no additional recruitment efforts.
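
The reported rates follow directly from the counts above; as a quick check (RR1 here reduces to completes divided by the fielded nonrespondent sample, treating every sampled member as eligible; Python):

# Completes and fielded nonrespondent sample sizes, from the text above.
samples = {
    "phone": (1_656, 8_060),
    "mail": (994, 8_083),
    "late web": (3_908, 252_741),
}
for name, (completes, fielded) in samples.items():
    print(f"{name}: {100 * completes / fielded:.1f}%")
# phone: 20.5%   mail: 12.3%   late web: 1.5%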
Sample Characteristics

All of these response rates are relatively low and raise concerns about the extent to
which these follow-up study respondents should be seen as representative of the nonrespondents to the main study. It is possible that these additional recruitment methods
capture essentially the same types of individuals as those who participated in the main
study and thus would be expected to have the same nonresponse biases. To best identify nonresponse bias in the main study estimates, the changes in the survey methods
(e.g., changing modes, adding incentives, lengthening the field period, using additional contact information, and making additional recruitment attempts) would need to yield
a different pattern of nonresponse. That is, the follow-up studies would include individuals who are different from the main study respondents on those factors associated
with nonresponse in the main study. Ideally, the follow-up studies capture different
respondents, not just more respondents.
As discussed in Chapter Three, several groups were substantially underrepresented among respondents in the main RMWS study. These included men, sailors,
Marines, and junior enlisted members. Table 2.1 presents the characteristics of the participants in the three nonrespondent follow-up studies relative to the respondents in the
main RMWS study. Each estimate represents a group’s proportion within one of the
unweighted follow-up samples divided by that group’s proportion in the unweighted
main RMWS. For example, the proportion of the phone follow-up sample that was
junior enlisted (E1–E4) was 1.90 times the proportion of the main study that was
junior enlisted. Thus, while junior enlisted were substantially underrepresented in the
unweighted RMWS respondent sample (see Chapter Three), the phone follow-up was
better able to collect information on junior enlisted. In contrast, ratios near 1.0 indicate that the follow-up respondents have similar characteristics to the main RMWS
respondents.
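
The ratios in Table 2.1 are simple to compute; a minimal sketch, assuming hypothetical data frames phone_df and main_df with one row per respondent:

import pandas as pd

def representation_ratio(followup: pd.Series, main: pd.Series) -> pd.Series:
    """Share of each group in a follow-up sample divided by its share
    among main-study respondents (the quantity shown in Table 2.1)."""
    return followup.value_counts(normalize=True) / main.value_counts(normalize=True)

# e.g., representation_ratio(phone_df["pay_grade"], main_df["pay_grade"])
# would reproduce the phone column of Table 2.1 (1.90 for E1–E4, and so on).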
The pattern of respondent characteristics is generally similar across the three
nonrespondent follow-up samples. Those specific groups of individuals who were substantially underrepresented among respondents to the main study (particularly junior
enlisted, sailors, and Marines) are better represented in the nonresponse follow-up
studies. These effects are most pronounced for the phone follow-up condition. For
example, it yielded a sample in which the proportion of respondents who were junior
enlisted was nearly twice as high as in the main study sample.
While the response rates in these follow-up studies are lower than ideal, these
procedures did succeed in recruiting different types of service members than the main
study. Whatever processes produced survey nonresponse appear to be meaningfully
different in these follow-up studies relative to the main study. Thus, the follow-up
studies may provide useful assessments of any nonresponse bias that exists in the main
study estimates. However, our conclusions about nonresponse bias will be limited to
the extent that characteristics exist that are (a) associated with nonresponse in both the
main study and the follow-up studies, (b) associated with primary study outcomes, and
(c) independent of the factors included in RMWS nonresponse weighting.
Table 2.1
Characteristics of Respondents in Each Follow-up Sample,
Relative to Main Study Respondents (Ratios)

                         Follow-Up Sample
                      Phone     Mail   Late Web
Sex
  Female               0.85     0.93     0.98
  Male                 1.13     1.06     1.02
Branch of service
  Air Force            0.60     0.61     0.68
  Army                 1.11     1.08     1.09
  Coast Guard            NA       NA     0.60
  Marine Corps         1.59     1.16     1.36
  Navy                 1.31     1.52     1.44
Pay grade
  E1–E4                1.90     1.26     1.31
  E5–E9                0.76     0.91     0.88
  O1–O3                0.72     0.98     0.90
  O4–O6                0.37     0.82     0.94

NOTE: The ratio reflects the proportion of respondents in each
nonrespondent follow-up sample divided by their proportion
among respondents to the main study. Ratios for the late web
sample include Coast Guard members, while they are excluded
from both the numerator and denominator in the phone and mail
columns because Coast Guard members were not randomized into
those conditions (NA = not applicable).

Analysis of Nonresponse Bias

The analysis of the nonresponse follow-up studies was designed to determine whether
the main study estimates of prevalence rates for sexual assault, sexual harassment, and
gender discrimination were biased by survey nonresponse. In an attempt to mitigate
nonresponse biases, those estimates incorporated nonresponse weights that accounted
for the known differences in characteristics across respondents and nonrespondents in
the main study. However, nonresponse bias would still occur whenever there is a difference in the prevalence of those outcomes between the study respondents and nonrespondents, even after conditioning on all of the factors included in the nonresponse
weights. For example, if female Marines who responded to the study had higher rates
of sexual assault than those who did not respond, weighting the respondent sample so
that it matched the full population on the proportion of female Marines would not
eliminate the nonresponse bias.
Therefore, measuring nonresponse bias requires determining whether respondents
and observed nonrespondents with the same characteristics (i.e., “matched”) have the
same prevalence of these outcomes; specifically whether they have the same prevalence
when they are matched to one another in the same way that the characteristics of the
respondents were matched to the overall population by the main study nonresponse
weights.
In our analysis of these nonresponse follow-up samples, we use the same nonresponse model that was used to derive weights in the main study,1 that is, the model
that was designed to adjust the main study respondents to match the characteristics of
the full population. However, in the current analyses, we used that weighting model to
generate three sets of weights that matched the main study respondents to the respondents in each of the three nonresponse studies separately: one set of weights matched
the characteristics of the main study respondents to the characteristics of the nonrespondents observed in the late web condition; one set matched them to the nonrespondents observed in the phone follow-up condition; and one set matched them to the
nonrespondents observed in the mail follow-up condition.
An estimate of nonresponse bias can then be made by comparing the prevalence
of each outcome between the observed nonrespondents from a follow-up study and a
matched group of respondents in the main study. If individuals in the nonresponse
follow-up condition have higher prevalence than matched respondents from the main
study, it suggests that the main study prevalence estimates underestimate the true prevalence. If individuals in the nonresponse follow-up condition have lower prevalence
than matched respondents from the main study, it suggests that the main study prevalence estimates overestimate the true prevalence.
The comparison of the prevalence of these outcomes between the observed nonrespondents in the follow-up studies and the matched respondents from the main study
was done within a Poisson regression model.2 This model used a log link function, so
that exponentiated model coefficients expressed incidence or prevalence ratios. These
ratios compare the mean/proportion observed among a follow-up group divided by
the mean/proportion observed among the matched respondents from the main study.
For example, an incidence ratio of 2.0 on the sexual harassment measure would indicate that respondents in the nonresponse follow-up sample said “yes” to twice as many
sexual harassment screening questions, on average, as similar respondents in the main
study.3 Such a result would imply that the main study underestimated the true prevalence of sexual harassment due to a nonresponse bias in which victims of sexual
harassment were less likely to be respondents in the main survey in a manner that was
not corrected by the study nonresponse weights.

1 The nonresponse model used to derive RMWS weights is described in Volume 1. The models used in the
analyses of these follow-up studies of nonresponse used the same type of prediction model, the same predictors,
and the same statistical criteria to identify the set of weights that provided optimal balance. As with the main
study weights, the follow-up study weights were applied only to the main study respondents.

2 The models used robust standard errors (i.e., generalized estimating equations), rather than inferring statistical
significance directly from a Poisson distribution. All models were estimated within SAS PROC GENMOD.

3 As mentioned earlier, the follow-up instruments had abbreviated measures that did not include the follow-up
questions that allow a dichotomous measure of sexual harassment in the past year. Thus, these analyses are based
on counts of possible sexual harassment experiences.
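
As footnote 2 notes, these models were estimated in SAS PROC GENMOD. A rough Python sketch of the same kind of analysis, with hypothetical file and column names (the actual analysis also applied the matching weights described above), might look like this:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per person: `followup` = 1 for observed nonrespondents from a
# follow-up study, 0 for matched main-study respondents; `harass_count` =
# number of sexual harassment screeners endorsed (0-13).
df = pd.read_csv("followup_analysis.csv")

# Poisson model with a log link; robust (sandwich/GEE) standard errors,
# treating each person as an independent cluster.
model = smf.gee(
    "harass_count ~ followup",
    groups=df.index,
    data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Independence(),
)
result = model.fit()

# Exponentiated coefficients are incidence/prevalence ratios; for example,
# 2.0 would mean follow-up respondents endorsed twice as many screeners.
print(np.exp(result.params["followup"]))
print(np.exp(result.conf_int().loc["followup"]))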
As a practical matter, the inference of nonresponse bias is complicated by the
fact that we have three studies of nonrespondents. Each used different methods, and
produced substantially different response rates. As a result, the estimates of nonresponse bias may differ across these studies. The phone follow-up produced the highest
response rate and yielded a sample with the highest proportions of groups that were
underrepresented in the main study. That sample allowed for slightly stronger inferences than the mail follow-up and considerably stronger than the late web follow-up.
On the other hand, the phone follow-up used a live interviewer, unlike the main
study or the other two follow-up studies. Respondents in the phone follow-up were
asked to reveal to another person the details of any sexual assault or harassment experiences. There is a long line of research in the field of survey methodology demonstrating
that live-interviewer surveys can result in lower reports of embarrassing or stigmatizing experiences relative to the same survey administered without a live interviewer
(e.g., Bradburn, 1983; Tourangeau and Yan, 2007). There is also some evidence that
surveys using live interviewers produce less-accurate responses for sensitive questions
(e.g., Kreuter, Presser, and Tourangeau, 2008). These mode effects are regularly found
in the measurement of experiences involving alcohol use or sexual behavior (Acree
et al., 1999; Aquilino, 1994), and some have found lower rates of reported sexual victimization (Turner et al., 1998; Parks, Pardi, and Bradizza, 2006) and intimate partner
violence (Hussain et al., 2013) when using live interviewers. In contrast, studies that
compare responses to sensitive questions between web and mail survey modes generally have not found substantial differences (e.g., McCabe et al., 2006). Thus, although
the phone follow-up study yielded the best sample for studying nonrespondents, that
mode of survey administration may underestimate the true rate of sexual assault, sexual
harassment, and gender discrimination. Because of this possible reporting bias, if we
found that the phone follow-up produced higher prevalence estimates than matched
respondents from the main study, it would provide strong evidence of nonresponse bias
in the main study estimates. However, a finding that the phone follow-up produced
lower prevalence cannot be easily interpreted because that effect could result from
either a nonresponse bias in the main study or a live-interviewer reporting bias in the
phone follow-up.
One final analytic complication is that the instruments are not identical across the
three follow-up studies. Both the mail and phone follow-up instruments (as well as the
short form instrument on the web) omit the follow-up questions for sexual harassment
and gender discrimination. Those instruments cannot create the primary measures of
sexual harassment and gender discrimination reported in the main project reports.
Similarly, the mail follow-up required simplification of the skip patterns within the
sexual assault assessment module, modifications that might affect the probability that
someone who screens into the module is classified as having experienced sexual assault.
To address these differences in the surveys, the primary analyses focus on
those measures that were administered identically across all three instruments. These
outcomes differ from the measures reported in the main project reports but are necessarily highly associated with those outcomes. For sexual harassment, we analyzed the
number of sexual harassment screeners that were indicated (range: 0–13); these are
behaviorally specific experiences in which the sexual behavior of a coworker made the respondent uncomfortable, angry, or upset. For gender discrimination, we used the number
of discrimination screeners indicated (range: 0–2). For sexual assault, we used a dichotomous measure indicating whether any of the six behaviorally specific unwanted experiences occurred, but without the follow-up questions that assessed offender intent or
method of coercion.
Results
Table 2.2 presents the risk ratios comparing the observed nonrespondents from the
three nonresponse follow-up studies with their matched respondents from the main
study. There were substantial differences across the three follow-up studies designed
to assess prevalence among nonrespondents to the main study. The phone follow-up
generally yielded risk ratios less than one, indicating that participants in the phone
follow-up had lower rates of study outcomes relative to similar respondents to the
main study, with ratios significantly less than one for sexual harassment and gender
discrimination.

Table 2.2
Risk of Sexual Assault, Sexual Harassment, and Gender Discrimination
for Nonrespondent Follow-Up Studies Relative to Their Matched
Respondents from the Main Study

Follow-Up Study           Overall Risk Ratio    Women Risk Ratio     Men Risk Ratio
Phone Follow-Up
  Sexual Assault          0.81 (0.61–1.08)      0.83 (0.59–1.16)     0.78 (0.46–1.31)
  Sexual Harassment       0.73*** (0.62–0.86)   0.78* (0.64–0.96)    0.64*** (0.49–0.83)
  Gender Discrimination   0.68*** (0.59–0.80)   0.70*** (0.60–0.82)  0.59** (0.40–0.88)
Mail Follow-Up
  Sexual Assault          1.05 (0.76–1.47)      1.26 (0.88–1.80)     0.59 (0.26–1.32)
  Sexual Harassment       1.05 (0.88–1.25)      1.15 (0.94–1.41)     0.84 (0.61–1.15)
  Gender Discrimination   1.27*** (1.10–1.45)   1.25** (1.09–1.44)   1.33 (0.95–1.88)
Late Web Follow-Up
  Sexual Assault          1.19* (1.01–1.41)     1.20 (0.99–1.45)     1.31* (1.06–1.60)
  Sexual Harassment       1.20*** (1.10–1.31)   1.19*** (1.08–1.33)  1.21* (1.03–1.43)
  Gender Discrimination   1.14*** (1.06–1.23)   1.13** (1.05–1.22)   1.22 (0.99–1.49)

NOTE: These risk ratios represent the prevalence/incidence rate for an outcome
among a given nonresponse follow-up study divided by the rate among the
matched respondents from the main study. 95-percent confidence intervals are
included in parentheses. * p < 0.05; ** p < 0.01; *** p < 0.001, testing the null
hypothesis that the risk ratio = 1.

In contrast, both the mail follow-up and late web studies yielded risk ratios
greater than one. These ratios were significantly greater than one on all outcomes
within the late web study, where we had the most statistical power to find such
effects.4 Within the mail follow-up study, only gender discrimination yielded a risk
ratio that was significantly greater than one.

Thus, the three follow-up studies imply different conclusions about possible
nonresponse bias in the main study prevalence estimates presented in earlier volumes.
The late web and mail nonresponse follow-up studies imply that main study findings
slightly underestimated the prevalence of these outcomes among nonrespondents to
the main study. The estimates of this bias range from 5 percent to 27 percent,
depending on the outcome. These results suggest that the main study's overall
population estimates may be slightly too low.5 On the other hand, the phone
follow-up study implies that main study findings overestimated the prevalence of
these outcomes among nonrespondents to the main study, an estimated bias of
approximately the same magnitude but in the opposite direction.

4 As mentioned earlier, the outcomes presented in Table 2.2 differ from those used as primary outcomes in the
main study because of differences in the instruments across survey modes. However, the primary sexual assault
measure in the main study can be computed for both the phone follow-up and late web follow-up studies.
Analyzing that sexual assault measure yielded nearly equivalent risk ratios to those presented in Table 2.2: 0.90
and 1.20 for the overall phone and late web samples, respectively. Similarly, the dichotomous primary study
measures for sexual harassment and gender discrimination can be estimated for the late web follow-up. Analyzing
those measures within the overall late web follow-up study yielded similar risk ratios to those presented in
Table 2.2: 1.14 and 1.10 for sexual harassment and gender discrimination, respectively. None of the inferential
results are different if logistic regression is substituted for Poisson regression for measures that are dichotomous.

5 These risk ratios do not correspond exactly to the magnitude of the bias in the overall population estimates.
First, the bias in the population estimates depends on both the size of the prevalence difference between
nonrespondents and respondents and on the response rate. For example, if 30 percent of the sample responded to
the survey and the study underestimated prevalence among nonrespondents by a factor of 1.20, it would result in
a total bias of 1.14 in the population estimates (e.g., the study would yield a 7-percent prevalence when the actual
population prevalence was 8 percent). Second, the three follow-up samples may not be representative of the full
population of nonrespondents. To the extent that the size of the risk ratio comparing nonrespondents and
respondents varies meaningfully across subgroups in the population, the estimates from these follow-up studies
may not precisely correspond to the risk ratio in the full population. The goal of these follow-up studies is to
identify the direction and approximate magnitude of the remaining nonresponse bias; they are not well suited for
estimating post hoc corrections to the population prevalence estimates contained in Volumes 2 and 3.
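
The arithmetic in footnote 5 can be made concrete with a short sketch (using the footnote's hypothetical values):

def population_bias(response_rate, nonrespondent_ratio):
    """Ratio of the true population prevalence to the survey estimate when
    nonrespondent prevalence is `nonrespondent_ratio` times respondent
    prevalence and respondents make up `response_rate` of the population."""
    return response_rate + (1 - response_rate) * nonrespondent_ratio

# A 30-percent response rate and nonrespondent prevalence 1.20 times that
# of respondents imply a total bias of about 1.14 (e.g., a 7-percent survey
# estimate when the true population prevalence is about 8 percent).
print(population_bias(0.30, 1.20))  # ~1.14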
Discussion and Conclusions
Overall, these findings do not show a consistent pattern of nonresponse bias. However,
the interpretation of these results is complicated by strong evidence of a survey mode
effect, as well as differences across the three follow-up studies in their ability to recruit
a representative sample of nonrespondents. In short, the survey methodology literature
supports a strong hypothesis that phone interviews result in underreporting of sexual
assault, sexual harassment, and gender discrimination. Because the study found lower
prevalence in the phone follow-up group relative to matched respondents from the
main study, it is not possible to determine the extent to which those low risk ratios were
due to nonresponse bias in the main study versus a response bias in the phone survey.
However, the other two follow-up studies were less successful in recruiting study nonrespondents, so estimates based on those studies are harder to generalize to the broader
set of study nonrespondents.
The mail follow-up study may provide the best overall estimates of nonresponse
bias, because it yielded a response rate that was an order of magnitude better than
the late web study but is not hypothesized to suffer from the types of live-interviewer
response biases found in the phone survey. The mail follow-up study found descriptively small and statistically nonsignificant evidence of nonresponse bias when looking
at sexual assault and sexual harassment outcomes. The confidence intervals suggest
that the nonresponse bias in the main study could be in either direction for those outcomes. However, the mail follow-up study also suggests that the main study estimates
of gender discrimination may have resulted in a slight underestimate of the true prevalence of gender discrimination in the population.
Although these follow-up studies of nonrespondents did not identify a consistent
pattern of nonresponse bias, they do provide other useful information about nonresponse in the main study, as well as about the feasibility of improving response rates
through modifications of the survey methods. The phone follow-up was successful at
recruiting military service members who failed to respond to email invitations; the
overall response rate achieved by combining the main web survey and the phone follow-up study was 45 percent. For studies that ask less-sensitive questions than the
WGRA/RMWS, this mixed-mode design has the potential to substantially reduce
nonresponse, although we would not suggest such an approach for topics that are likely
to be underreported in live-interviewer surveys. Alternatively, interactive voice recognition phone interviews may produce less response bias than live-interviewer surveys
(Kreuter, Presser, and Tourangeau, 2008) and may be a cost-effective means of increasing the overall response rate relative to a web-only administration.
The fact that so many junior enlisted personnel responded to phone calls (even
after RAND had sent out nine email invitations) may suggest that phone outreach and
recruitment is an effective way to reach hard-to-recruit subpopulations. That method
of outreach may be effective even if survey administration is conducted on the web. It
is plausible that some of the low-responding groups do not regularly use email communications as part of their military duties, which presents a problem for web surveys that
recruit primarily through email sent to work addresses. It may be possible to motivate
respondents to complete a web survey using automated phone messages or text messages to better capture service members who do not normally use email as part of their
jobs. To the extent that the survey is well adapted for completion on a smartphone, it
may also be more convenient for many service members to receive links to the web
survey via text message than via email.
Another observation from the follow-up studies of nonresponse is that a longer
field period may improve the sample. The 2014 RMWS was conducted on a tight timeline, with a field period substantially shorter than those used in prior WGRA studies.
However, when the web site was left open for an additional eight weeks after the
announced closing date, and without any additional recruitment messages, responses
continued to accumulate. Had we included these late respondents in the main study sample,
they would have accounted for a 1-percent improvement in main study response rates.
Perhaps more importantly, the service members who completed during this period
were often from groups (junior enlisted, Marine Corps, and Navy) who were substantially underrepresented in the unweighted sample. This suggests that a longer field
period might also be useful for improving the representativeness of the sample.

CHAPTER THREE

The Efficacy of Sampling Weights for Correcting
Nonresponse Bias
Bonnie Ghosh-Dastidar, Terry L. Schell,
Andrew R. Morral, and Marc N. Elliott

Continuing the investigation of nonresponse bias begun in Chapter Two, in this chapter we explore predictors of nonresponse and of key survey outcomes, and the amount of nonresponse bias that sample weights remove. Survey nonresponse reduces sample
size and, thereby, the precision of estimates. More critically, it may introduce bias in
estimates of survey outcomes when the outcomes of those who respond differ systematically from those who do not respond. While even a low rate of nonresponse could
introduce bias into survey estimates (Groves, 2006), concerns about nonresponse bias
grow as response rates decline. Unfortunately, for the past decade or more, response
rates have declined dramatically for military and civilian surveys (Falk, 2012; Kohut
et al., 2012; National Research Council, 2014).
One standard approach to address the threat of nonresponse bias is sample weighting that ensures that survey respondents are representative of the underlying population, at least in terms of characteristics that are known about the population (Heeringa, West, and Berglund, 2010; Little and Rubin, 2002; Schafer and Graham, 2002).
For instance, if women participate in the survey at higher rates than men, sample or
nonresponse weights can be used to ensure that the weighted proportion of women
respondents matches the correct proportion of women in the population. Similarly,
nonresponse weights can also be used to ensure other characteristics of the weighted
respondent sample match the population—such as age distribution, marital status, and
education level. By ensuring representativeness on all these characteristics, the sample
weights ensure that survey estimates of, say, income are not biased by any systematic
over- or underrepresentation among respondents of one gender or higher education
levels, factors known to be associated with income.
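To make the weighting mechanics concrete, the following is a minimal sketch of single-factor nonresponse weighting. The population counts are this chapter's full active-component sample; the respondent counts reuse, purely as illustrative stand-ins, the outcome-analysis sample sizes reported later in the chapter. The actual RMWS weights described below are built from far richer models than this one-factor example.

    import pandas as pd

    # Known population counts (full sample) and observed respondent counts.
    population = pd.Series({"women": 197_491, "men": 280_022})
    respondents = pd.Series({"women": 67_187, "men": 78_113})

    # Each respondent's weight is the inverse of the response rate in their
    # cell, so each respondent "stands in" for the nonrespondents like them.
    weights = population / respondents
    print(weights.round(2))  # women ~2.94, men ~3.58

    # The weighted respondent pool now matches the population's gender mix:
    weighted = weights * respondents            # equals the population counts
    print(round(weighted["women"] / weighted.sum(), 3))  # 0.414 = 197,491/477,513

Because men responded at a lower rate in this sketch, each responding man receives a larger weight, which is exactly the correction described in the paragraph above.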
In contrast to a majority of surveys in civilian settings, military surveys benefit
from an extraordinary wealth of information about the population of service members.
This information, in theory, could be used in the development of nonresponse weights,
potentially reducing nonresponse bias in survey estimates. However, there is a cost
to adding factors and complexity to nonresponse weight models. Specifically, adding
factors that are only weakly associated with either nonresponse or the primary survey
outcomes usually has the effect of driving up variance in survey estimates. Although
the estimates may be less biased, on average, they may become unacceptably imprecise.

Hence, the efficacy of weighting adjustments may be conceptualized as a trade-off in
bias and variance of survey estimates.
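That trade-off is the standard mean-squared-error decomposition of an estimator, written here for a survey estimate of a population quantity:

    \[
    \mathrm{MSE}(\hat{\theta}) \;=\; \mathrm{E}\!\left[(\hat{\theta} - \theta)^2\right]
    \;=\; \mathrm{Bias}(\hat{\theta})^{2} \;+\; \mathrm{Var}(\hat{\theta}).
    \]

Richer weighting models aim to shrink the bias term, while highly variable weights inflate the variance term; a weighting scheme is worthwhile only if the first effect outweighs the second.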
For the RMWS study, we developed a new approach to nonresponse weighting designed to permit inclusion of many more factors in our nonresponse weighting
models than would be possible using traditional methods, without driving up variance to unacceptable levels. In addition to these “outcome-optimized RMWS weights”
(hereafter called “RMWS weights”), we created a second set of weights using the same
weighting approach as has been used in earlier administrations of the WGRA. We
constructed these “WGRA weights” so that we could report results on questions from
the 2014 version of the prior form using weights comparable to those used for estimates
from past WGRAs. Generating both sets of weights also provides us an opportunity
to compare the impact on survey estimates of nonresponse weighting using the new
RMWS weighting methods with the more familiar weighting methods used in earlier
WGRA studies.
If survey respondents and nonrespondents differ in terms of some characteristic,
this characteristic is a potential source of nonresponse bias only if it is also associated
with an outcome of interest. For instance, if respondents and nonrespondents have different proportions of women, the respondents will still offer an unbiased estimate of,
say, population rates of sexual assault if men and women experience sexual assault at
equal rates. If men and women do not experience assaults at equal rates, however, the
gender differences between respondents and nonrespondents pose a risk of introducing nonresponse bias in sexual assault estimates for the population, unless we explicitly
adjust for gender differences between the sample of respondents and the underlying
population.
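A toy calculation, with invented response shares and outcome rates, shows why the association with the outcome is what matters:

    # Suppose women are 20% of the population but 30% of respondents.
    p_pop, p_resp = 0.20, 0.30

    def prevalence(p_women, rate_women, rate_men):
        return p_women * rate_women + (1 - p_women) * rate_men

    # Case 1: men and women have the same outcome rate -> no bias.
    print(prevalence(p_resp, 0.05, 0.05), prevalence(p_pop, 0.05, 0.05))  # 0.05 0.05

    # Case 2: rates differ (5% women, 1% men) -> the same overrepresentation
    # of women now biases the unweighted estimate upward: 0.022 vs. 0.018.
    print(prevalence(p_resp, 0.05, 0.01), prevalence(p_pop, 0.05, 0.01))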
In this chapter, we examine a wide range of candidate factors available to us
through the administrative data on service members maintained by DMDC. Specifically, we conducted analyses on the active-component population to identify factors
associated with both nonresponse and primary outcome variables, and therefore factors that we would consider candidates for our nonresponse models. After constructing
weights that included all of these factors in an efficient way, we examined the effect
of using these weights in survey estimation on variance inflation (design effect), bias
reduction, and mean squared error (MSE) or accuracy.
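For the variance side of that assessment, one standard approximation is Kish's design effect for unequal weighting, in which the penalty grows with the coefficient of variation (cv) of the weights:

    \[
    \mathrm{deff} \;\approx\; 1 + \mathrm{cv}^{2}(w)
    \;=\; \frac{n \sum_{i} w_i^{2}}{\bigl(\sum_{i} w_i\bigr)^{2}},
    \qquad
    n_{\mathrm{eff}} \;=\; \frac{n}{\mathrm{deff}}.
    \]

A weighted sample carries roughly the information of an unweighted sample of size n_eff, which is the sense in which weakly predictive weighting factors "drive up variance" without buying much bias reduction.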
Participant Characteristics Associated with Survey Nonresponse
We identified four categories of characteristics that were known to be predictors of
survey response in the DMDC population or that were hypothesized as predictors of
nonresponse in the RMWS: (1) demographic, (2) military career, (3) military environment, and (4) survey fieldwork factors. This analysis includes the full sample of 197,491
women and 280,022 men because all characteristics, including the survey fieldwork indicators, are available for both respondents and nonrespondents (see Chapter Two).
Table 3.1 provides a complete list of variables we included in the models used to construct the RMWS weights.
Table 3.1
Predictors in Outcome-Optimized RMWS Weights

Demographic Factors
  Gender (male/female)
  Date of birth
  Race/ethnicity
  Marital status
  Total number of dependents (spouse, children, or others)
  Education level
  Armed Forces Qualification Test (AFQT) score

Military Career Factors
  Branch of service
  Pay grade (20 categories)
  Days of active-duty service, past year (reserves only)
  Cumulative months of active federal military service
  Projected end date for current term
  Date of entry into military service
  Separated or retired after sampling (Y/N)
  Months deployed since 9/11/2001
  Months deployed since 7/1/2013
  DoD occupational group (20 categories)
  Duty unit location (CONUS/OCONUS)

Military Environment Factors
  Percentage male within members’ specific occupation^a
  Number of people within members’ specific occupation^a
  Percentage male at military installation^b
  Number of people at military installation^b
  Percentage male in military unit^c
  Number of people in military unit^c

Survey Fieldwork Factors
  Change of address entered in DMDC records after sampling (Y/N)
  Change of station after sampling (Y/N)
  Change of station, past year (Y/N)
  No mailing address at time of sampling (Y/N)
  No email address at time of sampling (Y/N)
  First letter returned as postal nondeliverable (Y/N)
  Email sent by Marines (Y/N)^d
  Percentage of sent emails that bounced back

NOTES: Categories containing fewer than 40 cases among survey respondents were combined.
CONUS = continental United States; OCONUS = outside the continental United States.
^a Derived from 302 DoD occupational categories.
^b Derived as two separate variables for each member's assigned installation (N = 3,031) and their duty installation (N = 3,147).
^c Derived as two separate variables for each member's assigned unit (N = 24,496) and their duty unit (N = 24,517).
^d Email invitations were sent directly by the Marine Corps for those Marines who did not have a valid email address in the DMDC-provided contact information.

We report risk ratios (RRs) as a measure of the association of these factors with
nonresponse in the RMWS, an effect size metric that we have used across all volumes
of this study. An RR is the probability of survey response in the designated category
(for instance, Hispanics) divided by the probability of response in the designated
comparison category (for instance, whites). To estimate risk ratios, we exponentiated
the coefficients from a regression model that used a log link function, specifically a
Poisson regression with robust error variance (Zou, 2004). The dependent variable was
a binary indicator of response status for the study, where 0 indicates nonresponse
and 1 indicates response. In a second step, each factor was examined in a similar
regression model that also included main (or marginal) effects of three covariates
included in previous WGRA nonresponse models: branch of service, pay grade, and
race/ethnicity (we stratified on gender; thus, we do not explicitly control for gender
in the models). These models allow us to assess whether the candidate factors were
associated with nonresponse over and above what the standard covariates explain. We
refer to the risk ratios produced by models that include these covariates as “adjusted
risk ratios.” Here, we present just these adjusted models stratified by gender
(Table 3.2 for women and Table 3.3 for men), and discuss only those adjusted risk
ratios that were significant at the p < 0.01 level. Complete results, including
unadjusted risk ratios, are found in Appendix C, Tables C.1 and C.2.
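As a sketch of this estimation approach, the following fits a modified Poisson regression (a log-link Poisson GLM with robust standard errors, per Zou, 2004) on simulated data. The variable names and data are hypothetical, since the RMWS microdata are not reproduced here; the per-standard-deviation scaling mirrors how continuous predictors are reported in Tables 3.2 and 3.3.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 5_000

    # Simulated stand-ins for administrative predictors and response status.
    df = pd.DataFrame({
        "responded": rng.binomial(1, 0.30, n),  # 1 = responded, 0 = nonresponse
        "hispanic": rng.binomial(1, 0.13, n),   # indicator vs. reference category
        "age": rng.normal(29.0, 8.0, n),
    })
    # Standardize continuous predictors so exp(coef) is the RR per one-SD change.
    df["age_sd"] = (df["age"] - df["age"].mean()) / df["age"].std()

    # Log-link Poisson on a binary outcome, with robust ("sandwich") errors.
    fit = smf.glm("responded ~ hispanic + age_sd", data=df,
                  family=sm.families.Poisson()).fit(cov_type="HC0")

    print(np.exp(fit.params))  # exponentiated coefficients = risk ratios
    print(fit.pvalues)         # robust p-values, as in the tables' P-Value column

Adding branch of service, pay grade, and race/ethnicity to the formula would produce the "adjusted" risk ratios described above.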
Characteristics Associated with Nonresponse Among Women

Table 3.2
Characteristics Associated with Nonresponse Among Women

                                             Sample Size  Full Sample  Respondent  Adjusted      P-Value   P-Value from
Variable                                     (197,491)    Mean         Mean        Risk Ratio^a            Joint Test
Demographics
  Age in years as of August 1, 2014^b                     28.6         30.8        1.16          <0.0001
  Race/ethnicity                                                                                           <0.0001
    Non-Hispanic white (ref)                  96,730      49.0         53.5
    Non-Hispanic black                        53,570      27.1         25.2        0.91          <0.0001
    Hispanic                                  25,187      12.8         10.9        0.95          <0.0001
    Asian                                      8,887       4.5          4.8        1.01           0.3998
    Other                                     13,117       6.6          5.6        0.95          <0.0001
  Marital status                                                                                           <0.0001
    Married (ref)                             90,723      45.9         51.8
    Never married                             88,135      44.6         37.3        0.93          <0.0001
    Divorced/separated/other                  18,633       9.4         10.9        0.92          <0.0001
  Number of dependents                                     0.9          1.0        1.01          <0.0001
  Education level                                                                                          <0.0001
    High school or less (ref)                114,516      58.0         44.9
    Some college                              32,169      16.3         19.2        1.11          <0.0001
    Bachelor's degree                         30,823      15.6         20.2        1.25          <0.0001
    Graduate degree                           19,984      10.1         15.6        1.27          <0.0001
Military Career
  Service branch                                                                                           <0.0001
    Air Force (ref)                           59,324      30.0         40.1
    Army                                      69,445      35.2         34.2        0.76          <0.0001
    Navy                                      54,946      27.8         20.3        0.61          <0.0001
    Marine Corps                              13,776       7.0          5.4        0.68          <0.0001
  Pay grade                                                                                                <0.0001
    E1–E3 (ref)                               46,634      23.6         14.9
    E4                                        40,711      20.6         16.1        1.19          <0.0001
    E5–E6                                     55,798      28.3         29.7        1.55          <0.0001
    E7–E9                                     15,853       8.0         11.8        2.14          <0.0001
    W1–W5                                      1,699       0.9          1.3        2.44          <0.0001
    O1–O3                                     24,755      12.5         16.4        1.91          <0.0001
    O4–O6                                     12,041       6.1          9.7        2.29          <0.0001
  AFQT percentile (enlisted only)^b                       60.1         62.0        1.12          <0.0001
  Years of active military service^b                       7.1          8.7        1.08          <0.0001
  Deployment status                                                                                        <0.0001
    Never deployed (ref)                      97,712      49.5         44.3
    Deployed before 8/1/2013                  81,197      41.1         48.3        1.00           0.5059
    Deployed after 8/1/2013                   18,582       9.4          7.5        0.81          <0.0001
  Months deployed since 9/11/2001^b                       11.6         12.5        1.05          <0.0001
  Months deployed since 7/1/2013^b                         2.8          3.0        1.10          <0.0001
  Separated/retired                            8,980       4.5          0.6        0.13          <0.0001
  DoD occupational area                                                                                    <0.0001
    Infantry, guncrews, and seamanship
      specialists                              8,554       4.3          3.4        1.00           0.9038
    Electronic equipment repairers            11,251       5.7          5.0        1.20          <0.0001
    Communications and intelligence
      specialists                             16,689       8.5          8.0        1.15          <0.0001
    Health care specialists                   24,582      12.4         14.0        1.34          <0.0001
    Other technical and allied specialists     4,607       2.3          2.5        1.25          <0.0001
    Functional support and administration     40,621      20.6         21.9        1.26          <0.0001
    Electrical/mechanical equipment
      repairers (ref)                         20,983      10.6          7.2
    Craftsworkers                              4,040       2.0          1.4        0.96           0.2282
    Service and supply handlers               21,955      11.1          7.8        0.92          <0.0001
    Nonoccupational                            5,714       2.9          1.5        0.83          <0.0001
    Tactical operations officers               4,818       2.4          3.2        2.32          <0.0001
    Intelligence officers                      2,833       1.4          2.0        2.36          <0.0001
    Engineering and maintenance officers       3,441       1.7          2.6        2.55          <0.0001
    Scientists and professionals               2,366       1.2          2.0        2.77          <0.0001
    Health care officers                      15,293       7.7         11.0        2.48          <0.0001
    Administrators                             4,263       2.2          3.3        2.64          <0.0001
    Supply, procurement, and allied officers   3,825       1.9          2.7        2.45          <0.0001
    Other officers (20, 21, 29)                1,656       0.8          0.7        1.73          <0.0001
  Unit location
    Continental United States (ref)          163,301      82.7         81.3
    Outside the continental United States     34,190      17.3         18.7        1.07          <0.0001
Military Environment
  Percentage male in occupation group^b                   74.8         72.8        0.96          <0.0001
  Size^c of occupation group^b                        31,083.4     28,031.0        0.98          <0.0001
  Percentage male in unit^b                               77.3         76.5        1.00           0.1151
  Size^c of unit^b                                       404.9        281.8        0.91          <0.0001
  Percentage male in installation (zip code)^b            82.2         81.9        0.99          <0.0001
  Size^c of installation (zip code)^b                  9,864.3      9,051.6        0.95          <0.0001
Fieldwork Indicators
  Change in assigned unit zip since 8/1/2013  55,563      28.1         24.6        1.02          <0.0001
  Change in assigned unit zip since 4/1/2014  39,243      19.9         14.6        0.72          <0.0001
  Change of mailing address since 4/1/2014    64,350      32.6         27.1        0.84          <0.0001
  No valid mailing address                     3,858       2.0          1.4        0.84          <0.0001
  Mailing 1 is postal nondeliverable          29,696      15.0          8.9        0.71          <0.0001
  No valid email address                       8,180       4.1          0.8        0.23          <0.0001
  Marine Corps sent email                        939       0.5          0.2        0.57          <0.0001
  Percentage of emails bounced^b                           8.6          1.0        0.81          <0.0001

NOTE: P-values from individual tests of significance are shown in the P-Value column; p-values for joint tests come from a chi-square score test and are shown in the last column. Variables marked "ref" are the reference categories.
^a The adjusted risk ratio comes from a model that includes race/ethnicity (indicated levels), service branch, and pay grade.
^b Indicates variables entered as continuous, for which the risk ratio corresponds to a one standard deviation change in the variable (standard deviations are listed in Appendix C).
^c Size measured by number of people.

Demographics. The joint test for race/ethnicity was significant. Blacks and Hispanics
had adjusted risk ratios indicating they were 9 percent and 5 percent less likely to
respond, respectively, compared with whites. Those with a marital status of “divorced,
separated, or other” or “never married” were 7–8 percent less likely to respond relative
to those “married.” An increase in education resulted in an increase in probability of
response: Those with a graduate degree were 27 percent more likely to respond than
those with a high school education or less. Also, a one standard deviation (eight-year)
increase in age was associated with a 16-percent increase in response rate, while each
additional dependent resulted only in a 1-percent increase in probability of response.
Military career. Those in the Army, Marine Corps, and Navy had a lower probability of response of 24 percent, 32 percent, and 39 percent, respectively, compared
with Air Force service members. Those at pay grades of E4 or E5–E6 had a 19 percent
and 55 percent higher response rate, respectively, compared with service members at
the E1–E3 level. Service members at other pay grades (E7–E9, W1–W5, O1–O3 and
O4–O6) had substantially higher response rates (with relative risk of 2 or greater)
compared with the E1–E3 reference category. Those deployed after August 2013 were
19 percent less likely to respond, relative to those never deployed. Greater cumulative deployment time was associated with higher response rates: an additional 11 months of deployment since September 11, 2001, was associated with a 5-percent increase in response rate, and an additional three months of deployment since July 1, 2013, with a 10-percent increase.
There was a large decrease (87 percent) in response rate for those service members who had separated or retired after the sample was drawn (which is not surprising due to the difficulty of contacting those who have left the services). A one standard deviation (18-percent) increase in AFQT scores was associated with a 12-percent increase
in response rate, while an additional seven years of active federal military service was
associated with an 8-percent increase in response rate. Service members posted outside
the continental United States were 7 percent more likely to respond, compared with
those within the continental United States. Occupational group was a significant predictor of survey response, as indicated by the significant joint test. A majority of the
18 occupational groups we analyzed had significantly higher response rates than the
reference group (electrical/mechanical equipment repairers).
Military environment. An additional 15 percentage points in the percentage of
males in a service member’s occupation was associated with a 4-percent reduction in
response rate, while an additional 6.4 percent males in one’s installation was associated
with a 1-percent reduction in response rate. An increase in size (measured by number
of service members) reduced propensity to respond—an additional 32,000 persons in
the occupation, an additional 500 persons in the unit, or an additional 10,000 persons
in the installation was associated with 2-percent, 9-percent, and 5-percent reductions
in response rates, respectively.
Survey fieldwork indicators. A change in station or mailing address after sampling (both indicators of a move just prior to survey fielding) resulted in 28-percent
and 16-percent decreases in response rate, respectively. However, a change in station
in the past year was associated with a slight (2 percent) increase in response rate. Also,
the lack of a valid mailing address or an incorrect mailing address in DMDC records
(resulting in the mailing being returned as postal nondeliverable) resulted in 16-percent and 29-percent decreases in response rate, respectively. Service members without a valid email address were 77 percent less likely to respond.
Service members who received an email from the Marine Corps (an indicator that
their email address was missing from DMDC records, but that the Marine Corps was
able to send them a survey invitation on RAND’s behalf) had a 43-percent lower likelihood of responding. Finally, with each additional 10 percent of survey notification
emails not delivered (another indicator of the lack of a working email address), we saw
a 19-percent reduction in likelihood of response.
Characteristics Associated with Nonresponse Among Men

Table 3.3
Characteristics Associated with Nonresponse Among Men

                                             Sample Size  Full Sample  Respondent  Adjusted      P-Value   P-Value from
Variable                                     (280,022)    Mean         Mean        Risk Ratio^a            Joint Test
Demographics
  Age in years as of August 1, 2014^b                     29.1         32.9        1.26          <0.0001
  Race/ethnicity                                                                                           <0.0001
    Non-Hispanic white (ref)                 183,534      65.5         68.9
    Non-Hispanic black                        41,019      14.6         13.4        0.97           0.0031
    Hispanic                                  32,627      11.7         10.1        1.03           0.0119
    Asian                                     10,289       3.7          4.2        1.18          <0.0001
    Other                                     12,553       4.5          3.5        0.96           0.0131
  Marital status                                                                                           <0.0001
    Married (ref)                            162,077      57.9         72.6
    Never married                            107,985      38.6         23.3        0.79          <0.0001
    Divorced/separated/other                   9,960       3.6          4.1        0.83          <0.0001
  Number of dependents                                     1.5          2.0        1.05          <0.0001
  Education level                                                                                          <0.0001
    High school or less (ref)                189,582      67.7         48.7
    Some college                              33,933      12.1         17.3        1.20          <0.0001
    Bachelor's degree                         34,346      12.3         18.3        1.29          <0.0001
    Graduate degree                           22,161       7.9         15.7        1.34          <0.0001
Military Career
  Service branch                                                                                           <0.0001
    Air Force (ref)                           63,865      22.8         34.1
    Army                                     108,411      38.7         37.4        0.67          <0.0001
    Navy                                      64,461      23.1         18.1        0.55          <0.0001
    Marine Corps                              43,185      15.4         10.4        0.57          <0.0001
  Pay grade                                                                                                <0.0001
    E1–E3 (ref)                               64,409      23.0          9.5
    E4                                        54,450      19.4         11.6        1.39          <0.0001
    E5–E6                                     83,845      29.9         31.6        2.43          <0.0001
    E7–E9                                     28,613      10.2         17.6        3.96          <0.0001
    W1–W5                                      4,401       1.6          2.7        4.43          <0.0001
    O1–O3                                     25,658       9.2         13.6        3.37          <0.0001
    O4–O6                                     18,646       6.7         13.4        4.47          <0.0001
  AFQT percentile (enlisted only)^b                       63.9         65.5        1.09          <0.0001
  Years of active military service^b                       7.8         11.0        1.16          <0.0001
  Deployment status                                                                                        <0.0001
    Never deployed (ref)                     112,101      40.0         28.2
    Deployed before 8/1/2013                 135,307      48.3         61.5        1.04          <0.0001
    Deployed after 8/1/2013                   32,614      11.6         10.3        0.87          <0.0001
  Months deployed since 9/11/2001^b                       14.6         16.3        1.06          <0.0001
  Months deployed since 7/1/2013^b                         3.0          3.1        1.10          <0.0001
  Separated/retired                           12,243       4.4          0.7        0.17          <0.0001
  DoD occupational area                                                                                    <0.0001
    Infantry, guncrews, and seamanship
      specialists                             44,050      15.7          8.6        0.71          <0.0001
    Electronic equipment repairers            22,553       8.1          7.5        1.10          <0.0001
    Communications and intelligence
      specialists                             24,374       8.7          7.6        0.99           0.4665
    Health care specialists                   14,523       5.2          5.5        1.33          <0.0001
    Other technical and allied specialists     7,218       2.6          3.0        1.16          <0.0001
    Functional support and administration     25,060       8.9         11.0        1.27          <0.0001
    Electrical/mechanical equipment
      repairers (ref)                         50,038      17.9         15.6
    Craftsworkers                              8,565       3.1          2.7        1.01           0.4596
    Service and supply handlers               26,135       9.3          7.4        0.92          <0.0001
    Nonoccupational                            8,801       3.1          1.3        0.89          <0.0001
    Tactical operations officers              19,990       7.1         11.6        4.08          <0.0001
    Intelligence officers                      3,117       1.1          1.9        4.16          <0.0001
    Engineering and maintenance officers       6,983       2.5          4.7        4.71          <0.0001
    Scientists and professionals               2,980       1.1          2.2        4.75          <0.0001
    Health care officers                       5,690       2.0          3.6        4.30          <0.0001
    Administrators                             2,839       1.0          2.0        4.93          <0.0001
    Supply, procurement and allied officers    4,012       1.4          2.5        4.46          <0.0001
    Other officers (20, 21, 29)                3,087       1.1          1.3        3.19          <0.0001
  Unit location
    Continental United States (ref)          230,820      82.4         80.5
    Outside the continental United States     49,202      17.6         19.5        1.10          <0.0001
Military Environment
  Percentage male in occupation group^b                   86.1         84.8        0.88          <0.0001
  Size^c of occupation group^b                        39,198.9     31,088.0        0.93          <0.0001
  Percentage male in unit^b                               86.4         84.6        0.90          <0.0001
  Size^c of unit^b                                       393.7        281.8        0.94          <0.0001
  Percentage male in installation (zip code)^b            85.2         84.1        0.94          <0.0001
  Size^c of installation (zip code)^b                 11,847.4      9,852.0        0.92          <0.0001
Fieldwork Indicators
  Change in assigned unit zip since 8/1/2013  73,552      26.3         21.8        1.05          <0.0001
  Change in assigned unit zip since 4/1/2014  54,879      19.6         15.6        0.79          <0.0001
  Change of mailing address since 4/1/2014    83,832      29.9         24.6        0.84          <0.0001
  No valid mailing address                     6,205       2.2          1.3        0.89          <0.0001
  Mailing 1 is postal nondeliverable          46,246      16.5          7.6        0.62          <0.0001
  No valid email address                      17,440       6.2          1.0        0.24          <0.0001
  Marine Corps sent email                      4,980       1.8          0.4        0.44          <0.0001
  Percentage of emails bounced^b                          10.8          1.2        0.83          <0.0001

NOTE: P-values from individual tests of significance are shown in the P-Value column; p-values for joint tests come from a chi-square score test and are shown in the last column. Variables marked "ref" are the reference categories.
^a The adjusted risk ratio comes from a model that includes race/ethnicity (indicated levels), service branch, and pay grade.
^b Indicates variables entered as continuous, for which the risk ratio corresponds to a one standard deviation change in the variable (standard deviations are listed in Appendix C).
^c Size measured by number of people.

Demographics. The joint test for race/ethnicity was significant. Blacks were 3 percent
less likely, while Asians were 18 percent more likely, to respond relative to whites.
Those who were “never married” or “divorced, separated, or other” were less likely to
respond (21 percent and 17 percent, respectively) compared with those “married.” An
increase in education level was associated with an increase in the probability of
response. Compared with those with a high school education or less, having an
education level of some college, a bachelor’s degree, or a graduate degree increased
the probability of response by 20 percent, 29 percent, and 34 percent, respectively. Also,
a one standard deviation (eight-year) increase in age was associated with a 26-percent
increase in response, while each additional dependent led to a 5-percent increase in
response rate.
Military career. Those in the Army, Marine Corps, and Navy had a reduction
in probability of response of 33 percent, 43 percent, and 45 percent, respectively, compared with those in the Air Force. A service member at a pay grade of E4 or E5–E6
had a 39 percent and 143 percent higher response rate, respectively, compared with
E1–E3 service members. Service members at other pay grades (E7–E9, W1–W5, O1–
O3, O4–O6) had response rates three to four times that of E1–E3 service members.
Those deployed after August 2013 were 13 percent less likely, while those deployed before August 2013 were 4 percent more likely, to respond compared with those
never deployed. For an additional three months of deployment in the past year, or 11
months of deployment since September 11, 2001, there were 10-percent and 6-percent
increases in response rates, respectively. We also saw an 83-percent decrease in response
rates among those who had separated or retired since the sample was drawn.
A one standard deviation (18-percent) increase in AFQT scores was associated
with a 9-percent increase in response rate, while an additional seven years of active
federal military service was associated with a 16-percent increase in response rate.
Being outside the continental United States was associated with a 10-percent increase
in response rate, compared with those in the continental United States. As was true
for women, occupational group was significantly associated with response, with most
occupational categories demonstrating response rates that were significantly higher
than the reference group (electrical/mechanical equipment repairers). Indeed, several
of the occupation groups had response rates that were three to five times greater than
that of the reference group.
Military environment. We explored two types of environmental factors. One set
looked at the percentage of males in the military environment. An additional 15 percentage points in the percentage of males in one’s occupation, an additional 11.2 percentage points of males in one’s unit, and an additional 6.4 percentage points of males
in one’s installation were associated with 12-percent, 10-percent and 6-percent reductions in response rates, respectively. We also explored the impact of size (measured by
number of service members) of one’s unit, installation, or occupation as another set
of environmental factors. An additional 32,000 persons in the same occupation, an
additional 500 persons in the unit, or an additional 10,000 persons in the installation
were associated with 7-percent, 6-percent and 8-percent reductions in response rates,
respectively.
Survey fieldwork indicators. A change in station or mailing address after sampling (both indicators of a move just prior to survey fielding) resulted in 21-percent
and 16-percent decreases in response rates, respectively. However, a change in station
in the past year was associated with a small (5-percent) increase in response rate. Also,
the lack of a valid mailing address or an incorrect mailing address in DMDC records (returned as postal nondeliverable) resulted in 11-percent and 38-percent decreases in
response rates, respectively. Service members without a valid email address were 76 percent less likely to respond. Service members to whom the
Marine Corps sent an email were 56 percent less likely to respond. Also, with each
additional 10 percent of survey notification emails not delivered (another indicator of
a bad or non-working email address), we saw a 17-percent reduction in likelihood of
response.
Implications for Nonresponse Bias Adjustment

Many of the variables that we had hypothesized as potential correlates of response were
confirmed to have significant associations with response status. In some cases, covariates were strong predictors of nonresponse, such as the more than fourfold differences
in response rates among those with and without valid email addresses, among members of different occupational groups, and among men at different pay grades. Notably,
many of these large associations remained even after controlling for gender, branch of
service, pay grade, and race/ethnicity, the covariates traditionally used as the industry standard for nonresponse weighting in military surveys. If the additional variables that we
have identified as strong predictors of nonresponse are also correlated with our survey
outcomes, their omission from nonresponse weighting presents a risk for nonresponse
bias in survey estimates. In the following section, we examine the association of these
same characteristics with the primary RMWS outcome measures.
Association of Participant Characteristics with Survey Outcomes
In this section, we describe the association of the demographic and other factors
with three primary outcomes from the RMWS survey: (1) any sexual assault, (2) any
sexual harassment, and (3) any gender discrimination. We focus on these three because they are the study's primary outcomes. In a few analyses, we subdivide sexual assault into three subtypes and sexual harassment into two subtypes, but those subtypes generally had similar predictors. The sample is restricted to survey respondents
with non-missing responses for these three survey outcomes (67,187 women and 78,113
men). Here, again, we report adjusted risk ratios computed for males and females separately using models that control for the same key covariates: service branch, pay grade,
and race/ethnicity. Findings for women and men are presented in Tables 3.4 and 3.5,
respectively. In the sections that follow, we discuss only those risk ratios that are significant at the p < 0.01 level.
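Throughout the narrative below, risk ratios are translated into percentage differences in risk using the usual conversion; the two example values appear in Table 3.4:

    \[
    \%\Delta\,\text{risk} \;=\; 100\,(\mathrm{RR} - 1):
    \qquad \mathrm{RR} = 1.58 \;\Rightarrow\; \text{58 percent higher risk},
    \qquad \mathrm{RR} = 0.62 \;\Rightarrow\; \text{38 percent lower risk}.
    \]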
Characteristics Associated with Primary Outcomes Among Women

Table 3.4
Association of Participant Characteristics with Survey Outcomes Among Women

                                                          Sexual Assault        Sexual Harassment     Gender Discrimination
                                             Sample Size  Adjusted              Adjusted              Adjusted
Variable                                     (67,187)     RR^a      P-Value     RR^a      P-Value     RR^a      P-Value
Demographics
  Age in years (as of August 1, 2014)^b       67,187      0.62      <0.0001     0.83      <0.0001     1.07       0.0040
  Race/ethnicity (joint test)                                       <0.0001               <0.0001               <0.0001
    Non-Hispanic white (ref)                  35,938
    Non-Hispanic black                        16,941      0.70      <0.0001     0.71      <0.0001     0.69      <0.0001
    Hispanic                                   7,345      0.69      <0.0001     0.95       0.2089     0.90       0.0468
    Asian                                      3,210      0.61      <0.0001     0.66      <0.0001     0.64      <0.0001
    Other                                      3,753      0.88       0.1684     0.99       0.8094     0.90       0.1713
  Marital status (joint test)                                       <0.0001               <0.0001               <0.0001
    Married (ref)                             34,817
    Never married                             25,049      1.89      <0.0001     1.20      <0.0001     0.93       0.0501
    Divorced/separated/other                   7,321      2.17      <0.0001     1.40      <0.0001     1.26      <0.0001
  Number of dependents                        67,187      0.86      <0.0001     0.95      <0.0001     1.03       0.0209
  Education (joint test)                                            <0.0001                0.0016                0.3718
    High school or less (ref)                 29,362
    Some college                              12,572      0.77       0.0003     0.94       0.0815     1.02       0.6298
    Bachelor's degree                         13,216      0.68      <0.0001     0.94       0.1349     0.96       0.4323
    Graduate degree                           10,220      0.41      <0.0001     0.79       0.0003     1.06       0.4792
Military Career
  Service (joint test)                                              <0.0001               <0.0001               <0.0001
    Air Force (ref)                           26,940
    Army                                      23,010      1.58      <0.0001     1.89      <0.0001     2.28      <0.0001
    Navy                                      13,630      1.72      <0.0001     1.93      <0.0001     2.07      <0.0001
    Marine Corps                               3,607      2.17      <0.0001     2.04      <0.0001     2.51      <0.0001
  Pay grade (joint test)                                            <0.0001               <0.0001               <0.0001
    E1–E3 (ref)                               10,004
    E4                                        10,849      0.82       0.0012     1.22      <0.0001     1.50      <0.0001
    E5–E6                                     19,930      0.49      <0.0001     0.96       0.2640     1.50      <0.0001
    E7–E9                                      7,956      0.25      <0.0001     0.63      <0.0001     1.30       0.0002
    W1–W5                                        880      0.38      <0.0001     0.54      <0.0001     1.25       0.1147
    O1–O3                                     11,040      0.47      <0.0001     0.87       0.0013     1.29      <0.0001
    O4–O6                                      6,538      0.16      <0.0001     0.44      <0.0001     1.42      <0.0001
  AFQT percentile (enlisted only)^b           48,031      1.29      <0.0001     1.19      <0.0001     1.14      <0.0001
  Years of active military service^b          67,134      0.69      <0.0001     0.82      <0.0001     1.05       0.1045
  Deployment status (joint test)                                    <0.0001               <0.0001               <0.0001
    Never deployed (ref)                      29,742
    Deployed before 8/1/2013                  32,432      0.93       0.1937     1.01       0.7564     1.12       0.0007
    Deployed after 8/1/2013                    5,013      1.38      <0.0001     1.28      <0.0001     1.28      <0.0001
  Months deployed since 9/11/2001^b           37,445      0.90       0.0304     0.97       0.1846     1.01       0.6364
  Months deployed since 7/1/2013^b             5,013      0.97       0.7745     0.95       0.2768     0.90       0.1137
  Separated/retired                              421      1.94       0.0018     1.36       0.0228     1.73       0.0003
  DoD occupational area (joint test)                                <0.0001               <0.0001               <0.0001
    Infantry, guncrews, and seamanship
      specialists                              2,265      0.97       0.8231     0.71      <0.0001     0.64       0.0001
    Electronic equipment repairers             3,343      0.97       0.7599     0.99       0.9009     0.82       0.0144
    Communications and intelligence
      specialists                              5,375      0.89       0.2398     0.93       0.1591     0.76      <0.0001
    Health care specialists                    9,382      0.76       0.0020     0.71      <0.0001     0.56      <0.0001
    Other technical and allied specialists     1,659      0.90       0.4871     0.79       0.0035     0.66       0.0002
    Functional support and administration     14,683      0.73       0.0002     0.68      <0.0001     0.52      <0.0001
    Electrical/mechanical equipment
      repairers (ref)                          4,855
    Craftsworkers                                913      0.80       0.2449     0.93       0.4310     1.11       0.3792
    Service and supply handlers                5,232      0.87       0.1471     0.88       0.0172     0.70      <0.0001
    Nonoccupational                            1,022      0.49       0.0005     0.41      <0.0001     0.27      <0.0001
    Tactical operations officers               2,165      0.19      <0.0001     0.49      <0.0001     1.37       0.0030
    Intelligence officers                      1,330      0.23      <0.0001     0.46      <0.0001     1.17       0.2031
    Engineering and maintenance officers       1,714      0.18      <0.0001     0.45      <0.0001     1.25       0.0559
    Scientists and professionals               1,327      0.17      <0.0001     0.32      <0.0001     0.91       0.4717
    Health care officers                       7,400      0.08      <0.0001     0.26      <0.0001     0.71       0.0003
    Administrators                             2,206      0.18      <0.0001     0.40      <0.0001     0.96       0.7491
    Supply, procurement and allied officers    1,817      0.14      <0.0001     0.37      <0.0001     0.99       0.9063
    Other officers                               499      0.21      <0.0001     0.52      <0.0001     1.19       0.3425
  Unit location
    Continental United States (ref)           54,555
    Outside the continental United States     12,585      1.21       0.0003     1.04       0.2690     1.03       0.4507
Military Environment
  Percentage male in occupation group^b       67,187      1.21      <0.0001     1.21      <0.0001     1.35      <0.0001
  Size^c of occupation group^b                67,187      0.99       0.5846     1.01       0.6245     1.03       0.1598
  Percentage male in unit^b                   66,973      1.15      <0.0001     1.16      <0.0001     1.23      <0.0001
  Size^c of unit^b                            67,187      1.02       0.2735     1.05      <0.0001     1.02       0.2248
  Percentage male in installation (zip code)^b 67,125     1.12      <0.0001     1.09      <0.0001     1.14      <0.0001
  Size^c of installation (zip code)^b         67,140      0.97       0.2812     1.02       0.2269     1.03       0.0458
Fieldwork Indicators
  Change in assigned unit zip since 8/1/2013  16,499      1.06       0.2548     0.97       0.3738     0.92       0.0486
  Change in assigned unit zip since 4/1/2014   9,806      1.07       0.2990     0.90       0.0059     0.96       0.3598
  Change of mailing address since 4/1/2014    18,239      1.13       0.0131     1.01       0.6796     1.03       0.4648
  No valid mailing address                       927      0.92       0.6370     0.70       0.0091     0.51       0.0040
  No valid email address                         549      1.44       0.0515     1.41       0.0007     1.07       0.6621
  Mailing 1 postal nondeliverable              5,992      1.22       0.0027     0.98       0.6981     0.94       0.3273
  Marines sent email                             125      0.91       0.7754     0.98       0.9433     1.17       0.6015
  Percentage of emails bounced^b              67,187      1.03       0.0721     1.05      <0.0001     1.02       0.1019

NOTE: P-values from individual tests of significance are shown in the P-Value columns; p-values for joint tests come from a chi-square score test and are shown on the rows marked "(joint test)." Variables marked "ref" are the reference categories.
^a The adjusted risk ratio comes from a model that includes race/ethnicity (indicated levels), service branch, and pay grade.
^b Indicates variables entered as continuous, for which the risk ratio corresponds to a one standard deviation change in the variable (standard deviations are listed in Appendix C).
^c Size measured by number of people.

Demographics. Race/ethnicity and marital status were significant predictors of the
three outcomes. Whites had the highest risk for sexual assault, sexual harassment, or
gender discrimination. Blacks and Asians had a lower risk of sexual assault or harassment, while Hispanics had a (31 percent) lower risk of sexual assault only, compared
with whites. Those in the “never married” category had an elevated risk for sexual
assault (89 percent) or sexual harassment (20 percent), while those in the “divorced,
separated, or other” category had an increased risk across all three outcomes (26–
117  percent), compared with those “married.” A one standard deviation (eight-year)
increase in age was associated with a reduction in risk for both sexual assault (38 percent) and sexual harassment (17 percent), and a small increase (7 percent) in gender discrimination. Having an additional dependent was associated with a decrease in risk for
sexual assault (14 percent) and sexual harassment (5 percent). An increase in education
level was associated with a decrease in risk for sexual assault, across all levels, relative to
those at an education level of “high school or less.” Having a graduate degree resulted
in a significant decrease in risk of sexual harassment only (21 percent); education was
not significantly associated with gender discrimination.
Military career. Service, pay grade, and occupational area were significantly
associated with risk for one or more of the survey outcomes. Being a woman in the
Army, Navy, and Marine Corps was associated with an increase in risk for sexual
assault (58–117 percent), sexual harassment (89–104 percent), and gender discrimination (107–151 percent), compared with women in the Air Force. The relationship of the
survey outcomes with pay grade was varied. While service members at the E4 level had
an 18-percent lower risk of sexual assault, they had a 22-percent higher risk of sexual
harassment and a 50-percent higher risk of discrimination, compared with women at
the level of E1–E3. Service women at the E5–E6 level had a 51-percent lower risk of
sexual assault, but similar risk (about 1.0) of sexual harassment, compared with E1–
E3 women. Service women at other pay grade levels (E7–E9, W1–W5, O1–O3, and
O4–O6) had a substantially lower risk of sexual assault and sexual harassment, and
an increased risk of discrimination, compared with women at the E1–E3 level. Those
deployed after August 2013 were (28–38  percent) more likely to experience sexual
assault, sexual harassment, or discrimination, relative to those never deployed. Also,
those deployed before August 2013 were 12 percent more likely to experience gender
discrimination compared with those never deployed.
Among enlisted service women, a one standard deviation (18-percent) increase
in AFQT scores was associated with increased risk across all outcomes: sexual assault
(29 percent), sexual harassment (19 percent), and gender discrimination (14 percent).
An additional seven years of active federal military service was associated with 31-percent and 18-percent decreases in risk of sexual assault and harassment, respectively.
If the sampled woman had separated or retired since the sample was drawn, we saw a
94-percent increase in risk for sexual assault and a 73-percent increase in gender discrimination. Being outside the continental United States was associated with a 21-percent increase in sexual assault relative to being in the continental United States.

Among occupational groups, the reference group (electrical/mechanical equipment repairers) experienced the highest rates of sexual assault and sexual harassment,
but not gender discrimination. Whereas occupational group was a significant predictor of sexual assault and sexual harassment, in many cases the differentiation appears
to be one between occupations held by officers (who are exposed to lower risks) and
those held by others. Exceptions to this rule include enlisted women in health care
and administrative roles, who are exposed to lower rates of sexual assault and harassment, and women who are infantry, guncrew, or seamanship specialists, who experience lower rates of sexual harassment, compared with the reference group.
Military environment. The percentage of the work environment composed of
men was significantly associated with risk of sexual assault, sexual harassment, and
gender discrimination among women. An increase of one standard deviation (15 percentage points) in the percentage of males in the occupation group was associated
with an increased risk of 21  percent for both sexual assault and sexual harassment,
and 35  percent for gender discrimination. An increase of one standard deviation
(11.2 percentage points) in the percentage of men in the unit was also associated with
an increased risk for all three outcomes: sexual assault (15 percent), sexual harassment
(16 percent), and gender discrimination (23 percent). Also, an increase in the size of
the unit was associated with an increase in risk of sexual harassment among women.
Finally, an increase of one standard deviation (6.4 percentage points) in the percentage men in one’s installation was associated with an increased risk for sexual assault
(12 percent), sexual harassment (9 percent), and gender discrimination (14 percent).
Survey fieldwork indicators. The associations between fieldwork indicators and
the three survey outcomes were not consistent. The lack of a valid mailing address in DMDC records was associated with large reductions in women's risk of sexual harassment (30 percent) and gender discrimination (49 percent), while the lack of a valid email address in DMDC records was associated with a 41-percent increase in the risk of sexual harassment. Service women whose first postal mailing was returned
as nondeliverable were at 22-percent higher risk of sexual assault. Similarly, an additional 10 percent of the survey notification emails not delivered was associated with a
5-percent increase in risk of harassment.
Characteristics Associated with Primary Outcomes Among Men

Demographics. We found fewer significant associations between demographic characteristics and survey outcomes among men compared with women. Age, education
level, and number of dependents were not significantly associated with the three survey
outcomes. Blacks had a 27-percent reduction in risk for sexual harassment compared
with whites. Those “never married” had an elevated risk of sexual assault (50 percent)
and sexual harassment (24 percent), while those in the “divorced, separated, or other”
category had an increased risk (39 percent) of sexual harassment, compared with those
who are “married.”

Table 3.5
Association of Participant Characteristics with Survey Outcomes for Men

                                                          Sexual Assault        Sexual Harassment     Gender Discrimination
                                             Sample Size  Adjusted              Adjusted              Adjusted
Variable                                     (78,113)     RR^a      P-Value     RR^a      P-Value     RR^a      P-Value
Demographics
  Age in years (as of August 1, 2014)^b       78,113      0.88       0.1922     0.92       0.0459     1.11       0.2618
  Race/ethnicity (joint test)                                        0.4839                0.0002                0.1204
    Non-Hispanic white (ref)                  53,812
    Non-Hispanic black                        10,442      1.25       0.1566     0.73       0.0002     0.81       0.1164
    Hispanic                                   7,886      0.84       0.3643     1.10       0.2411     0.82       0.1812
    Asian                                      3,278      0.84       0.5498     0.96       0.7473     0.82       0.3616
    Other                                      2,695      1.02       0.9335     0.87       0.3225     0.76       0.2745
  Marital status (joint test)                                        0.0186                0.0004                0.3832
    Married (ref)                             56,719
    Never married                             18,198      1.50       0.0030     1.24       0.0005     1.03       0.7633
    Divorced/separated/other                   3,196      1.21       0.5371     1.39       0.0078     1.37       0.1172
  Number of dependents                        78,113      0.95       0.2652     0.96       0.0211     1.00       0.9043
  Education (joint test)                                             0.1442                0.1980                0.4940
    High school or less (ref)                 37,207
    Some college                              13,205      0.71       0.0746     1.01       0.9190     1.08       0.5464
    Bachelor's degree                         14,032      1.07       0.7710     1.15       0.1409     1.10       0.5699
    Graduate degree                           12,028      0.76       0.4389     1.36       0.0273     1.39       0.1310
Military Career
  Service (joint test)                                              <0.0001               <0.0001               <0.0001
    Air Force (ref)                           26,610
    Army                                      29,226      2.67      <0.0001     2.08      <0.0001     2.41      <0.0001
    Navy                                      14,157      3.61      <0.0001     2.19      <0.0001     2.52      <0.0001
    Marine Corps                               8,120      2.43      <0.0001     1.43       0.0002     1.35       0.0963
  Pay grade (joint test)                                            <0.0001               <0.0001               <0.0001
    E1–E3 (ref)                                7,383
    E4                                         9,066      1.37       0.1225     1.52      <0.0001     2.04       0.0003
    E5–E6                                     24,694      0.76       0.1546     0.77       0.0034     1.34       0.1218
    E7–E9                                     13,728      0.34      <0.0001     0.36      <0.0001     0.77       0.2285
    W1–W5                                      2,148      0.26       0.0118     0.31      <0.0001     0.90       0.7542
    O1–O3                                     10,606      0.44       0.0015     0.66      <0.0001     1.04       0.8658
    O4–O6                                     10,488      0.33      <0.0001     0.31      <0.0001     1.05       0.8216
  AFQT percentile (enlisted only)^b           54,160      1.22       0.0030     1.22      <0.0001     1.05       0.3482
  Years of active military service^b          78,080      0.80       0.0396     0.80      <0.0001     0.97       0.6537
  Deployment status (joint test)                                     0.1733                0.0648                0.611
    Never deployed (ref)                      22,039
    Deployed before 8/1/2013                  48,027      0.76       0.0786     0.92       0.2473     1.02       0.8891
    Deployed after 8/1/2013                    8,047      0.99       0.9405     1.13       0.1745     0.88       0.4632
  Months deployed since 9/11/2001^b           56,074      0.87       0.0864     0.85      <0.0001     0.81       0.0007
  Months deployed since 7/1/2013^b             8,047      0.95       0.8139     1.04       0.6828     1.03       0.8434
  Separated/retired                              584      4.48      <0.0001     1.94       0.0027     2.53       0.0031
  DoD occupational area (joint test)                                 0.1441               <0.0001               <0.0001
    Infantry, guncrews, and seamanship
      specialists                              6,745      0.97       0.8985     0.85       0.1425     0.64       0.0707
    Electronic equipment repairers             5,833      0.87       0.5668     1.31       0.0058     1.61       0.0128
    Communications and intelligence
      specialists                              5,969      0.72       0.1986     1.02       0.8290     1.49       0.0417
    Health care specialists                    4,320      0.83       0.4555     0.91       0.3918     2.06       0.0002
    Other technical and allied specialists     2,323      0.75       0.4458     1.18       0.2463     1.65       0.0516
    Functional support and administration      8,614      0.98       0.9334     0.95       0.6072     1.66       0.0045
    Electrical/mechanical equipment
      repairers (ref)                         12,219
    Craftsworkers                              2,094      0.74       0.4494     1.16       0.3308     0.81       0.5700
    Service and supply handlers                5,753      1.03       0.9150     1.07       0.5462     1.73       0.0047
    Nonoccupational                            1,001      0.79       0.6970     0.73       0.2805     0.74       0.6797
    Tactical operations officers               9,074      0.37       0.0042     0.30      <0.0001     0.94       0.8302
    Intelligence officers                      1,450      0.37       0.0946     0.32      <0.0001     1.26       0.5639
    Engineering and maintenance officers       3,654      0.29       0.0054     0.28      <0.0001     1.23       0.5339
    Scientists and professionals               1,688      0.17       0.0159     0.28      <0.0001     2.36       0.0095
    Health care officers                       2,823      0.18       0.0022     0.35      <0.0001     2.13       0.0131
    Administrators                             1,575      0.24       0.0295     0.31      <0.0001     2.17       0.0214
    Supply, procurement and allied officers    1,991      0.39       0.0397     0.19      <0.0001     1.24       0.5726
    Other officers                               987      0.13       0.0521     0.52       0.0084     1.65       0.2770
  Unit location
    Continental United States (ref)           62,835
    Outside the continental United States     15,201      1.33       0.0368     1.14       0.0332     1.20       0.0841
Military Environment
  Percentage male in occupation group^b       78,113      1.08       0.3537     0.98       0.5654     0.70      <0.0001
  Size^c of occupation group^b                78,113      1.03       0.6276     0.93       0.0064     0.94       0.1728
  Percentage male in unit^b                   77,741      0.99       0.8294     1.02       0.5879     0.68      <0.0001
  Size^c of unit^b                            78,113      1.08       0.1069     1.04       0.1001     1.06       0.1838
  Percentage male in installation (zip code)^b 78,008     0.94       0.3062     1.06       0.0706     0.86       0.0006
  Size^c of installation (zip code)^b         78,036      0.90       0.0678     1.02       0.4763     0.97       0.4553
Fieldwork Indicators
  Change in assigned unit zip since 8/1/2013  17,030      0.95       0.7275     1.04       0.4975     1.05       0.6418
  Change in assigned unit zip since 4/1/2014  12,172      1.44       0.0164     0.96       0.5637     1.04       0.7698
  Change of mailing address since 4/1/2014    19,207      1.11       0.4164     1.06       0.2740     1.06       0.5567
  No valid postal address                      1,018      0.88       0.8295     0.81       0.4282     0.38       0.1766
  No valid email address                         743      2.98       0.0010     1.41       0.1037     1.79       0.1061
  Mailing 1 is postal nondeliverable           5,901      0.87       0.4965     0.82       0.0293     1.15       0.3565
  Marines sent email                             285      4.13       0.0011     1.06       0.8707     0.71       0.7346
  Percentage of emails that bounced^b         78,113      1.09       0.0039     1.04       0.0216     1.03       0.4327

NOTE: P-values from individual tests of significance are shown in the P-Value columns; p-values for joint tests come from a chi-square score test and are shown on the rows marked "(joint test)." Variables marked "ref" are the reference categories.
^a The adjusted risk ratio comes from a model that includes race/ethnicity (indicated levels), service branch, and pay grade.
^b Indicates variables entered as continuous, for which the risk ratio corresponds to a one standard deviation change in the variable (standard deviations are listed in Appendix C).
^c Size measured by number of people.

Military career. Service, pay grade, and occupational area were significantly associated with the risk of one or more of men's survey outcomes. Being a man in the Army, Navy, or Marine Corps was associated with an increased risk of sexual assault (143–261 percent), sexual harassment (43–119 percent), and discrimination (35–152 percent), compared with men in the Air Force. Service members at pay grade E4 had a 52-percent higher risk of sexual harassment, and twice the risk of discrimination, compared with service members at the E1–E3 level. Service members at all other pay grades (E5–E6, E7–E9, W1–W5, O1–O3, O4–O6) had a reduced risk of sexual harassment compared with men at the ranks of E1–E3, while those at pay grades E7–E9, W1–W5, O1–O3, and O4–O6 had a substantially lower risk of sexual assault than E1–E3 service men.
Among enlisted service men, an 18-percentile-point increase in AFQT scores was associated with a 22-percent increase in risk of both sexual assault and sexual harassment. An additional seven years of active federal military service was associated with a 20-percent decrease in risk for both sexual assault and harassment. An additional 11 months of deployment since September 11, 2001, reduced the risk for sexual harassment and gender discrimination by 15 percent and 19 percent, respectively. If the sampled person had separated or retired since the sample was drawn, the risk for sexual assault, sexual harassment, and gender discrimination was 4.5 times, two times, and 2.5 times, respectively, that of those who had not separated or retired.
Occupational area was not significantly associated with the risk of sexual assault for men, but it was associated with sexual harassment and gender discrimination. For sexual harassment, the primary distinction was between occupations held by officers, in which men were exposed to a lower risk of past-year sexual harassment than the reference group, and occupations held by others, in which men experienced higher rates of sexual harassment. Men's gender discrimination experiences appear to be associated with such occupations as health care specialists, functional support and administration, and scientists and professionals.
Military environment. Although an increase of 32,000 people in the size of an occupational group is associated with a small (7-percent) reduction in the risk of sexual harassment, for the most part military environment variables are not associated with men's risk of sexual assault or sexual harassment. In contrast, men's experience of gender discrimination is strongly associated with the percentage of men in their occupational group, unit, and installation. As the percentage of men in an occupational group increases, men's risk of past-year gender discrimination declines: an additional 15 percentage points of males in one's occupational group corresponds to a 30-percent reduction in risk of discrimination. Similarly, an additional 11.2 percentage points of males in the unit or an additional 6.4 percentage points of males in the installation was associated with a reduction of 32 percent and 14 percent, respectively, in the risk of gender discrimination.
Survey fieldwork indicators. A change in assigned unit since the sample was drawn was associated with a 44-percent increase in the risk of sexual assault. Those without a valid email address in DMDC records were at three times the risk of sexual assault of those with one. Service members to whom the Marine Corps sent an email (an indicator that their email address was missing in DMDC records, but that the Marine Corps was able to send them a survey invitation on RAND's behalf) had more than four times the risk of sexual assault of those who did not have an email sent. Also, those with an additional 10 percent of emails returned (or bounced) had a 9-percent higher risk of sexual assault.
Observations on Association of Predictors with Primary RMWS Outcomes

We found fewer factors significantly associated with sexual assault and sexual harassment of men than of women (likely due to the smaller sample sizes for men), but otherwise the associations were remarkably consistent in direction and magnitude with those found for women. For sexual assault, all of the factors associated with assaults against men (p < 0.01) were similarly related to assaults against women, with the exception of three fieldwork factors that were predictive of risk for men but nonsignificant predictors for women. For sexual harassment, all but three factors associated with harassment of men were also associated with harassment of women, and the three exceptions (being of rank E5–E6, having an electronic equipment repair job, and the number of people in the member's occupational group) were all nonsignificant predictors for women.
The same is not true for gender discrimination, however, where only branch of
service, pay grade, and recent retirement/separation are significant predictors of discrimination for both men and women. Instead, we see strikingly divergent effects of
some occupational areas and military environment factors on gender discrimination
for men and women. Whereas for women, the occupations of health care specialist,
functional support and administration, and service and supply handlers are all significant predictors of lower risk of gender discrimination, men with those occupations are
at higher risk of past-year discrimination. Similarly, whereas a higher percentage of
men in a service member’s occupational area, unit, or installation is associated with an
increase in the risk of gender discrimination for women, it decreases that risk for men.
Several of the associations are surprising and bear further investigation. One is the
strong relationship between recent separations from the military and past-year sexual
assault. Specifically, women who recently separated are almost twice as likely as those
who remain in the active component to have been sexually assaulted in the past year.
Men who recently separated are more than four times as likely to have been sexually
assaulted compared with those remaining in the military. Similar but smaller effects
are also observed for gender discrimination. The mechanism underlying these effects
is not yet clear, but one potential hypothesis is that exposure to sexual assault, sexual
harassment, or gender discrimination causes people to leave the military. (Note that a
hypothesis of causation is impossible to test with these cross-sectional data.)
Another surprising finding is that higher AFQT scores are associated with increased risk of both sexual assault and sexual harassment for men and women. The AFQT score measures reading comprehension, vocabulary, and math and reasoning skills, but the observed association with sexual assault and harassment runs in the opposite direction from that of factors, such as education level, that are related to AFQT scores. AFQT scores are a key determinant of the occupations for which enlisted members might qualify. Therefore, it is plausible that jobs requiring higher AFQT scores are also associated with elevated risk of sexual assault or harassment. Here again, additional investigation is required to better understand the relationship between AFQT scores and our outcomes.
Finally, we cannot explain the mechanism that would lead to our finding that men without a valid email address have nearly three times the risk of past-year sexual assault of those with a valid email address. While it is true that some E1–E3 service men may not yet have been assigned a military email address, and that junior enlisted personnel have higher rates of sexual assault, the association we detected included controls for pay grade. To understand this finding, it would be necessary to learn more about how email addresses are assigned to service members, when they are eliminated or missing, and the circumstances that lead to their omission from DMDC records.
Characteristics That Could Lead to Nonresponse Bias
Many of the characteristics considered in the last section were associated with both
survey nonresponse and primary survey outcomes in (adjusted) models that already
included members’ pay grades, branch of service, gender, and race/ethnicity. As such,
the WGRA nonresponse weights that relied just on pay grade, branch of service,
gender, and race/ethnicity would present a considerable risk of failing to correct for
important sources of nonresponse bias. Whether such bias actually occurred, however,
would depend on whether these factors work together to increase bias or whether they
work at odds, canceling each other’s effects.
We can examine this by looking at the direction of all those effects we found to be significantly associated with both survey response and our primary outcomes.
Figure 3.1 presents all such adjusted risk ratios significantly associated with survey
response and past-year sexual assault at p < 0.01 (as indicated in Tables 3.4 and 3.5),
controlling for gender, service, pay grade, and race/ethnicity, in order to show how
much of the variance in response and sexual assault not explained by the conventional
weighting approach can be explained with these additional variables. Because the
effects are ratios, we use axes that are logarithmically scaled so that ratios above and
below the value of 1 scale symmetrically.
Figure 3.1
Adjusted Risk Ratios for Factors Significantly Associated with Survey Response and Sexual Assault

[Scatterplot of response risk ratios (horizontal axis) against sexual assault risk ratios (vertical axis), plotted separately for women and men; both axes are logarithmic.]

NOTE: Risk ratios are adjusted for service, pay grade, and race/ethnicity. Axes use logarithmic scales. Ratios significant at the p < 0.01 level are included.

What Figure 3.1 reveals is a strong log-linear correspondence between adjusted risk ratios for most of the factors. In particular, most points in this plot for men and women fall in the upper-left or lower-right quadrants. That is, the effects not accounted for by the standard weighting covariates are associated with higher risk of sexual assault
among those least likely to participate in the survey, or lower risk of sexual assault
among those most likely to participate in the survey. In both cases, and for men and
women, the effects of these factors would be to contribute to the underestimation of
sexual assault.
This analysis provides compelling evidence of the importance of adjusting for as
many of these factors as possible in the construction of sample weights for purposes of
the RMWS survey, in order to eliminate the threat of underestimation bias they present. Further, given that large sample sizes reduce variance but not bias, even small bias
reductions are a good trade-off in surveys, such as ours, with very large sample sizes
and great precision (Elliott and Haviland, 2007). Adjusted risk ratios for other primary
outcomes (sexual harassment and gender discrimination) exhibit the same pattern, and
lead to similar conclusions.
The Development and Performance of RMWS Weights
When producing results from the prior form, we used the same weighting approach that was used in 2012 (called WGRA weights). When presenting results for the new RAND forms, we used a weighting approach designed to make the analytic sample representative of the population of active-component service members on a broader range of factors (called RMWS weights).1 In this section, we summarize the development of these two weights and then compare them on several performance measures.
The WGRA and RMWS weights for the 2014 data are each a product of three
component weights: (1) design weights, to account for disproportionate sampling of
women and men; (2) nonresponse weights, to make the weighted participant sample
comparable to the population on a set of characteristics known in the population; and
(3) poststratification weights, to make proportions of weighted respondents identical
to those in the population for key reporting categories. Where the two sets of weights
differed was in the characteristics used to construct the nonresponse weights. In constructing the WGRA weights, we used a logistic regression model with predictors similar to those used previously by DMDC for past WGRA surveys.2 For the RMWS
weights, we sought to include all of the additional factors described in the prior section
(listed in Tables 3.2 through 3.5).
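As an orientation to how these three components combine, the following is a minimal sketch in R for a single sample member. The function and all inputs are hypothetical, not drawn from the study's code:

  # Illustrative combination of the three weight components described above.
  combine_weights <- function(p_select, p_respond, pop_share, wtd_share) {
    w_design    <- 1 / p_select           # design weight: inverse selection probability
    w_nonresp   <- 1 / p_respond          # nonresponse weight: inverse response propensity
    w_poststrat <- pop_share / wtd_share  # ratio adjustment to population proportions
    w_design * w_nonresp * w_poststrat
  }

  # Example: an oversampled woman (selection probability 0.5) with a modeled
  # response propensity of 0.25, in a reporting cell holding 30 percent of the
  # population but 28 percent of the weighted respondents:
  combine_weights(0.5, 0.25, 0.30, 0.28)  # approximately 8.57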
Weighting on variables that are not associated with any survey outcomes cannot
remove any nonresponse bias in those outcomes, but can increase the variance of the
weights, with resulting reductions in the precision of estimates (Little and Vartivarian,
2005). Thus, the ideal weights are based on a nonresponse model that includes only
those factors that are associated with the key outcomes and nonresponse. Such a model
would remove the maximum amount of nonresponse bias, while limiting the variance
in the weights to just the amount needed to eliminate nonresponse bias. Thus, the best
weighting approach is one that has been optimized for the specific outcomes that the
study is designed to measure.
To construct outcome-optimized RMWS weights, we first identified those factors
statistically associated with one or more of six primary outcomes for our study (three
types of sexual assault and three types of sex-based MEO violations). Specifically, a
separate regression model was estimated for each of the six primary outcomes among
survey respondents using the full range of administrative and survey paradata as predictors (see Table 3.1). The six resulting models are used to estimate service members’
risk of each outcome in the full sample (both respondents and nonrespondents) as a
weighted combination of the available predictors. Second, we estimated a model predicting survey nonresponse from gender, service branch, pay grade, and the predicted
risk values from the six outcome models estimated in the first step. In this way, the
large number of factors considered in the first step enter into the nonresponse model
only to the extent that they are predictive of one or more of the primary outcomes.

1 See Chapter Five of Volume 1 for a detailed discussion of the two sample weighting methods (Morral, Gore, and Schell, 2014).
2 WGRA nonresponse/poststratification weights fully balanced respondents to the sample frame on the factors of gender, pay grade, service branch, and minority status. They partially balanced on deployment status, combat occupations, and marital status.

The regression models that predict each of the six primary outcomes were run separately for men and women because the relationships between risk factors and outcomes were hypothesized to differ across gender. These models were estimated using a machine-learning algorithm, Generalized Boosted Models (GBM; Ridgeway, 2012), to best capture the relationship between the predictors and the outcomes. GBM is a general, automated, data-adaptive modeling algorithm that can estimate the relationship between a variable of interest and a large number of covariates of mixed type, while allowing for flexible nonlinear relationships between the covariates and the response (Friedman, 2001; Ridgeway, 1999). These routines were run in the R software package. During the GBM estimation, the complexity of each model was optimized using tenfold cross-validation; that is, parameters were added to the model until the model maximized out-of-training-sample prediction. This procedure prevents overfitting the data.
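The text indicates these models were fit with GBM in R, presumably via the gbm package associated with the Ridgeway (2012) citation. The sketch below illustrates the first-stage estimation under stated assumptions: the function name, tuning constants, and data set names (fit_outcome_model, respondents_women, full_sample) are ours, not the study's code.

  library(gbm)

  # Fit one boosted outcome model among respondents (run separately by
  # gender); tenfold cross-validation selects the model complexity.
  fit_outcome_model <- function(outcome, predictors, data) {
    f <- reformulate(predictors, response = outcome)
    m <- gbm(f, data = data, distribution = "bernoulli",
             n.trees = 5000, interaction.depth = 3, shrinkage = 0.01,
             cv.folds = 10)
    list(model = m, best = gbm.perf(m, method = "cv"))  # best iteration by CV
  }

  # Score the full sample (respondents and nonrespondents) to obtain the
  # predicted risk used in the second-stage nonresponse model:
  # fit <- fit_outcome_model("penetrative_sa", preds, respondents_women)
  # full_sample$p_pen_sa <- predict(fit$model, newdata = full_sample,
  #                                 n.trees = fit$best, type = "response")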
The resulting models were used to create predicted values for each person in the
full sample of both respondents and nonrespondents on each of our primary outcomes
(e.g., the predicted probability of penetrative sexual assault). By definition, these predicted values are weighted combinations of the variables contained in Table 3.1. The
original variables are no longer associated with the outcomes when controlling for
these particular weighted combinations.
In the second-stage model, we derived the nonresponse weights using a response propensity model. Specifically, the response propensity model was estimated with a binary indicator of survey response (respondent versus nonrespondent) as the dependent variable and several independent variables: (1) the six predicted outcome variables from the initial step, (2) a 40-category indicator of the reporting categories (service × pay grade × gender), (3) form type (long, medium, or short), and (4) all two-way interactions among these predictors. Including the reporting categories and form types in this model ensured that any nonresponse bias identified in this process was removed from both the aggregate DoD estimate and from estimates within the various reporting categories.
The nonresponse model was also estimated using GBM (Ridgeway, 2012) to best capture the relationship between the various predictors and survey response. This approach allows for flexible modeling and has been shown to improve on the performance of logistic regression (McCaffrey, Ridgeway, and Morral, 2004; Ridgeway and McCaffrey, 2007). Unlike in ordinary GBM estimation, in which parameters are added to the model until out-of-training-sample prediction is maximized, here we wished to optimize the model to achieve the best weights. Specifically, we added parameters to the model until the resulting weights maximized the similarity between the respondents and the full sample, assessed using the maximum Kolmogorov–Smirnov statistic among all predictors in the model. Thus, the GBM stopped when the weights achieved the best balance between the cumulative distributions of respondents and nonrespondents on all of the predictor variables in the model.
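To make the stopping criterion concrete, the following is a hedged sketch (not the study's code; all object names are hypothetical) of the balance metric, the largest weighted Kolmogorov–Smirnov statistic across the model's predictors:

  # Largest weighted KS statistic for one predictor: compare the weighted
  # empirical distribution of respondents with that of the full sample.
  weighted_ks <- function(x_resp, w_resp, x_full, w_full) {
    grid <- sort(unique(c(x_resp, x_full)))
    wcdf <- function(x, w) sapply(grid, function(t) sum(w[x <= t]) / sum(w))
    max(abs(wcdf(x_resp, w_resp) - wcdf(x_full, w_full)))
  }

  # Boosting stops at the iteration whose implied weights minimize the
  # worst-case imbalance across all predictors:
  # balance <- max(sapply(preds, function(v)
  #   weighted_ks(resp[[v]], resp$w, full[[v]], full$w_design)))

The twang R package by the same RAND authors implements a comparable "ks.max" stopping rule for propensity-score weighting; whether that package was used here is not stated in the text.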


Table 3.6 shows the balance achieved on these six predicted risk variables using the RMWS and WGRA weights, relative to the design-weighted estimate in the full sample (because the full sample included an oversampling of women, these design-weighted values provide an estimate of population values). The results in Table 3.6 demonstrate that the RMWS weights are more representative of the full-sample estimate. The balance on the other predictors included in the nonresponse model (gender, pay grade, and service) was nearly perfect for both the RMWS and WGRA weights, due to poststratification. More-detailed tables of predicted risk within each reporting category (cross-classified gender, service, and rank) are found in Appendix C, Tables C.1 and C.2.
Table 3.6 illustrates that, for every outcome except attempted sexual assault, the WGRA weights underestimated the level of risk in the population, by as much as 12 percent (in the case of non-penetrative sexual assaults). In contrast, the RMWS weights matched respondents to population risk levels with good accuracy, the greatest discrepancy being for hostile work environment, where the RMWS weights yielded an estimate of population risk that was 1 percent over the true value.
Table 3.6
Comparison of Predicted Risk in Full Sample and Nonresponse-Adjusted Respondents

                                   Design-Weighted      WGRA-Weighted        RMWS-Weighted
                                   Estimate in          Estimate Among       Estimate Among
Predicted risk of:                 Full Sample          Respondents          Respondents
Gender discrimination                  3.15%                3.11%                3.14%
Quid pro quo                           0.39%                0.37%                0.39%
Hostile work environment               8.21%                7.83%                8.30%
Penetrative sexual assault             0.46%                0.43%                0.46%
Non-penetrative sexual assault         0.78%                0.69%                0.77%
Attempted sexual assault               0.02%                0.02%                0.02%

Predictors of Discrepancy Between RMWS and WGRA Weights

The RMWS and WGRA weights yielded different prevalence estimates for our main outcomes. We were interested in identifying which of the factors used in the nonresponse weighting models contributed to these differences. As described above, the RMWS weights were derived using a two-step process to ensure that variables were included only if they were associated with both (a) one or more primary study outcomes and (b) the propensity for nonresponse. Because of this two-step procedure, it is difficult to discern directly from the earlier tables (Tables 3.2 through 3.5) which specific administrative variables are responsible for differences across the two weights.
To identify significant predictors of differences, we estimated the associations between (a) the administrative variables used in weighting and (b) the difference
between the RMWS and WGRA weight for each individual. Specifically, the RMWS and WGRA weights were separately normalized so that the average respondent had a weight equal to 1, and then each individual's WGRA weight was subtracted from his or her RMWS weight. For any subgroup of respondents that represented the same proportion of the population under both weighting systems, the mean of this difference is zero. For example, if the proportions of the sample in the Marine Corps were identical under both weighting systems, the mean difference between these weights within the Marine Corps is zero. In contrast, a subgroup with a mean difference in weights equal to +1 would be one in which the RMWS weights gave those individuals more weight than the WGRA weights; specifically, if the WGRA weights gave individuals in that group the same weight as the overall average respondent (i.e., 1), the RMWS weights gave them twice the weight of the average respondent (1 + 1 = 2). Similarly, a linear slope can be computed for continuous administrative variables to assess the extent to which each factor explains the difference between these weights. Table 3.7 provides the subgroup means or slopes, along with R² statistics from a model of the difference between weights that indicate the proportion of variance attributable to each specific effect. For categorical factors with multiple levels, such as pay grade (6 degrees of freedom) or occupation code (17 degrees of freedom), the table indicates the group effect.
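Before turning to the results, the computation just described can be sketched in a few lines of R (illustrative only; the data frame respondents and the variables w_rmws, w_wgra, service, and age are hypothetical names, not the study's code):

  # Normalize each weight to a mean of 1 across respondents, then difference.
  d <- transform(respondents,
                 w_diff = w_rmws / mean(w_rmws) - w_wgra / mean(w_wgra))

  # Subgroup means (zero when both systems give the group the same share):
  tapply(d$w_diff, d$service, mean)

  # Slope and R-squared for a continuous factor such as age:
  summary(lm(w_diff ~ age, data = d))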
Table 3.7
Association of Participant Characteristics with the Difference Between RMWS and WGRA Weights

Variable                                         Mean/Slope       R²

Demographics
Age in years (as of August 1, 2014)a                0.036         0.0002
Race/ethnicity                                                    0.0003
  Non-Hispanic white                                0.001         0.0000
  Non-Hispanic black                               –0.017         0.0001
  Hispanic                                          0.027         0.0001
  Asian                                            –0.053         0.0001
  Other                                             0.053         0.0001
Marital status                                                    0.0030
  Married                                           0.045         0.0029
  Never married                                    –0.085         0.0025
  Divorced/separated/other                         –0.047         0.0001
Number of dependentsa                              –0.006         0.0003
Education
  High school or less                               0.005         0.0000
  Some college                                     –0.017         0.0000
  Bachelor's degree                                 0.005         0.0000
  Graduate degree                                  –0.006         0.0000

Military Career
Service                                                           0.0000
  Air Force                                         0.000         0.0000
  Army                                              0.000         0.0000
  Navy                                              0.000         0.0000
  Marine Corps                                      0.000         0.0000
Pay grade                                                         0.0040
  E1–E3                                            –0.146         0.0024
  E4                                                0.128         0.0021
  E5–E6                                            –0.001         0.0000
  E7–E9                                             0.004         0.0000
  W1–W5                                            –0.011         0.0000
  O1–O3                                             0.000         0.0000
  O4–O6                                             0.000         0.0000
AFQT percentile (enlisted only)a                   –0.028         0.0000
Years of active military servicea                   0.031         0.0005
Months deployed since 9/11/01a                      0.008         0.0001
Months deployed since 7/1/13a                      –0.016         0.0003
Separated/retired                                   1.328         0.0101
DoD occupational area                                             0.0096
  Infantry, guncrews, and seamanship                0.056         0.0002
  Electronic equipment repairers                   –0.029         0.0000
  Communications and intelligence specialists       0.047         0.0002
  Health care specialists                          –0.247         0.0052
  Other technical and allied specialists           –0.097         0.0002
  Functional support and administration            –0.041         0.0003
  Electrical/mechanical equipment repairers         0.181         0.0036
  Craftsworkers                                     0.067         0.0001
  Service and supply handlers                       0.031         0.0001
  Nonoccupational                                   0.174         0.0004
  Tactical operations officers                     –0.004         0.0000
  Intelligence officers                             0.031         0.0000
  Engineering and maintenance officers             –0.031         0.0000
  Scientists and professionals                     –0.017         0.0000
  Health care officers                             –0.015         0.0000
  Administrators                                   –0.014         0.0000
  Supply, procurement and allied                    0.016         0.0000
  Other officers                                    0.198         0.0003
Unit outside the continental United States         –0.031         0.0002

Military Environment
Percentage male in occupation groupa                0.083         0.0029
Number of people in occupation groupa              –0.022         0.0004
Percentage male in unita                            0.125         0.0067
Number of people in unita                          –0.041         0.0036
Percentage male in installation (zip code)a         0.181         0.0041
Number of people in installation (zip code)a       –0.028         0.0008

Fieldwork Indicators
Change in assigned unit zip since 8/1/2013         –0.043         0.0005
Change in assigned unit zip since 4/1/2014          0.019         0.0001
Change of mailing address since 4/1/2014            0.006         0.0000
No valid mailing address                            0.046         0.0000
No valid email address                              2.576         0.0491
Mailing 1 is postal nondeliverable                 –0.124         0.0011
Marines sent email                                  3.349         0.0262
Percentage of emails bounceda                       3.249         0.0964

a Indicates variables entered as continuous, for which the parameter indicates the expected difference in weights per unit change in the variable.
In general, the fieldwork indicators were the most important factors explaining the difference between the weights. Having a bad email address was positively associated with all primary outcomes for both men and women, and was strongly and negatively associated with the propensity for survey participation. As a result, respondents who had a bad email address (indicated by either having no address on record or having email bounced back as undeliverable) were given substantially more weight by the RMWS weights. The mean difference of about 3.2 implies that the RMWS weights gave those individuals roughly four times the weight of the WGRA weights, if the WGRA weights gave them the same weight as the average respondent.
Several other administrative variables also influenced the difference between weights. Specifically, the RMWS system gave greater weight to members who retired or separated after the sample frame was drawn, members from units or zip codes that were predominantly male, members from occupational categories that were predominantly male, and individuals in specific occupational codes. In all of these cases, RMWS gave additional weight to individuals whose characteristics were underrepresented among survey participants (see Tables 3.2 and 3.3) and who were at elevated risk for sexual assault or sexual harassment (see Tables 3.4 and 3.5). Note that the linear effects summarized in Table 3.7 describe the marginal effects of these factors on the RMWS weights relative to the WGRA weights. However, the underlying models used nonlinear effects, including interactions of these characteristics across the 40 reporting categories, as well as nonlinear effects of the continuous variables. Therefore, Table 3.7, which includes only the simple linear effects of continuous variables, underrepresents the importance of those variables in explaining the difference between the RMWS and WGRA weights.
Statistical Characteristics of the WGRA and RMWS Weights

The WGRA and RMWS weights have a correlation of 0.78; the standard deviation of the RMWS weights is 1.38 times that of the WGRA weights, and both weights were normalized to have a mean of 1. Weighted estimates of the prevalence of primary survey outcomes were higher when using the RMWS weights than when using the WGRA weights. To describe the practical effect of the higher standard deviation on the precision of survey estimates, we computed design effects. Design effects describe the loss in precision of survey estimates that can be attributed to sampling weights. For example, a design effect of 2 means that twice as many respondents are required to achieve the same level of precision as a simple random sample design that does not require weighting. The overall design effect associated with each set of weights was approximated as 1 + CVw², where CVw is the coefficient of variation of the weights (Kish, 1965). Kish's formula is only an approximation and may misrepresent the variance impact of the weights for any particular statistic of interest. However, these approximations are still informative for relative comparisons, particularly when the impact of the weight components on the design effect is small, as in our case.
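In code, Kish's approximation is a one-line computation; a sketch (the weight vector w is hypothetical):

  # Kish's approximation to the design effect of a weight vector w:
  # 1 + CV^2, where CV is the coefficient of variation of the weights.
  kish_deff <- function(w) 1 + (sd(w) / mean(w))^2

  # With weights normalized to a mean of 1, the tabled values follow directly:
  # 1 + 1.07^2 = 2.14 (RMWS nonresponse component, Table 3.8) and
  # 1 + 0.77^2 = 1.59 (WGRA nonresponse component, Table 3.9).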
Tables 3.8 and 3.9 present the design effects associated with the three components of the sampling weights. The design effect associated with the design weights was 1.33 for both sets of weights because the study design is the same: the oversampling of women in the design phase meant we had to sample 33 percent more people to achieve the same level of precision as would have been possible without any oversampling. The design effects of the nonresponse weights derived using the RMWS and WGRA approaches were 2.14 and 1.59, respectively, while the design effect associated with poststratification was negligible for both weights. The overall design effect of the RMWS weights (3.69) was larger than that of the WGRA weights (2.62); in comparison, the design effect for the much smaller 2012 WGRA survey was approximately 2.53. Although the design effect associated with the RMWS weights was larger than that associated with the WGRA weights, the large sample sizes included in the RMWS study ensured ample precision for the outcomes of interest. Design effects for key reporting strata (pay grade, service) for men and women are provided in Appendix C, Tables C.3 and C.4.
Table 3.8
Design Effect of Components of RMWS Weights

                                    Mean    Standard Deviation    Design Effect
Design weights                      1.00          0.57                1.33
Model-based nonresponse weight      1.00          1.07                2.14
Poststratification weight           1.00          0.12                1.01
Overall RMWS weight                 1.00          1.64                3.69

Table 3.9
Design Effect of Components of WGRA Weights

                                    Mean    Standard Deviation    Design Effect
Design weights                      1.00          0.57                1.33
Model-based nonresponse weight      1.00          0.77                1.59
Poststratification weight           1.00          0.14                1.02
Overall WGRA weight                 1.00          1.27                2.62

The RMWS weights are more variable than the WGRA weights because the RMWS approach included many more factors in its adjustment. However, the RMWS weights should also reduce nonresponse bias in the population estimates of sexual assault, sexual harassment, and gender discrimination to a greater extent than the WGRA weights, because the RMWS weights include additional variables that are significantly associated with survey outcomes. In surveys with large sample sizes, however, even small reductions in bias may offer a good trade-off for variance inflation.
Evaluation of the Accuracy of Estimates Using RMWS and WGRA Weights

In Table 3.10, we illustrate the effect of using the two sets of nonresponse weights on
overall accuracy, as assessed by the MSE of prevalence estimates for primary survey
outcomes. Weights can reduce MSE of a specific estimate to the extent that they reduce
bias in that estimate, but can increase MSE to the extent that they increase the standard error of that estimate. Effective weights reduce MSE by eliminating more squared
bias than they add in sampling variance (i.e., the square of the standard error of the
estimate). This is called the bias-variance trade-off. A single set of analytic weights can
cause different amounts of bias reduction and variance inflation for every variable in
the analysis, so the effect of the weights on MSE is outcome-specific. Thus, estimates
of the overall “design effects” of weights (e.g., Tables 3.8 and 3.9) do not tell us much
about the performance of the weights on the key outcomes of the study. Individual
outcomes may show more or less variance inflation than implied by the overall design
effect, and even when there is substantial variance inflation, the weights may still be
associated with reduced MSE on a specific outcome if accompanied by a reduction in
bias.
Table 3.10 contains additional information about the performance of the weights
across all of the primary outcomes of the study. It includes the primary outcomes from
both the RAND form and the prior form of the 2014 survey. For each outcome, the
table provides the prevalence estimate and the standard error for that estimate under
three sets of weights: (1) design weights that account for the intentional oversample
of women, but do not account for nonresponse; (2) WGRA weights that are derived
similarly to the weights DMDC has used in prior WGRA studies and that were used
when reporting estimates derived from the prior form; and (3) RMWS weights that
were used for reporting results from the RAND form of the survey.
Table 3.10 demonstrates that (1) the RMWS and the WGRA nonresponse weights
both result in an upward adjustment to the design-weighted estimate of prevalence for
all survey outcomes, meaning that nonrespondents appear to be at greater risk of experiencing sexual crimes and violations than respondents; (2) the WGRA weights, which
include an important but small set of predictors, produce lower prevalence estimates
than the RMWS weights across all outcomes except for gender discrimination among
men, where the two are almost equal; and (3) the standard errors associated with the
RMWS estimates are larger than for WGRA estimates. Both sets of weights imply that
survey nonresponse introduces a net downward bias in estimates of prevalence. However, the inclusion of additional variables in the development of RMWS nonresponse
weights yields higher prevalence estimates than the reduced set of factors included in
the WGRA weights.

Table 3.10
Evaluation of Survey Estimates with RMWS Weights Compared to WGRA Weights

                              Design-Weighted     WGRA-Weighted       RMWS-Weighted      RMWS Weights Have
                              Estimatesa          Estimatesb          Estimatesc         Lower MSE If True
                              Prevalence  Std.    Prevalence  Std.    Prevalence  Std.   Prevalence Is
Outcome                       (%)         Error   (%)         Error   (%)         Error  Greater [Less] Than:

Overall
Prior Form
  Unwanted sexual contact        1.11     0.06       1.43     0.11       1.68     0.17        1.59
  Sexual harassment              5.51     0.12       6.00     0.20       6.23     0.24        6.16
RAND Form
  Sexual assault                 1.03     0.03       1.30     0.05       1.54     0.08        1.43
  Sexual harassment              6.62     0.10       8.06     0.17       8.85     0.23        8.47
  Gender discrimination          3.19     0.06       3.23     0.09       3.33     0.10        3.29

Men
Prior Form
  Unwanted sexual contact        0.63     0.06       0.93     0.12       1.16     0.19        1.08
  Sexual harassment              2.73     0.13       3.50     0.22       3.64     0.27        3.65
RAND Form
  Sexual assault                 0.47     0.03       0.74     0.06       0.95     0.09        0.85
  Sexual harassment              4.24     0.11       5.92     0.20       6.61     0.27        6.29
  Gender discrimination          1.50     0.06       1.74     0.10       1.73     0.11       [1.58]

Women
Prior Form
  Unwanted sexual contact        3.37     0.15       4.31     0.22       4.61     0.26        4.48
  Sexual harassment             18.57     0.33      20.23     0.40      20.94     0.44       20.60
RAND Form
  Sexual assault                 3.60     0.08       4.51     0.11       4.87     0.14        4.70
  Sexual harassment             17.82     0.22      20.35     0.28      21.57     0.31       20.97
  Gender discrimination         11.11     0.18      11.77     0.21      12.40     0.24       12.09

a The survey estimate was adjusted for oversampling of women, with no adjustment for nonresponse.
b The prevalence estimate was computed using the WGRA nonresponse weights, in addition to design weights.
c The prevalence estimate was computed using the RMWS nonresponse weights, in addition to design weights.

The differences between the various columns within Table 3.10 give direct information about the possible bias-variance trade-off for WGRA and RMWS weights. The
difference between the standard errors indicates the relative sampling error associated
with the two sets of weights. The difference between prevalence estimates provides
information about the possible bias reduction associated with the different weights.
For example, when two weights yield the same prevalence estimates, they necessarily
offer the same bias reduction. In such a case, the weight that yields the smaller standard error provides the lower MSE, or greatest accuracy. When the estimates diverge,
however, the estimate closer to the true population value offers greater bias reduction.
However, that bias reduction needs to be considered in light of the standard error of
the estimates to determine if it yields better accuracy overall, i.e., a lower MSE.
A direct calculation of MSE requires knowing the true population prevalence of
these outcomes. Rather than estimating the MSE under specific assumptions about the
true population prevalences for all outcomes, the final column of Table 3.10 presents
the specific population prevalence for which the RMWS and WGRA weights would
have the same MSE. This column integrates information about the difference between
the two prevalence estimates and their relative standard errors into a single number.3
Whenever the true population prevalence is on the same side of that number as the
RMWS weighted estimate, the RMWS weights provide lower MSE. Conversely, when
the true population value is on the other side, the WGRA weights provide lower MSE.
Across the key outcomes, the population prevalence at which both weights would yield the same MSE typically falls between the WGRA and RMWS estimates. The two exceptions are sexual harassment of men (assessed on the prior form) and gender discrimination of men (assessed on the RAND form). For gender discrimination of men, the RMWS estimate yields higher accuracy only if the true population prevalence is lower than 1.58 percent. This appears unlikely, because both estimates are greater than that value. For this measure, the two weights produce almost identical estimates, and thus the slightly lower standard error achieved by the WGRA weights (0.01 percentage point lower) makes it likely to be the more accurate measure, although the difference in standard errors is too small to be of any practical significance.

3 The population prevalence at which the two weights have the same MSE was calculated as

    μ_equal_MSE = (Xr² − Xw² + Vdiff) / (2(Xr − Xw)),

where Xr and Xw are the prevalence estimates using the RMWS and WGRA weights, respectively, and Vdiff is the difference in the squared standard errors of the estimates attributable to weighting. The computation of Vdiff for these binary variables adjusts for the dependence of the standard error on the mean under the binomial distribution. Specifically, we have adjusted for this dependence by retaining the change in variance from the WGRA to the RMWS weights that would have occurred if the WGRA weights had resulted in the estimate obtained by the RMWS weights. The formula for μ_equal_MSE reduces to the mean of the two estimates when Vdiff is zero; i.e., when the two weights have the same variance inflation, the estimate closer to the true value has the lower MSE. When the two weights yield nearly the same prevalence estimates, the formula takes on arbitrarily high or low values, indicating that the estimate with the lower standard error is preferred for all plausible values of the true population prevalence.
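A hedged sketch of that calculation follows (our code, not the report's). Vdiff is simplified here to the raw difference in squared standard errors, whereas the report adjusts it for the binomial dependence of variance on the mean, so the sketch reproduces Table 3.10 only approximately:

  # Equal-MSE crossover prevalence from footnote 3 (simplified V_diff).
  mu_equal_mse <- function(x_r, x_w, se_r, se_w) {
    v_diff <- se_r^2 - se_w^2
    (x_r^2 - x_w^2 + v_diff) / (2 * (x_r - x_w))
  }

  # Overall sexual assault on the RAND form (values from Table 3.10):
  mu_equal_mse(x_r = 1.54, x_w = 1.30, se_r = 0.08, se_w = 0.05)
  # => about 1.43, matching the tabled crossover value.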
For all of the other variables, identifying which weights yield lower MSE depends
on whether the variables added to the RMWS nonresponse model (relative to the
WGRA model) resulted in (a) an exact correction of the true nonresponse bias; (b) an
undercorrection for the true nonresponse bias, meaning that the true rate of sexual
assault, for instance, falls above the RMWS estimate; or (c) an overcorrection for the
true nonresponse bias, meaning the true value falls below the RMWS estimate. The
RMWS weighting process identified substantial differences in risk between nonrespondents and respondents across the outcomes (as indicated by such characteristics as
age, occupation, and the gender distribution of the unit). Because the RMWS weights
achieved better balance between respondents and the full population on these risk
indicators than the WGRA weights, it may be that the true prevalence on these outcomes is very close to the RMWS-weighted estimates. If one assumes that the RMWS
weights provide an estimate very close to the true value, the RMWS weights yield
more accurate results in all cases except sexual harassment of men (assessed on the
prior form) and gender discrimination of men (assessed on the RAND form). In fact,
for most variables, the RMWS weights appear to yield more-accurate estimates if one
assumes that the nonresponse bias remaining with the RMWS weights is only slightly
better than with the WGRA weights. For instance, the true value at which sexual
assaults measured on the RAND form are equally accurately measured (a true prevalence of 1.43) falls less than 1 percent above the midpoint between the WGRA and
RMWS weighted estimates (1.42). Therefore, if the RMWS weights yield estimates
even moderately closer to the true value than the WGRA weights, we can conclude
that the RMWS weights result in greater accuracy.
As noted elsewhere in this report (Chapters Two, Four, and Six), there are multiple indications that prevalence estimates using the RMWS weights may still underestimate the true prevalences. If the RMWS estimates are, in fact, underestimates of
the true values, the RMWS weights yield more-accurate estimates than the WGRA
estimates for all outcomes other than gender discrimination among men.
Finally, it is worth noting some limitations of selecting weights based on an MSE criterion, or of designing weights to minimize MSE. The MSE metric treats errors due to biased estimation and errors due to sampling variability as equivalent. However, these are not always considered equal threats to the validity of a study. Errors caused by variance inflation can be easily assessed and are indicated by the confidence intervals presented alongside every prevalence estimate; the magnitude of this type of error is fully reflected in the population inferences we draw from the results (i.e., in all tests of statistical significance). In contrast, error due to bias is invisible in the analyses and represents a direct threat to the validity of all study conclusions, regardless of statistical significance. Thus, our analysis arguing that the RMWS weights offer improved MSE relative to the WGRA weights understates the full advantage of the RMWS weights if they result in lower bias, as appears likely. One might reasonably prefer the lower-bias estimate even if it were substantially less accurate, so long as the confidence intervals around estimates were sufficiently small to draw useful population inferences.
Conclusion
In estimating prevalence rates with survey data, individuals with the characteristic of interest (e.g., experienced past-year sexual assault) may be more or less likely to participate in the survey than people without it. The sample prevalence, based on respondents' data, is then a biased estimator of the true prevalence. The standard statistical approach to adjusting for this potential bias is nonresponse weighting. The nonresponse weighting approach developed for the RMWS employed novel methods needed to model a rich and large set of individual characteristics drawn from the demographic, career, and military environment information available in DMDC administrative data, as well as indicators from the survey fieldwork. This new approach has allowed us to account for differences between respondents and nonrespondents on a much wider range of known characteristics than has previously been possible, without unacceptably elevating the variance of survey estimates.
In analyses comparing sample weights generated using the RMWS approach to
those of the earlier WGRA approach, we found that the new weights reduced differences between the analytic sample and the population on a wide range of factors associated with both nonresponse and key outcomes. These differences between the respondents and the population were not satisfactorily addressed using the earlier methods.
Moreover, nearly all of these factors drove bias in the same direction—specifically,
service members with characteristics associated with a higher risk of sexual assault or
harassment also had the lowest likelihood of responding to the survey. Therefore, the
upward adjustment of estimates due to the RMWS weights, compared with the estimates produced by applying the WGRA weights, reflects a reduction in nonresponse
bias and provides more accurate estimates of prevalence.
Examination of the factors leading to the greatest differences between the RMWS
and WGRA weights revealed several fieldwork metadata factors that contributed substantially to explaining risk and survey nonresponse, including having no valid email
address, having email addresses that bounced, and having an email address known
only to the Marine Corps. It is obvious why the validity of a service member’s email
address would affect the likelihood of participation in the survey. Much less clear, and
worthy of further investigation, is the question of why those at higher risk of sexual
assault and sexual harassment would be disproportionately likely to have poor email
address information in the DMDC data set.


Another important predictor of the differences between the WGRA and RMWS weights was whether the service member had recently left the military, as those who recently separated appear to be at substantially elevated risk of having experienced past-year sexual assault, sexual harassment, or gender discrimination. This finding has important implications for how the survey is conducted in the future. If those who are sexually assaulted or harassed are more likely to leave the military, failure to count past-year sexual assaults among those who left the military by the time the survey is administered will bias prevalence downward. Therefore, our finding suggests the importance of estimating rates of sexual assault and harassment among all who served in the military in the past year, even those who separated prior to survey fielding. At a minimum, those who separated from the military after the sample was drawn and who start the survey should not be counted as ineligible to participate when their service overlapped with the period for which prevalence is estimated. This, however, requires a change from past practice, as in earlier WGRA administrations all such recent separations were excluded from the sample of eligible respondents on the basis of the first survey question.
A common disadvantage of weighting is an increase in the variance of the estimates. We found that the RMWS weights led to only a modest increase in the sampling variability of the prevalence estimates: whereas the overall design effect associated with the traditional WGRA weights was 2.62, the RMWS weights produced a design effect about 40 percent larger (3.69). An assessment of the trade-off between bias and variance for primary survey outcomes suggested that, under reasonable assumptions, the increases in variance were more than offset by reductions in bias for almost every outcome examined (the one exception being gender discrimination against men). Finally, given that large sample sizes reduce variance but not bias, even small bias reductions are a good trade-off in surveys with samples as large as ours. We conclude, therefore, that there are several reasons to believe that the nonresponse weighting approach employed by the RMWS corrected for important sources of nonresponse bias without unacceptably driving up variability in prevalence estimates.

CHAPTER FOUR

Investigation of Total Survey Error Using Official Records of
Reported Sexual Assaults
Terry L. Schell and Andrew R. Morral

Most of the efforts to assess or quantify error in RAND's estimates of the prevalence of sexual assault have focused on specific sources of potential error. Our nonresponse weights, and the various analyses of nonresponse, are focused on potential nonresponse bias; analysis of the instrument's content, wording, and complexity is focused on minimizing potential classification error; efforts to reduce telescoping and maintain confidentiality are designed to reduce response biases; and the large sample and attention to the variance of the weights are designed to minimize sampling error, while the inclusion of confidence intervals on our population estimates is intended to quantify sampling error.
What we would most like to know, however, is the net effect of all sources of error, including potential coverage errors in our sample frame or processing errors in our handling of collected data. Do these various sources of error counteract one another, or are the errors compounded? In some cases, it may be possible to quantify total survey error regardless of the source. For example, a survey designed to predict population voting behavior can be compared with the subsequent voting behavior. The challenge in empirically estimating total survey error is finding alignment between a known value in the population of interest and a survey-based estimate of the same value. The current study was designed to estimate the prevalence of sexual assault in the past year within the military, and if the true prevalence over that period were known, there would have been no reason to conduct the survey.
The military does collect official statistics on the subset of sexual assaults that
are reported (in either a restricted or unrestricted manner). OSD’s SAPRO maintains
records of these reports. While the number of official reports is just a subset of all
sexual assaults, it is a number that is closely tracked in SAPRO annual reports. Moreover, the number of such reports provides a real-world benchmark against which we
can compare survey estimates.
To facilitate this comparison, the RAND form included questions that were
designed to align as closely as possible to official records of sexual assault. Specifically,
respondents who were classified as having experienced a sexual assault in the past 12
months were asked:


Since [Date one year prior to survey administration], did you initial and sign
a form labeled VICTIM REPORTING PREFERENCE STATEMENT (DD
Form 2910 or CG Form 6095)? This form allows you to decide whether to make a
restricted or unrestricted report of sexual assault. A Sexual Assault Response Coordinator (SARC) or Victim Advocate (VA) would have assisted you with completing this form. To see a version of this form, click here. [The final phrase served as a
hyperlink to an image of DD Form 2910; Respondents were given three response
options: “yes,” “no,” and “not sure.”]

DD Form 2910 serves as the basis for official records of sexual assaults. That is, to be included in the administrative records SAPRO uses to tabulate the number of official reports, service members who were sexually assaulted had to have completed this form. Therefore, to the extent that (a) the weighted sample was representative of the full population in terms of risk for sexual assault, (b) respondents were correctly classified as experiencing a sexual assault in the past year, and (c) they answered the questions in an unbiased manner, a survey-based estimate of the number of signed DD Form 2910s should closely correspond to the recorded number of signed forms.
To align our survey estimates with numbers from the administrative data, we do not use the overall number of reported sexual assaults from the SAPRO report for FY 2014, but instead look at the subset of those reports that match the scope and time frame of the survey estimates. Specifically, we requested from SAPRO the number of official reports in which (a) the victim was in the active component of a DoD service, (b) the incident being reported occurred in a one-year period (FY 2014),1 and (c) the report itself was signed in the same one-year period. SAPRO identified 2,997 reports that met these criteria.
As discussed in Volume 2 of this series, the RMWS survey estimated that 11.2 percent of those who experienced a sexual assault in the past year also indicated that they signed/initialed a DD Form 2910 in the past year. In addition, 10.7 percent indicated that they were "not sure" if they signed a DD Form 2910. Table 4.1 presents the population counts of individuals that correspond to those percentages.

Table 4.1
Comparison of Survey-Estimated Counts of Reported Sexual Assaults to Official Reports of Sexual Assault

Type of Count                                              Total      Men    Women
Official record of DD Form 2910                            2,997      640    2,357
Survey estimates of DD Form 2910
  "Yes" survey response (Method 1 estimate)                2,177      374    1,803
  "Not sure" survey response                               2,165    1,150    1,015
  "Yes," distributing "not sure" proportionately
    to yes/noa (Method 2 estimate)                         2,435      420    2,015

NOTE: Official records of signed DD Form 2910 for FY 2014 provided by SAPRO.
a Estimates assume that the proportion of respondents indicating "not sure" who actually signed the form is the same as the proportion observed among those who answered with a definite response of "yes" or "no."

Assessing the correspondence between survey estimates and official records of reported sexual assaults requires making assumptions about those respondents who answered "not sure" when asked about signing a DD Form 2910 and those who experienced a sexual assault but did not answer this question at all. In Table 4.1, we present two assumptions about those respondents:
• Method 1. One could estimate the total number of official reports based just on the proportion of the entire sample who answered “yes” when asked if they signed a DD Form 2910. The counts in the row labeled “Yes” survey response assume that none of the respondents who answered “not sure” actually signed DD Form 2910. This estimate is computed from the percentage of nonmissing responses to this question among those who experienced a sexual assault in the past 12 months; thus, the approach assumes the true distribution across yes/no/not sure among victims who failed to answer this question is the same as the distribution among those who answered. However, this method also assumes that none of those who said they were “not sure” if they completed the form actually did complete it. This does not appear plausible: there are many reasons why victims who signed DoD paperwork shortly after a traumatic experience may be unsure as to whether they signed a particular form. We include the Method 1 estimate here because the proportion of respondents who indicated “not sure” is substantial and we want to be explicit about the importance of assumptions about the true rate of reporting sexual assaults among that group.
• Method 2. In this approach, we estimated the number of official reports based on the number of survey respondents who said “yes” to the DD Form 2910 question, plus a portion of those who said “not sure.” Specifically, we assumed that among those who experienced a sexual assault in the past 12 months but who were either missing the item or who indicated “not sure,” the proportion who actually did sign the form is the same as the proportion of victims who gave a definite response of “yes” among all those who answered either “yes” or “no” to the question.2 That is, “not sure” responses are treated in the same manner as missing data in Method 1. This is a more plausible assumption about the true rate of reporting among the “not sure” respondents, and it is a relatively standard approach to handling ambiguous survey responses, but it may still be inaccurate.

Table 4.1
Comparison of Survey-Estimated Counts of Reported Sexual Assaults to Official Reports of Sexual Assault

Type of Count                                                      Total      Men    Women
Official record of DD Form 2910                                    2,997      640    2,357
Survey estimates of DD Form 2910
  “Yes” survey response (Method 1 estimate)                        2,177      374    1,803
  “Not sure” survey response                                       2,165    1,150    1,015
  “Yes,” distributing “not sure” proportionately to Yes/Noa
  (Method 2 estimate)                                              2,435      420    2,015

NOTE: Official records of signed DD Form 2910 for FY 2014 provided by SAPRO.
a Estimates assume that the proportion of respondents indicating “not sure” who actually signed the form is the same as the proportion observed among those who answered with a definite response of “yes” or “no.”

1 The period assessed by the survey was always one year prior to the date of administration (for both the assessment of any sexual assault itself and the reporting of an assault). Thus, the dates used in the survey do not exactly correspond to FY 2014, but both estimates are for a one-year period and the periods almost entirely overlap.
2 More specifically, this method is equivalent to mean imputation of both item-missing and “not sure” responses, conditioning those imputed values on both (a) experiencing a sexual assault in the past 12 months and (b) respondent gender.
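To make the two estimators concrete, the following minimal Python sketch computes both from a set of response counts. The function, its argument names, and the single scalar weight are our illustrative simplifications, not the study’s actual code; the real RMWS estimates apply respondent-level survey weights and condition on gender.

# Minimal sketch of the Method 1 and Method 2 estimators of the number of
# survey-identified victims who signed a DD Form 2910. Illustrative only.

def estimate_reports(n_yes, n_no, n_not_sure, n_missing, weight=1.0):
    """Return (method1, method2) estimates of the number of signed forms."""
    n_answered = n_yes + n_no + n_not_sure
    n_victims = n_answered + n_missing

    # Method 1: assume no "not sure" respondent signed the form, and that
    # item-missing victims are distributed like those who answered.
    method1 = weight * n_victims * (n_yes / n_answered)

    # Method 2: mean-impute "not sure" and missing responses at the "yes"
    # rate observed among definite yes/no answers.
    yes_rate_definite = n_yes / (n_yes + n_no)
    method2 = weight * (n_yes + (n_not_sure + n_missing) * yes_rate_definite)

    return method1, method2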
Using either method, the estimated number of reported sexual assaults in the population extrapolated from the survey is lower than the true number of reported sexual assaults. For our primary estimate (Method 2), the survey estimate is 81 percent of the
true population value (86 percent among women, 66 percent among men). This suggests that the survey is undercounting the number of individuals who experienced a
sexual assault in the past year and reported it.
It may be, however, that victims who said they were “not sure” whether they completed the form actually completed it at a higher rate than Method 2 assumes—that is, at a rate higher than the proportion answering “yes” among all those who gave a definitive response. For example, if one assumes that 33 percent of those who responded “not sure” actually filed an official report, we would get a survey estimate that is almost identical to the true value, suggesting no survey error.
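Using the total-column counts from Table 4.1, that sensitivity assumption is easy to express. The sketch below ignores item-missing responses and survey weighting, so it only approximates the full calculation described in the text:

# Sensitivity: assume a chosen fraction of "not sure" respondents actually
# filed an official report (total-column counts from Table 4.1).
n_yes, n_not_sure, official = 2_177, 2_165, 2_997
assumed_rate = 0.33
estimate = n_yes + assumed_rate * n_not_sure
print(f"{estimate:,.0f} vs. official {official:,}")
# ~2,891; imputing the item-missing victims similarly moves the total
# closer to the official count of 2,997.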
Overall, we interpreted these analyses as demonstrating low levels of total survey
error for the portion of sexual assaults in the past year that have been officially reported.
Under relatively common assumptions, the survey estimates for the DoD population
appear well aligned with, but smaller than, the true values. The precise causes of the
identified total survey error are not known. This error could be a result of errors in
our assumptions about the true reporting behavior of individuals who responded “not
sure,” errors that apply only to the current analyses. Alternatively, the error could originate in some features of the broader survey that would affect our primary survey
estimates. Possible sources of error include some combination of a measure of sexual
assault that misses some true victims, undersampling of members at risk, nonresponse
biases that are not fully mitigated by the RMWS weights, random/sampling error,
or data processing errors. Although we cannot precisely identify the source of survey
error, we can investigate two of these candidates, random sampling error and coverage
error (i.e., undersampling some groups at risk).
Sampling error alone appears unlikely to fully account for the discrepancy. The
95-percent margin of error for these estimated total population counts is 370 individuals (251 among women, 365 among men). Thus, the true value is slightly greater than
the upper confidence limit on the “Method 2” survey estimate for both the estimate
among women and overall. (However, among men the true value is within the confidence interval of the survey estimate.) It is unlikely that the survey estimates (total and
for women) would be this low if the only source of error were from random sampling
variability.
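This comparison can be re-derived directly from Table 4.1 and the margins of error quoted above; the following sketch simply re-checks that arithmetic:

# Is the official count inside the 95% interval around the Method 2 estimate?
cases = {
    "total": {"method2": 2_435, "margin": 370, "official": 2_997},
    "women": {"method2": 2_015, "margin": 251, "official": 2_357},
    "men":   {"method2": 420,   "margin": 365, "official": 640},
}
for group, c in cases.items():
    upper = c["method2"] + c["margin"]
    print(f"{group}: upper limit {upper}; "
          f"official within interval: {c['official'] <= upper}")
# total and women: the official count exceeds the upper limit;
# men: the official count falls within the interval.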
Similarly, we know that the sample frame of the survey excluded some individuals
who were included within the population covered by the SAPRO records of reported
sexual assaults, suggesting some coverage error in the survey sample frame. For a range of logistical and policy reasons discussed in Volume 1, the survey excluded (1) members
who entered the service fewer than six months before the survey field period; (2) members who left the military early in the year, prior to survey sampling; and (3) general
or flag officers (O7 and higher). In addition, the official statistics of reported sexual
assaults may include more than one case filed by a single individual, whereas our survey
estimates assume just one DD Form 2910 per person making an official report.
As will be discussed in Chapter Six, the available evidence suggests the sample
frame error is in the direction of undercounting members who were sexually assaulted,
and that the primary source of this undercount comes from excluding individuals who
separated from the military before our sample was drawn. We estimate that this coverage error could result in an undercount of 1,000 to 3,000 people who were sexually assaulted in the past year, of whom perhaps 100 to 300 filed a sexual assault report (see
Chapter Six). Therefore, coverage errors in the sample frame could account for much
of the total survey error found using the Method 2 estimates above, although it cannot
account for all of the identified error. However, the identified total survey error is small
enough that the estimated sample frame coverage error, along with the known random
sampling error, could explain the estimated total survey error without positing additional biases due to measurement, nonresponse adjustment, or data processing.
A substantial limitation of this analysis of total survey error is that it does not
directly estimate error for the primary outcomes of interest. While the overall survey
was designed to estimate the population prevalence of sexual assault, sexual harassment, and gender discrimination, we were only able to investigate total survey error for
the small proportion of sexual assaults that were officially reported. It is possible that
the low levels of survey error found on reported sexual assaults do not generalize to the
survey error that occurs on the much larger number of unreported sexual assaults. For
example, those victims who avoided telling military authorities about their assault may
also avoid revealing the assault on a DoD survey. Alternatively, such individuals could
conceivably be more interested than other service members in completing the survey
and be overrepresented in the survey estimates. Our investigation of total survey error
ruled out substantial error in one class of sexual assaults, but this investigation does not
definitively rule out the possibility of substantial error in our overall study estimates of
sexual assault, sexual harassment, or gender discrimination.

CHAPTER FIVE

Performance of the Sexual Assault Survey Module
Lisa H. Jaycox, Andrew R. Morral, Terry L. Schell, and Coreen Farris

The construction of a new survey to assess incidence and prevalence of sexual assault
raises questions about how the survey items function and whether the classification
scheme is valid. A full description of the way in which the sexual assault questions map
to the UCMJ can be found in Volume 1 (specifically, Appendix B) of this series. But
even if the questions themselves are well aligned with the law, it is possible that the way
survey items are used to classify individuals could result in misclassification of those
who did or did not experience a sexual assault. In this chapter, we explain how the classification scheme works and how respondents answered each of the classification items.
This information shows clearly which individuals were classified as having experienced
a sexual assault and why others were ruled out.
RAND sought to pattern its survey questions on criteria in military law defining
sexual assaults (Article 120 of the UCMJ). The law, however, is complex. Multiple criteria must be met to establish a crime, and these criteria specify events that themselves
require complex definitions. To simplify for purposes of a self-administered survey,
complex concepts are broken down into a series of sequential questions, and a skip logic
is used to present relevant follow-up questions to determine whether unwanted events
(the six sexual assault screening items) meet the intentionality and offender behavior
criteria that define a UCMJ sexual assault in the past year.
Intentionality
For all assaults other than rape, the UCMJ requires that the unwanted contact must
not have occurred accidentally or for some legitimate purpose, but instead must either
have been for a sexual purpose or to abuse, degrade, harass, or humiliate the victim.
The RAND form includes questions about offender intent from the perspective of the
victim, so as to exclude events that were unwanted, but were, for instance, accidental
contacts or contacts that were legitimate. There are many legitimate reasons that someone may touch a service member in a private area of his or her body that should not be
counted as sexual assault. Unwanted contact with private areas can occur with some
frequency during combat training, working in close quarters, uniform adjustments, required medical exams, and the like. Although these contacts may be unwanted—
and may occur without consent—they are not sexual assaults in either military or civilian jurisdictions if they are accidental or legitimate.
Indeed, because intent is such a core part of the law and of common-sense understandings of sexual assault, it figured prominently in the 2012 WGRA unwanted
sexual contact question, which tried to exclude legal contacts where there was no sexual
intent, in two ways: (1) It asked only about “intentional sexual contacts” and (2) it
described the events covered by the question in a way that implied they occurred for a
sexual purpose, using descriptions such as “sexual intercourse,” “oral sex,” or “anal sex.”
These criteria were part of a single complex question that included additional conditions as well. The RAND form separates questions concerning intent, so as to focus
respondents on one condition at a time. In addition, we expanded the intent criteria to
include events that were not considered “sexual” and yet are classified as sexual assaults
in the UCMJ. An example of this might be contact with genitals as part of a hazing
ritual that is not viewed as a sexual experience, but rather as one intended to abuse,
humiliate, or degrade.
Thus, the RAND form assesses intentionality with two questions asked of respondents who indicate an unwanted event other than penetration by a penis (under the
UCMJ, penetration by a penis does not require that a sexual or abusive intent be
established):
1.	 The first question asks if the unwanted experience was “abusive or humiliating,
or intended to be abusive or humiliating?” Respondents who indicated that
the unwanted event was abusive or humiliating, or was intended as such, met
our “intent” criterion. While this is slightly different from the criteria laid out in 10 U.S. Code §920, Article 120(g)(1)(B), it is far easier for the respondent
to answer. Moreover, it expands the domain of sexual assaults counted in our
instrument to include hazing, abuse, and harassment incidents that some victims may not consider “sexual.”
2.	 Only if respondents answered “no” to the first question were they asked “Do you
believe the person did it for a sexual reason? For example, they did it because
they were sexually aroused or to get sexually aroused.” This is effectively equivalent to the approach used in the WGRA assessment of unwanted sexual contact
to establish intent, though it is split off as a separate question rather than bundled with the other clauses making up the unwanted sexual contact question.
Offender Behavior/Lack of Consent
The second criterion that all sexual assaults must satisfy requires that the offender
used one of several mechanisms specified in the UCMJ to compel the act or contact (see Volume 1, Appendix B, for exact correspondence with the UCMJ). To assess this,
respondents were asked to indicate which of the following occurred during the incident:
1.	 They [the assailant(s)] did not stop even when you told them or showed them
that you were unwilling.
2.	 They used physical force to make you comply. For example, they grabbed your
arm or used their body weight to hold you down.
3.	 They physically injured you.
4.	 They threatened to physically hurt you (or someone else).
5.	 They threatened you (or someone else) in some other way. For example, by using
their position of authority, by spreading lies about you, or by getting you in
trouble with authorities.
6.	 They did it when you were passed out, asleep, or unconscious.
7.	 They did it when you were so drunk, high, or drugged that you could not understand what was happening or could not show them that you were unwilling.
8.	 They tricked you into thinking that they were someone else or that they were
allowed to do it for a professional purpose (like a person pretending to be a
doctor).
If a respondent indicated that none of the offender behaviors listed above was
present in the incident, he or she was asked about three additional situations that might
describe the incident. The first two of these situations are likely to meet the criteria of a
crime under Article 120, but are less clear-cut than the criteria embodied by questions
1–8 above: (1) They made you so afraid that you froze and could not tell them or show
them that you were unwilling; and (2) They did it after you had consumed so much
alcohol that the next day you could not remember what happened.
The final item (“It happened without your consent”) was delivered to respondents
who did not endorse any of the eight primary types of offender coercive behavior. It
was placed last in the survey to catch any instances of nonconsent that were not captured in the earlier items. It was explicitly included to capture instances where an event
happened so suddenly that explicit refusals were not possible and threats or force were
not used. This could occur, for example, with a sudden groping of genitals that would
not be well described by the offender behaviors listed earlier. The UCMJ also includes
such a blanket “without consent” criterion to account for the same types of crimes.
Confirming Past-Year Time Frame
Finally, toward the end of the sexual assault section on the survey, respondents are
asked a few questions to verify that the assault they just described occurred in the past
year, and if not, when the most recent event of this type occurred (if there had been other such events). Thus, there are several steps to the classification process (a simplified code sketch of this logic appears after the list below). Every respondent is asked all six past-year sexual assault screener items in order from most-serious crime to least-serious crime, as indicated by the UCMJ.
•	 For the first item, penile penetration, a “yes” answer is followed up with just
those questions establishing offender behaviors defined in the UCMJ. If they
indicate that any of the qualifying offender behaviors occurred, they are classified
as having experienced a sexual assault.
•	 For the next five screener items, a “yes” answer is followed by up to three steps if
the respondent has not yet been classified as experiencing a sexual assault on one
of the prior screening items.
– Respondents are asked if the offender intent was abusive, humiliating, or intended to be abusive or humiliating.
  ◦ Only if respondents answer “no” to the question on abusive intent are they asked if the intent was sexual.
– Respondents who answered “yes” to either intent question are then asked which, if any, coercive offender behaviors occurred. If any of the specified behaviors occurred, the respondent is classified as having experienced a sexual assault.
•	 Later in the survey, respondents are asked to confirm that the event they identified happened in the past 12 months. If they confirm this, or that some other such
event did occur in the past 12 months, they are classified as having experienced a
sexual assault in the past year.
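The following Python sketch renders this classification flow in simplified form. The ScreenerResponse structure, its field names, and the classify function are our own illustrative constructs, not the survey’s actual implementation; the real instrument uses the exact question wording shown in Table 5.1 and records which specific coercive behaviors occurred.

# Simplified sketch of the past-year sexual assault classification logic.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ScreenerResponse:
    endorsed: bool                # "yes" to the unwanted-event screener
    penile_penetration: bool      # item 1: intent questions are skipped
    abusive_or_humiliating: bool = False
    sexual_reason: bool = False   # asked only if the answer above is "no"
    coercive_behaviors: List[str] = field(default_factory=list)

def classify(responses: List[ScreenerResponse], confirmed_past_year: bool) -> bool:
    """Classify one respondent; items are ordered most- to least-serious."""
    for item in responses:
        if not item.endorsed:
            continue              # screener not endorsed: no follow-ups
        if not item.penile_penetration:
            # Intent criterion (not required for penile penetration).
            if not (item.abusive_or_humiliating or item.sexual_reason):
                continue          # event excluded; move to next screener
        if item.coercive_behaviors:
            # Classified; later items receive no follow-up questions, and a
            # later question confirms the event occurred in the past year.
            return confirmed_past_year
    return False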
To illustrate how questions were presented, consider two scenarios (the second scenario is also rendered in code after the list):
1.	 Person A did not experience any unwanted contacts of these types in the past
year, and thus answered “no” to each of the six unwanted event sexual assault
screener items. He was not presented with any questions other than those six
screener items. He was not classified as having experienced a sexual assault in
the prior year.
2.	 Person B experienced unwanted kissing with tongue penetration and, in a separate event, unwanted touching of private areas in a hazing-type event. She
answered “yes” to the second sexual assault screener item (i.e., “Since X date,
did you have any unwanted experiences in which someone put any object or any
body part other than a penis into your vagina, anus, or mouth? The body part
could include a finger, tongue, or testicles”), but then answered “no” to the two
questions inquiring about the offenders’ abusive, humiliating, or sexual intent,
because she believed it to be the result of joking among a group of coworkers.
By answering “no” to both items, she was skipped to the next screener item. She
indicated a “no” for being made to penetrate another person, and was skipped to
the next screener item. She indicated here that she experienced unwanted touching of her private areas. In this case, she did experience it as abusive or humiliating, and answered “yes” to this item. She then received questions related to nonconsent or coercion, and indicated there was use of physical force, injury, and
that the event continued when she showed she was unwilling. Accordingly, this
respondent is classified as having experienced a non-penetrative sexual assault.
She was skipped to the last two screener items, but was not asked follow-up
questions regardless of her answers.
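Continuing the sketch above, Person B’s path through the logic might be encoded as follows (again, the structures and values are illustrative only):

# Person B from scenario 2: six screener items in survey order.
person_b = [
    ScreenerResponse(endorsed=False, penile_penetration=True),   # Step 1
    ScreenerResponse(endorsed=True, penile_penetration=False),   # Step 2: both
    #   intent questions answered "no," so no coercion follow-ups were asked
    ScreenerResponse(endorsed=False, penile_penetration=False),  # Step 3
    ScreenerResponse(endorsed=True, penile_penetration=False,    # Step 4
                     abusive_or_humiliating=True,
                     coercive_behaviors=["physical force", "injury",
                                         "continued despite unwillingness"]),
    # Steps 5-6: once classified, her answers no longer affect classification
    # (no follow-ups asked); shown here as not endorsed for simplicity.
    ScreenerResponse(endorsed=False, penile_penetration=False),  # Step 5
    ScreenerResponse(endorsed=False, penile_penetration=False),  # Step 6
]
print(classify(person_b, confirmed_past_year=True))  # True (non-penetrative)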
Table 5.1 shows the results of classification of past-year sexual assault for female
and male respondents for each of the six screener items.
To aid in understanding Table 5.1 and the discussion that follows, we next review the flow of individuals through the categorization process. Figure 5.1 illustrates the flow of the screening logic captured in Table 5.1.
In the first column of Table  5.1 (column A), we present the proportion of all
active-component service members who endorsed any of the six unwanted event screening items. More members indicated that at least one of these events occurred than were
classified as having experienced a sexual assault. Indeed, as reported in Volume 2 of
this series, the percentage endorsing one or more unwanted events is approximately
1 percentage point higher than the percentage classified as experiencing a sexual assault
for women, men, and overall. Respondents could answer “yes” to more than one of the
six screening questions, so percentages in this column are overlapping. Note that this
column provides confidence intervals, as does the last (Column F), because these are
estimates that can be applied to the population of service members under study. The
columns in the middle (B–E), however, do not have confidence intervals because they
reflect the actual flow through this particular survey among the survey respondents
and are not meant to be generalized to the broader population.
Some of the types of unwanted events were much more rare than others. For
example, among women, being forced to penetrate someone else (0.31  percent) was
less common than being penetrated (1.79  percent indicated penile penetration and
1.08 percent indicated penetration by some other body part or object). Penile penetration was more common among women (1.79 percent) than among men (0.23 percent).
This pattern of results is similar to what would be expected given gender differences
and known prevalence of the different types of crimes.
Column B shows the percentage of service members who indicated that they
experienced each type of unwanted event, and who were not already classified as
having experienced a sexual assault based on an earlier screening item (those who were
already counted as experiencing a sexual assault were not asked the follow-up questions
for subsequent screening items to reduce response burden).
Percentages in Column B are higher for men than women for three of the five remaining sexual assault types. This indicates that women were somewhat more likely to be classified as experiencing a sexual assault on the more-severe screener items that occur earlier in the questionnaire. For instance, about one-third of women who indicated they were subjected to unwanted touching of their private parts had already been classified as having experienced a penetrative sexual assault. In contrast, 15 percent of men who indicated they had been subjected to unwanted touching of their private parts had already been classified as experiencing a penetrative sexual assault.

Table 5.1
Classification of Past-Year Sexual Assault Among Female and Male Respondents

Column definitions (each screener begins “Since [X Date], . . .”):
A. Percent who had the unwanted experience (Steps 1–6 are overlapping categories)a
B. Of those in column A, percent who did not qualify for sexual assault on a previous question and, therefore, received follow-up questions
C. Of those in column B, percent who indicated the offender’s intent was abusive, humiliating, or sexual
D. Of those in column C, percent who indicated that at least one offender coercive behavior occurred
E. Of those in column D, percent who confirmed that the event occurred in the prior 12 months
F. Percent classified as experiencing this type of sexual assault (Steps 1–6 are mutually exclusive categories)a

FEMALES

Step 1. . . . did you have any unwanted experiences in which someone put his penis into your vagina, anus, or mouth?
A: 1.79% (1.63–1.96); B: (100%)b; C: (100%)b; D: 94.18%; E: 95.55%; F: 1.61% (1.46–1.77)

Step 2. . . . did you have any unwanted experiences in which someone put any object or any body part other than a penis into your vagina, anus, or mouth? The body part could include a finger, tongue, or testicles.
A: 1.08% (0.95–1.23); B: 49.02% (50.98% already classified in Step 1); C: 92.07%; D: 97.70%; E: 92.52%; F: 0.44% (0.36–0.54)

Step 3. . . . did anyone make you put any part of your body or any object into someone’s mouth, vagina, or anus when you did not want to?
A: 0.31% (0.25–0.39); B: 16.71% (83.29% already classified in Steps 1–2); C: 76.70%; D: 75.09%; E: 87.41%; F: 0.03% (0.01–0.05)

Step 4. . . . did you have any unwanted experiences in which someone intentionally touched private areas of your body (either directly or through clothing)?
A: 4.66% (4.41–4.93); B: 66.65% (33.35% already classified in Steps 1–3); C: 85.35%; D: 95.49%; E: 96.41%; F: 2.44% (2.25–2.64)

Step 5. . . . did you have any unwanted experiences in which someone made you touch private areas of their body or someone else’s body (either directly or through clothing)? This could involve the person putting their private areas on you. (Private areas include buttocks, inner thigh, breasts, groin, anus, vagina, penis, or testicles.)
A: 1.43% (1.28–1.60); B: 11.15% (88.85% already classified in Steps 1–4); C: 73.39%; D: 89.50%; E: 97.82%; F: 0.10% (0.07–0.15)

Step 6. . . . did you have any unwanted experiences in which someone attempted to put a penis, an object, or any body part into your vagina, anus, or mouth, but no penetration actually occurred?
A: 1.23% (1.09–1.39); B: 17.18% (82.82% already classified in Steps 1–5); C: 92.70%; D: 92.21%; E: 94.55%; F: 0.17% (0.12–0.24)

MALES

Step 1. . . . did you have any unwanted experiences in which someone put his penis into your anus or mouth?
A: 0.23% (0.15–0.34); B: (100%)b; C: (100%)b; D: 88.06%; E: 97.90%; F: 0.20% (0.12–0.31)

Step 2. . . . did you have any unwanted experiences in which someone put any object or any body part other than a penis into your anus or mouth? The body part could include a finger, tongue, or testicles.
A: 0.26% (0.16–0.39); B: 65.37% (34.63% already classified in Step 1); C: 53.08%; D: 97.76%; E: 99.10%; F: 0.09% (0.03–0.21)

Step 3. . . . did anyone make you put any part of your body or any object into someone’s mouth, vagina, or anus when you did not want to?
A: 0.18% (0.12–0.26); B: 49.82% (50.18% already classified in Steps 1–2); C: 55.80%; D: 88.79%; E: 96.89%; F: 0.04% (0.02–0.07)

Step 4. . . . did you have any unwanted experiences in which someone intentionally touched private areas of your body (either directly or through clothing)?
A: 1.41% (1.20–1.64); B: 83.78% (16.22% already classified in Steps 1–3); C: 50.91%; D: 95.35%; E: 93.58%; F: 0.54% (0.43–0.66)

Step 5. . . . did you have any unwanted experiences in which someone made you touch private areas of their body or someone else’s body (either directly or through clothing)? This could involve the person putting their private areas on you. (Private areas include buttocks, inner thigh, breasts, groin, anus, vagina, penis, or testicles.)
A: 0.45% (0.32–0.61); B: 30.89% (69.11% already classified in Steps 1–4); C: 67.11%; D: 95.28%; E: 90.78%; F: 0.08% (0.03–0.18)

Step 6. . . . did you have any unwanted experiences in which someone attempted to put a penis, an object, or any body part into your anus or mouth, but no penetration actually occurred?
A: 0.24% (0.14–0.39); B: 20.83% (79.17% already classified in Steps 1–5); C: 12.61%; D: 65.69%; E: 100.00%; F: 0.00% (0–0.01)

NOTE: There are minor differences between the percentages reported in column F and related data presented in earlier reports. The data in this table handle missingness slightly differently than for primary study estimates. In this table, item-missing data are treated as “no” responses because they had the same effect on the respondents’ skip pattern. In primary survey estimates, individuals with item missingness on one-half or more of the required criteria for a given measure are treated as missing at random, and population estimates are based on percentages among nonmissing responses.
a Numbers in parentheses in columns A and F are confidence intervals.
b Respondents automatically pass through these two steps for penile penetration. It is the first question on sexual assault, so they cannot have qualified for a sexual assault in an earlier item (column B). Respondents were not asked about the intent of the offender for penile penetration, to align with UCMJ code (column C).

Figure 5.1
Flowchart of Survey Logic Underlying Categorization of Past-Year Sexual Assault

[Figure: a flowchart in which a respondent who had a specific type of unwanted experience (percent in Column A) proceeds, if not already classified under an earlier type of sexual assault (percent in Column B; otherwise counted in Column F of an earlier row), to whether the offender’s intent was abusive, humiliating, or sexual (percent in Column C), then to whether the offender used coercive behavior (percent in Column D), then to confirmation that an event occurred in the last year (percent in Column E), ending in classification into one of six mutually exclusive types of sexual assault (Column F = A × B × C × D × E).]

Men were more likely than women to indicate that the unwanted event was neither sexual nor intended to abuse or humiliate them. Whereas women indicated these
intentions occurred 73 percent to 93 percent of the time (across screener items), men
noted these intentions 13 percent to 67 percent of the time (Column C in Table 5.1).
These rates of endorsement were lower for men in three of the five assault types. Thus,
men were less likely to have their unwanted events classified as sexual assaults. We
break down the responses to these two intent items further in Table 5.2, but note that
small numbers make these estimates imprecise. As can be seen in Table 5.2, there is
variation in responses depending on the type of event. Fewer than one-half of respondents indicated that the event was intended to be abusive or humiliating. On the other hand, among those who indicated no abusive or humiliating intent, men indicated less often than women that the event was done for a sexual reason, for three of the five types of events. Together, these items work to classify men and women on
somewhat different grounds. As reported in Volume 2 of this series, men were more
likely than women to have qualified as having experienced a sexual assault by indicating that the intent was to abuse or humiliate them (as opposed to a sexual purpose): 70.0 percent of men (95% CI: 60.2–77.8) compared with 41.7 percent of women (95% CI: 38.3–45.2). These estimates exclude sexual assaults involving penile penetration because intent questions were not asked in those cases.

Table 5.2
Affirmative Responses to Questions About Assault Intent, Among Those Presented with the Question

Penile penetration
Intent questions were not asked (NA in all columns).

Penetration by other body part or object
Affirmed intent was abusive or humiliating: Male 48.14% (21.68–75.40); Female 43.74% (35.17–52.59)
(If no,) affirmed intent was sexual: Male 9.00% (1.68–25.22); Female 86.09%b (77.90–92.11)
Affirmed intent was either abusive, humiliating, or sexuala: Male 52.81% (25.63–78.80); Female 92.17% (87.49–95.52)

Forced to penetrate another person
Affirmed intent was abusive or humiliating: Male 21.71% (7.11–44.48); Female 43.39% (22.47–66.20)
(If no,) affirmed intent was sexual: Male 41.73% (19.09–67.26); Female NR (NR)
Affirmed intent was either abusive, humiliating, or sexuala: Male 54.38% (30.81–76.59); Female 72.17% (48.61–89.20)

Touched in private areas
Affirmed intent was abusive or humiliating: Male 34.97% (27.71–42.77); Female 35.78% (32.47–39.18)
(If no,) affirmed intent was sexual: Male 23.84% (17.03–31.82); Female 76.80%b (73.15–80.17)
Affirmed intent was either abusive, humiliating, or sexuala: Male 50.47% (42.42–58.51); Female 85.10% (82.63–87.34)

Forced to touch another person
Affirmed intent was abusive or humiliating: Male 34.33% (9.17–68.54); Female 22.30% (12.32–35.32)
(If no,) affirmed intent was sexual: Male 51.20% (31.60–70.54); Female 64.03% (47.55–78.40)
Affirmed intent was either abusive, humiliating, or sexuala: Male 67.95% (46.53–84.97); Female 72.05% (57.94–83.56)

Attempted penetration
Affirmed intent was abusive or humiliating: Male 7.36% (0.30–31.75); Female 29.37%b (18.35–42.48)
(If no,) affirmed intent was sexual: Male NR (NR); Female 90.35%b (79.82–96.50)
Affirmed intent was either abusive, humiliating, or sexuala: Male 11.20% (1.15–36.73); Female 93.19% (85.64–97.50)

a The numbers in these columns differ slightly from those in Table 5.1, column C. Table 5.1 only includes individuals who were classified as study respondents, which required nonmissing responses to a minimum set of sexual assault questions. To better understand the pattern of exclusions caused by nonaffirmative responses to these questions, this table includes everyone presented with the question, regardless of whether they are considered to be an overall study respondent.
b Indicates a significant gender difference.
Across screening items, 66 percent to 98 percent of respondents who indicated
that their unwanted contact involved a criminal intent also indicated that there was
offender coercion or lack of consent (Table 5.1, Column D). As such, most unwanted
events for which criminal intent was present were classified as sexual assaults with the
RAND form. As reported in Volume 2, penetrative assaults were more likely to have
involved physical force, injuries, and threats than non-penetrative assaults, particularly
among men, and also more likely to involve drug and alcohol incapacitation (for men
and women) than non-penetrative assaults.
Column E of Table 5.1 indicates the rates at which individuals confirmed that the
unwanted event classified as a sexual assault, or some other such assault, occurred in
the previous year. In general, these rates were very high, with 87 percent to 100 percent
of the assault types confirmed as occurring in the past year. Across sexual assault types,
4.7 percent of those classified as experiencing a sexual assault were found to have had
no such experience in the past year (which is 0.08 percent [95% CI: 0.05–0.11] of the
active-component population), with no differences apparent across service, gender, pay
grade, or type of assault.
Finally, Column F of Table 5.1 shows the population estimates for individuals
experiencing each type of assault. These values represent the product of the values in Columns A through E. As can be seen, classification within the different events shows different patterns. For example, very few individuals were excluded from being classified
as having experienced a sexual assault after having experienced unwanted penetration,
whereas more are excluded from being classified as having experienced a sexual assault
after having experienced unwanted touching of private parts. This pattern makes sense,
because it is more plausible that unwanted touching could occur by accident or for a
legitimate purpose, and without explicit coercion, than could penile penetration. Thus,
the pattern of classification has some clear face validity.
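Because Column F is the product of Columns A through E, any row of Table 5.1 can be re-derived arithmetically. As a check, using the female Step 1 row:

# Column F = A x B x C x D x E, female Step 1 row of Table 5.1.
A, B, C, D, E = 0.0179, 1.00, 1.00, 0.9418, 0.9555
print(f"{A * B * C * D * E:.2%}")  # 1.61%, matching Column F for that row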
Columns C, D, and E together represent the filters that cause individuals who
have not yet been classified as experiencing a sexual assault to have their unwanted
event excluded from classification as a sexual assault. Their combined effect is fairly
small for events involving penetration with a penis, for which 10 percent of women and
14 percent of men with such an unwanted experience end up not having this event classified as a past-year sexual assault. For women, these events are not counted as past-year
sexual assaults either because they indicated there was no coercive offender behavior or
failure of consent, or because they indicated the event took place more than a year ago,
with both such conditions occurring more or less equally often. For men, in contrast,
events involving penile penetration failed to be counted as past-year sexual assaults
almost exclusively because the event did not involve coercion or failure of consent.


For other screening items, and for men in particular, the primary reason unwanted
events fail to be classified as a past-year sexual assault is because of responses to the intent
questions. That is, many respondents indicated that their unwanted event involved neither abuse or humiliation, an intention to abuse or humiliate, nor a sexual intention.
Indeed, in a separate analysis, we have calculated that 99 percent of those active-component members who had an unwanted event but were not classified as experiencing
a sexual assault were excluded from this classification based on their responses to the
intent questions (the remaining 1 percent were excluded because their unwanted events
did not involve coercion or the circumvention of consent).
Given the large number of unwanted events excluded from sexual assault classification due to the intent questions, we have considered whether the intent items may
be causing the misclassification of true sexual assaults, with a resulting undercount of
sexual assaults.
It could be argued, for instance, that victims are reluctant to speculate about the
offender’s intent, and that the intent questions present a difficult or impossible task for
some victims. If this were the case, however, we might expect large numbers skipping
the intent questions. Among those who indicated an unwanted event but who failed to
be classified as having a sexual assault, just 1 percent did not meet the intent criterion
because they skipped these questions. Thus, these respondents did not indicate that
they could not speculate on the intention behind the contact; rather, they indicated
that the intentions were not criminal. It is the victim’s interpretation of that intent that
determines whether they experience it as a violation or sexual assault as opposed to, for
instance, an accident or a misunderstanding.
A second reason to suspect that the intent questions are not unduly restricting
the experiences classified as sexual assaults is the fact that the RAND survey questions
actually capture more instances of sexual assault than the WGRA’s older method,
which did not separately inquire about intent, but included intent as a part of a lengthy
definition of unwanted sexual contact. As we discuss in Chapter Eight, after excluding
those unwanted sexual contacts that occurred more than a year ago, the WGRA question produced a population estimate of about 18,400, in comparison with the RAND
sexual assault assessment that produced an estimate of 20,300. These estimates are not
statistically significantly different, but neither do they provide any suggestion that the
RAND sexual assault assessment is leading to an undercount of sexual assaults. Indeed,
we suspect that the intent questions contributed to our identification of sexual assaults
not counted by the prior form, because our first intent question captures sexual assaults
that are designed to abuse or humiliate, an intention identified by many respondents
who also described the assault as “hazing.” In contrast, the prior-form question emphasizes just sexual intentions, and could thereby fail to identify hazing and other abusive
sexual assaults.
Finally, the large majority (81 percent) of those who experienced one of the sexual
assault screening behaviors but who were subsequently not classified as experiencing a sexual assault because of their responses on the intent questions occurred among
those indicating an unwanted contact that did not involve nonpenile penetration or
attempted penetration. This, too, may provide indirect support for the use of intention
screening items. Specifically, our approach was to cast a broad net around instances of
unwanted contact with private areas, and then to winnow out those unwanted contacts
that were accidental, legitimate (such as medical exams), or other events that were not
perceived as abusive, degrading, or sexual. In most of these cases of unwanted contacts
that would not be considered sexual assaults, we would expect the contacts not to
involve penetration. Therefore, the fact that non-penetrative assaults make up the large
majority of cases that are discounted as sexual assaults due to the intention questions is
consistent with our expectations.
The fact that our offender-behaviors questions resulted in just 1 percent of those
with unwanted contacts being classified as not sexually assaulted raises the possibility that those questions are too broad, effectively counting all offender behaviors as
evidence of a crime. Whereas most of these questions describe unambiguous forms of
unlawful coercion, such as use of force, threats, or deception, there are a few offender
behavior questions that are arguably more ambiguous. This is particularly true for the
last three questions, which are seen only by respondents who have not selected any of
the eight offender behaviors that are unambiguous. These final three items concern
contacts that occurred when (1) “they made you feel so afraid that you froze and could
not tell them or show them that you were unwilling”; (2) “they did it after you had consumed so much alcohol that the next day you could not remember what happened”; or
(3) “it happened without your consent.”
As a practical matter, only the last of these potentially ambiguous questions is
important, because, as discussed in Chapter Six, less than one-quarter of 1 percent of
those we classified as experiencing a sexual assault indicated alcohol blackout or frozen
in fear were the only means by which the unwanted contact was coerced. In contrast,
approximately 18 percent of unwanted contacts were described as happening without
the respondent’s consent.
The rationale for including a “without your consent” option is to account for a
range of types of sexual assault that do not involve threats of violence or deception, but
where the victim never had a chance to say or show that they did not consent to the
contact. The most obvious example is unexpected groping that occurs before the victim
has a chance to refuse the contact. This appears to be the reason that an all-purpose
nonconsensual contact mechanism is included in the Article 120 definition of sexual
assault. Specifically, in addition to the mechanisms involving force, threats, drugs,
or deception, Article 120 explicitly includes “offensive touching of another, however
slight, including any nonconsensual sexual act or nonconsensual touching.”
Here, again, the most conspicuous examples of sexual assaults that involve no
other listed coercive mechanism, yet the respondent did not provide consent, would
seem to be non-penetrative contacts that occur too quickly for the victim to indicate they do not consent. This is, in fact, consistent with what we see in our results: whereas
less than 1 percent of penetrative sexual assaults were classified as such on the basis of
the “did not provide consent” option, closer to 30 percent of non-penetrative sexual
assaults qualified as such on the basis of respondents indicating they did not provide
consent (see detailed results in Volume 2, Chapter Three). This does not prove that
the “did not provide consent” option is not counting too many unwanted contacts as
sexual assaults. However, it does suggest that the pattern of events classified as assaults
on the basis of this question is plausible, and it does not strain credulity to imagine that
30 percent of non-penetrative assaults, or 18 percent of all sexual assaults, could involve
contacts like groping where coercive force is not used, and where the victim does not
have the opportunity to say or show that they do not consent to the contact.
Given the very slight effect the offender-behavior items have on the classification
of sexual assault, it might be reasonable in some contexts to omit those questions altogether from future administrations of the survey. Still, the contextual information they
provide about the circumstances and nature of the assaults is valuable, and including
these items may be important for demonstrating strict adherence to the definitions of
sexual assault provided in the law.
Conclusions
Results of this analysis show a few key aspects of the performance of the sexual assault module of the survey, and they lend some support to its face validity, inasmuch as the pattern of results resembles known rates of offenses and fits logical and consistent patterns based on gender and type of crime:
•	 Use of specific behavioral screener items alone, without qualifiers as to intentionality or lack of consent, results in higher estimates of prevalence than when these
criteria are applied. That is, some individuals are not classified as having experienced a sexual assault despite having experienced unwanted events in the prior
year.
•	 Use of specific behavioral screener items may pick up individuals who would not
use the word “sexual” to describe their experience, but who indicate the intent
was to abuse or humiliate. Items used to capture intention of the event show that
men and women indicate that the intent was abusive or humiliating at equal
rates, but when they indicate the event was not abusive, women are more likely to
indicate it was sexual intent than men. As such, among those classified as having
experienced a sexual assault, a higher proportion of men than women are classified on the basis of saying the unwanted event was done to humiliate or abuse.
Thus, this classification scheme may detect some assaults against men in particular that would not be detected with screener items used in the past.


•	 Items used to capture intention of the event cause some non-penetrative contacts
to not be classified as assaults, but few nonpenile penetrative contacts are excluded
on the basis of the intention items. This is consistent with the expectation that
some unwanted contacts identified with the screener items could include accidental or legitimate contacts that correctly would not be classified as sexual assaults.
For the most part, the specific behavioral screener items with intentionality confirmed capture the types of events that are described in the UCMJ. That is, the vast
majority of respondents indicate that the event included some type of offender behavior or lack of consent that conforms to UCMJ definitions.

CHAPTER SIX

Undercounting and Overcounting of Service Members
Exposed to Sexual Assault
Andrew R. Morral, Terry L. Schell, and Lisa H. Jaycox

Our methods for estimating the proportions of service members exposed to sexual
assault in the past year and the total number of past-year assault victims are subject to
several sources of possible error. Here, we consider six:
• Inclusion of preservice assaults
• Exclusion of assaults against members with fewer than six months of service
• Inclusion or exclusion of tonic immobility and alcohol blackouts
• Exclusion of members who recently separated from the military
• Inclusion of nonpenile oral penetration in the penetration counts
• Possible exclusion of civilian sexual assaults among reserve-component members.

Inclusion of Preservice Sexual Assaults
The study sample frame included only service members who had joined the service at least six months earlier. Therefore, some sampled members had been in the military for only six to 12 months, and any past-year sexual assault they reported could have occurred during the portion of the past year that preceded their service entry, in which case including these respondents would inflate the apparent risk to
members of the military. That is, if a member with eight months of active-component
experience describes an event that occurred 11 months earlier (when they were not a
service member), this assault should not be counted as one against a service member.
A total of 2.4 percent of the active-component sample had served for six to 12
months, and 1.3 percent of this subgroup reported a past-year sexual assault. If all of
their sexual assaults were excluded from the estimate of the proportion of active-component men and women who experienced one or more sexual assaults, it would have
the effect of reducing that proportion from 1.54 percent to 1.51 percent, or it would
reduce the total number of active-duty members projected to have experienced a past-year sexual assault from about 20,300 to about 19,900. Therefore, the maximum possible overcount due to those with six to 12 months of service is relatively small.
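A back-of-the-envelope check reproduces these figures. The population size used below (1,317,561) is the sample-frame count given later in this chapter, and the calculation ignores survey weighting, so it is approximate:

# Maximum possible overcount from preservice assaults among members with
# six to 12 months of service (unweighted approximation).
population = 1_317_561          # active-component sample frame size
share_new, rate_new = 0.024, 0.013
overall_rate = 0.0154

removed = share_new * rate_new  # at most ~0.03 percentage points
print(f"adjusted rate: {overall_rate - removed:.2%}")        # ~1.51%
print(f"count change: ~{removed * population:,.0f} people")  # ~400 of ~20,300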


We can get a better estimate of the possible effect of preservice assaults on our service estimates by examining what respondents said about who assaulted them and the circumstances of the assault. Table 6.1 describes the proportion of sexually assaulted new service members whose details of the “worst” past-year assault link it to their time in the military, as opposed to the time before they joined. The results suggest that more than 98 percent of the assaults reported by those with six to 12 months of active-duty service occurred at a military installation or on a ship, during basic training or other military training, or were in some other way linked to their military
service. Indeed, approximately 91 percent of the assaults were committed by another
service member. While it is possible that the remaining 2 percent of assaults against
new service members occurred before joining the military, the effect of excluding these
cases from the overall estimates is small: the overall proportion of active-duty members
estimated to have experienced a past-year sexual assault declines from 1.537 percent
to 1.536 percent, meaning the population estimate of 20,300 members with past-year
sexual assaults could be an overestimate by fewer than ten members.
Together, these results suggest that the influence of preservice past-year assaults on our estimates of the proportion of members who experienced past-year assaults is negligible. The shift in the estimate is not practically significant (smaller than our rounding error) and would not be statistically significant even within our large sample.

Table 6.1
Proportion of Sexual Assaults Linked to Military Service, All Sexually Assaulted Active-Duty Respondents with Six to 12 Months of Service

Each question is followed by the proportion, with its confidence interval in parentheses:
At the time of the event, was the person who did this to you someone in the military? 90.67 (76.97–100)
Did the unwanted event occur at a military installation/ship, armory, or reserve unit site? NR (67.38–97.92)
Did the unwanted event occur while you were on temporary duty/temporary additional duty, at sea, or during field exercises/alerts? NR (0–66.18)
Which of the following best describes the situation when this unwanted event occurred? You were at a military function. NR (31.24–100)
Offender was a civilian employee or contractor working for the military? 6.75 (0–20.51)
Did the unwanted event occur while you were completing military occupational specialty school/technical training [etc.]? NR (31.43–88.96)
Did the unwanted event occur while you were in recruit training/basic training? NR (8.98–78.60)
Did the unwanted event occur while you were in Officer Candidate or Training School/Basic or Advanced Officer Course? 1.11 (0–3.47)
Did the unwanted event occur while you were in any kind of military combat training? NR (0–68.17)
Any of the above indicators that the crime was related to military service or military personnel: 97.68 (93.93–100)

Exclusion of Assaults Against Members With Fewer Than Six Months
of Service
Following procedures established by DMDC for the WGRA surveys, the 12,469 officers and enlisted members with fewer than six months of service as of August 1, 2014,
were excluded from the sample frame because they are hard to contact—they have
often moved since their address was pulled for the sample frame—and much of their
past year was spent before entering the military. An additional 72 active-component
members were excluded because they had not reached the age of 18 by August 1, 2014.
Assuming these 12,541 active-component members experienced any assaults
during their short careers in the service, these assaults will have gone uncounted in our
study. However, we can estimate bounds for the extent of the resulting undercounting. Because RAND sampled service members more than a month before August 1,
2014, it is reasonable to assume that most who were excluded for having less than six
months of service had either completed basic training/recruit training, or were nearing
its completion. Therefore, as a lower bound for the proportion of this group exposed
to sexual assaults, we used the proportion of active-component members with between
six and 12 months of service who indicated they were assaulted during basic or recruit
training (0.21  percent). For an upper bound, we took the proportion of those with
six to 12 months of service who were assaulted by another member of the military or
in a military setting (1.5 percent). These bounds suggest that the exclusion of active-component
resulted in an undercount of between 25 and 190 people who experienced a past-year
assault during their time in the military. That is, if we attempted to adjust for this, our
estimate of 20,300 active-duty members exposed to past-year sexual assault(s) should
be increased by 25 to 190 people.
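These bounds follow directly from the excluded headcount and the two rates; a minimal sketch:

# Undercount bounds from excluding members with fewer than six months of
# service or under age 18 (12,541 people in total).
excluded = 12_541
lower = excluded * 0.0021  # rate assaulted during basic/recruit training
upper = excluded * 0.015   # rate assaulted by a member or in a military setting
print(f"undercount between ~{lower:.0f} and ~{upper:.0f} people")  # ~25 to ~190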
Exclusion of Members Who Recently Left the Service
Our estimate that 20,300 service members experienced at least one sexual assault in
the past year excludes those past-year sexual assaults that occurred against members
who left the service before our sample frame was drawn. We do not have statistics on
the number of service members who separated from the service sometime during the
year prior to the survey, but we can make a rough guess that it is close to the number
who separated from the service in 2013, or about 206,000 separations (Office of the
Deputy Secretary of Defense [Military Community and Family Policy], 2014). If we assume that the rate of these losses is consistent over the year, nine of the 12 months
of the losses would have been excluded from our sample frame (three months of losses
would have been included in our sample frame, because we fielded the survey about
three months after the population data was current). We can additionally assume that,
among those who separated in the nine months preceding our sample frame, on average they spent 4.5 months, or one-half of the nine-month period, in the military.
With these assumptions, the true number of service members sexually assaulted in the past year, SAt, could be calculated as

SAt = RmP + (1/2)(9/12)Rn(9/12)L = RmP + (27/96)RnL

where Rm is the rate of past-year sexual assault measured in the sample frame, Rn is the annualized rate of sexual assault among the past-year service members lost to the sample frame, P is the size of the sample frame (1,317,561), and L is the number of losses or separations over the year (206,000, by assumption). Here, (9/12)L is the portion of the year’s separations excluded from the sample frame, and (1/2)(9/12) is the average fraction of the reference year that those members spent in service.
Using this calculation, if those who separated in the past year have no higher or
lower monthly risk of having a past-year sexual assault than those who were included in
our survey (i.e., Rm = Rn), we can estimate that, in addition to the 20,300 service members in the sample frame who experienced past-year sexual assaults, there were 900
more who were sexually assaulted but left the service before they could be represented
in the survey. Thus, the true number of service members who experienced an assault in
the past year would be closer to 21,200.
Of course, if those who have been sexually assaulted are more likely to separate
from the service than those who have not been sexually assaulted, as evidence in Chapter Three suggests, the true number who experienced a sexual assault would be higher
yet. For instance, if those who separated in the past year had an annualized sexual assault prevalence comparable to that of women in the military (i.e., Rn = 4.87%), we would generate a corrected estimate for the number of service members who were sexually assaulted of 23,100, which is 14 percent higher than the 20,300 estimate that takes no account of past-year assaults against members who have since left the service.
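Both corrected totals can be reproduced from the formula above; this sketch uses the stated inputs, so small differences from the published figures reflect rounding:

# Separation-loss adjustment: SA_t = Rm*P + (1/2)(9/12) * Rn * (9/12)L.
P, L = 1_317_561, 206_000
Rm = 20_300 / P                     # past-year rate within the sample frame
factor = 0.5 * (9 / 12) * (9 / 12)  # average exposure x excluded share

for Rn in (Rm, 0.0487):             # equal risk; risk at the female rate
    extra = factor * Rn * L
    print(f"Rn = {Rn:.4f}: ~{extra:,.0f} additional, "
          f"total ~{20_300 + extra:,.0f}")
# ~900 additional (total ~21,200) and ~2,800 additional (total ~23,100)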
In sum, of all the sources of possible over- or underestimation we have considered
here, the underestimation due to separations in the past year had the largest effect. A
more-accurate assessment of the effects of differential loss rates on sexual assault estimates would require research evaluating the past-year sexual assault experiences of
members who are separating from the service.


Inclusion or Exclusion of Alcohol Blackouts and Fear Responses That
Immobilize
UCMJ crimes require that one of several types of coercion occur that deprive the
assaulted person of the opportunity to freely decline the unwanted contact. Most
of these coercion methods, such as the use of force, threats, or drugs, are reasonably
straightforward to assess. As discussed in Volume 1 of this series (Morral, Gore, and
Schell, 2014, Appendix A), there are two forms of coercion that require more information to establish clear evidence of a crime than can reasonably be collected from a
self-report survey instrument. These concern cases where victims do not indicate their
unwillingness because they are frozen with fear, and cases in which the victim is sufficiently intoxicated by alcohol at the time of the assault as to sustain a subsequent
alcohol-induced blackout state.1 For both of these more-complex circumstances, the
UCMJ has sections that can be interpreted as defining such events as crimes, but more
detail would be needed to verify that the event qualifies as a crime under Articles 80
or 120.
As part of the sexual assault screening questions, all respondents classified as
having a past-year sexual assault indicate some form of coercion. Most indicate that one
of eight primary forms of coercion occurred. However, those who say none of the primary forms of coercion applied to their unwanted sexual experience are asked whether
they could not object to the contact because they were frozen in fear, or whether they
could not remember what happened because of an alcohol-induced blackout.
Few service members were classified as experiencing a past-year sexual assault
solely on the basis of either the frozen-in-fear or blackout methods of coercion. Specifically, 0.23 percent (95% CI: 0.08–0.53) of those we classified as experiencing
a past-year sexual assault fit into this category. As such, if we were to exclude all such
service members from our estimates of the prevalence of past-year sexual assaults, that
rate would fall from 1.537 percent to 1.533 percent. This would mean that our estimate
of 20,300 sexually assaulted service members in the past year was too high by fewer
than 50 members.

1 Alcohol-induced blackouts (i.e., memory loss for events that occurred during intoxication) typically occur after approximately ten alcoholic drinks and with blood alcohol concentrations of 0.16 percent or greater (Goodwin et al., 1970; White, Simpson, and Best, 1997). In addition to the potential for alcohol-induced blackout, individuals with blood alcohol concentrations at this level experience gross motor impairment and loss of balance, and may require assistance to walk. Vomiting is common, and the gag reflex is impaired, raising the risk for asphyxiation. Judgment, reaction time, vision, and hearing are impaired. Speech is slurred. The individual may be disoriented to time and place.

Inclusion of Nonpenile Oral Penetration in the Penetration Counts
In contrast to prior WGRA surveys, we included the mouth as one of the orifices that,
when violated, would be counted as a penetrative assault. We included the mouth
because Article 120 of the UCMJ is unambiguous that penetration of the mouth by
any object or body part is a “sexual act” subject to Article 120. However, some reviewers have suggested that such offenses may involve unwanted kissing with the tongue,
or even just putting a grape in the person’s mouth. Such incidents could clearly qualify as sexual assaults as defined in Article 120, even if in practice they might not be
prosecuted as such. Nevertheless, to address concerns that have been raised about our
inclusion of penetration of the mouth, we analyze here how this category of offenses
influences our estimated rates of sexual assault in the military.
Few service members were categorized as experiencing a past-year sexual assault
solely on the basis of a penetration of their mouth by something other than a penis.
Across the active component, just 0.02 percent (95% CI: 0.01–0.03) were classified as
sexually assaulted on the basis of this question. If all such cases were excluded from our
counts, the estimated rate of past-year sexual assault in the military would fall from
1.54 percent to 1.52 percent. This would have the effect of lowering our population
estimate of 20,300 service members experiencing sexual assault in the past year by
about 250 cases.
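
Both adjustments are small enough to verify by hand; the sketch below (hypothetical Python, with all figures taken from the text) shows the arithmetic for the two exclusions just discussed.

```python
# Hypothetical check of the two exclusion effects discussed above.

FRAME_SIZE = 1_317_561        # active-component sample frame
ASSAULTED = 20_300            # estimated members assaulted in the past year

# Frozen-in-fear/blackout only: 0.23 percent of those classified as assaulted.
fear_blackout_only = 0.0023 * ASSAULTED      # ~47 members, i.e., fewer than 50
rate_after_exclusion = 1.537 * (1 - 0.0023)  # ~1.533 percent

# Nonpenile oral penetration only: 0.02 percent of the active component.
oral_only = 0.0002 * FRAME_SIZE              # ~264 members, roughly the 250 cited
rate_after_oral_exclusion = 1.54 - 0.02      # 1.52 percent

print(round(fear_blackout_only), round(rate_after_exclusion, 3))
print(round(oral_only), round(rate_after_oral_exclusion, 2))
```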
Possible Exclusion of Civilian Sexual Assaults Among Reserve Component Members
An unexpected finding reported in Volume 2 of this series was that a high percentage of assaults against reserve-component members (86 percent) involved an offender in the military or took place in a military setting. When we considered
only “part-time” members of the reserve component (those with administrative records
indicating they were not full time and who indicated on the survey that they spent
180 days or less drilling or working for the military), their assaults disproportionately
involved military members or settings as well (85 percent).
A reasonable question these findings raise is whether reserve-component members
understood that this DoD survey with a title referencing the “military workplace” was
asking about all unwanted sexual experiences in the past year, not just those associated
in some way with their military duties. If, instead, they underreported their unwanted
nonmilitary experiences, but correctly reported their military experiences, this would
result in an apparent overrepresentation of assaults that involve military members or
settings.
While such a bias is possible, the instrument was written to minimize this misunderstanding. The instructions to the sexual assault module state, “please include
experiences no matter who did it to you or where it happened. It could be done to you
by a male or female, service member or civilian, someone you knew or a stranger”
[emphasis in original]. In addition, each behaviorally specific screening question was
worded to be broadly inclusive without any reference to military context, e.g., “Since
8/1/2013, did you have any unwanted experiences in which someone put his penis into
your anus or mouth?” Thus, while it is possible that the entire study underestimates the
rate at which service members are sexually assaulted by civilians, it would require that
the respondents ignore the instructions and add restrictions to the questions that were
not present in the text.
There are other reasons to doubt that reserve-component members were systematically underreporting nonmilitary sexual assaults, at least to an extent that would
produce the pattern of results we observed. Such a bias would have to be very large
to explain the high rate of military sexual assaults among reservists. If, for instance,
the true proportion of sexual assaults against part-time reserve-component members
involving military personnel or settings was 11 percent (the average portion of the
year this group indicated they spent in compensated military duties) rather than the
85 percent that we observe, it would require that 98 percent of reserve-component
respondents who experienced a nonmilitary sexual assault failed to indicate that on
the survey.
Of course, the proportion of time that reserve-component members spend with
other members or in military settings may be substantially greater than the time they
spend in compensated duties or drilling. However, even if we assumed the true proportion was twice the proportion of compensated time (22 percent rather than 11 percent),
that would still imply that 95 percent of all nonmilitary sexual assaults went unreported. If we quadruple the estimate of the amount of time these part-time reservists
spent with other members of the military or in military settings (to 44 percent), this
would still imply that 86 percent of all nonmilitary sexual assaults went unreported.
Such high rates of underreporting would imply that virtually all reserve respondents
misunderstood the instructions and questions.
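
The underreporting fractions cited above follow from simple algebra on the observed and assumed military shares of assaults. The sketch below (hypothetical Python; the function and variable names are our own) reproduces them.

```python
# Hypothetical check of the implied-underreporting argument above.

def share_unreported(observed_mil_share: float, true_mil_share: float) -> float:
    """Fraction of nonmilitary assaults that must go unreported for the
    observed military share to arise from the assumed true share."""
    m = 1.0  # reported military-linked assaults, in arbitrary units
    reported_nonmil = m * (1 - observed_mil_share) / observed_mil_share
    true_nonmil = m * (1 - true_mil_share) / true_mil_share
    return 1 - reported_nonmil / true_nonmil

for true_share in (0.11, 0.22, 0.44):
    print(f"{true_share:.0%} true share -> "
          f"{share_unreported(0.85, true_share):.0%} unreported")
# prints roughly 98%, 95%, and 86%, matching the figures in the text
```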
An undercount of nonmilitary sexual assaults by such a large fraction would
imply that the true risk of past-year sexual assault among reservists is much higher
than estimated. For example, a bias this large would imply that 7.4 percent experienced
a sexual assault in the past year, rather than the currently estimated value of 0.9 percent (0.38 percent for men, 3.13 percent for women). For reference, averaged over the
decade 2003–2013, the annual rate of sexual assault in the total U.S. population was
estimated to be 0.25 percent for men and 2.03 percent for women by the National
Crime Victimization Survey (Bureau of Justice Statistics, undated).
In addition, the pattern of results we observe using the RAND survey is generally
similar to results from previous WGRR and WGRA studies. Our estimates of sexual
assault rates in the reserve component (0.38 percent for men, 3.13 percent for women)
are not statistically significantly different from the 2012 WGRR estimates of unwanted
sexual contact (0.5 percent for men, 2.8 percent for women). We know from our comparison of the WGRA and RMWS questions that the unwanted sexual contact and
sexual assault measures arrive at similar estimates. If our RMWS estimates of sexual
assault were substantially biased downward by confusion over question wording or
scope, it would suggest that either (a) the 2012 WGRR also suffered from the same
systematic response bias, despite the fact that it used substantially different question
wording and instructions, or (b) the true rate of sexual assault in the reserve component
jumped markedly since 2012, but this jump was obscured by the offsetting response
bias created by confusion over the RAND questions. Finally, our finding
that reserve-component men and women experienced lower rates of past-year sexual
assaults than active-component members agrees with the parallel finding for unwanted
sexual contact in all previous WGRR surveys. Indeed, the differences in risk between
the active and reserve components observed in the 2012 WGRA and WGRR were
very similar to the differences observed in the RMWS.
For these reasons, we doubt that our findings of elevated risk of sexual assault
among active-component members compared to reserve-component members, and
our finding that sexual assaults against reserve-component members disproportionately involve military personnel or settings, can be explained entirely by a systematic
response bias on the part of reserve-component members. Nevertheless, we note that
in 2015 DMDC conducted a new WGRR survey, and it could use this new survey to
further examine whether reserve-component members understand the scope of the
unwanted sexual contact and sexual assault questions.
Conclusions
This chapter considers a range of possible sources of bias in our survey estimates, such
as exclusions from the sample frame (e.g., service members exposed to sexual assault
risk who were nevertheless excluded from the frame) and specification errors (people
incorrectly categorized as experiencing a sexual assault). Table 6.2 summarizes the
estimates presented in this chapter for the likely direction and approximate magnitude
of these possible sources of bias in our estimate of the number of service members who
experienced a sexual assault in the past year.
By far the largest source of potential bias is the exclusion from the sample frame
of members who served in the military during the past year, but separated before the
sample frame was drawn. Using a range of estimates for their sexual assault risk, from
rates that are almost certainly too low to almost certainly too high, this source of error
contributes to an underestimate of 4 percent to 14 percent of the true number.
This finding, in conjunction with the effect of excluding members in their first
six months of service, suggests that our survey estimates are almost certainly slightly
biased in the direction of providing underestimates of sexual assault prevalence.

Table 6.2
Summary of Possible Biases in the Estimated Number of Active-Component Members Who Experienced a Sexual Assault Due to Sample Frame and Specification Errors

Rank  Source of Bias                                           Possible Size of  Possible Size of
                                                               Overestimate      Underestimate
1     Exclusion of recent separations                                            900–2,800
2     Inclusion of nonpenile oral penetration                  250
3     Exclusion of assaults in first six months of service                       25–190
4     Inclusion of frozen in fear and blackouts                50
5     Inclusion of preservice assaults                         <10

NOTE: The overall RMWS estimate of the number of active-component members who experienced a sexual assault in the past year was 20,300 individuals. Thus, an underestimate of 203 would correspond to the RMWS estimate omitting 1 percent of all true cases.

That is, the sources of bias that plausibly lead to underestimates are considerably larger than
the countervailing sources of bias that may lead to overestimates. Moreover, the two
largest potential sources of overestimation (our inclusion of the blackout and frozen-in-fear
forms of coercion and of nonpenile oral penetration) are not, in our opinion, actually errors. The decision to include
these members in our population estimate was consistent with the definitions of these
crimes within the UCMJ. We list them in this chapter only for the benefit of those
who question how large an effect these choices had on our overall estimates. Finally,
we also conclude that, although some members of the reserve component may have
misconstrued the survey instructions, and therefore reported only those sexual assaults
they experienced that were linked in some way to the military, such errors are unlikely
to be large enough to explain the high proportion of sexual assaults against reservists
that are linked to their military service.

CHAPTER SEVEN

Performance of the Sexual Harassment and Gender Discrimination Module
Coreen Farris, Lisa H. Jaycox, and Terry L. Schell

The survey measures of sexual harassment and gender discrimination in the military
were designed to assess the extent to which negative workplace experiences rose to the
level of a DoD-defined MEO violation. As such, the measures differ from other measures in the literature, which do not require respondents to meet such a high threshold for inclusion in the group of individuals who have experienced sexual harassment
or gender discrimination. This chapter provides a thorough description of the survey
construction and derived variable definitions. By comparing the percentage of service members who endorsed a screening item (e.g., repeated sexual “jokes”) with those
who also met the follow-up criteria that the behavior was persistent or met a “reasonable person” standard, the reader can examine how these additional definitional
requirements affected the final estimates of sexual harassment. In addition, this chapter describes a programming error that was discovered by DMDC in the definition of
sexually hostile workplace harassment and quantifies the very small effect it had on the
previously reported estimates of sexual harassment.
Sexual Harassment and Gender Discrimination Screening Items
The section of the survey that assessed respondents’ experiences with sexual harassment
or gender discrimination began with a brief set of instructions that informed them that
the next set of questions would query “several things that someone from work might
have done to you that were upsetting or offensive, and that happened AFTER [date
exactly one year prior to survey completion date].” The instructions also provided a
detailed definition of “someone from work”:
[A]ny person you have contact with as part of your military duties. “Someone from
work” could be a supervisor, someone above or below you in rank, or a civilian
employee/contractor. They could be in your unit or in other units.

The instructions also specified that incidents could have occurred on-duty or off-duty,
on-base or off-base, and should be included provided the person who did them was
someone from work.
The module began with 15 screening questions that assessed inappropriate workplace behaviors (see Table 7.1, items SH1–SH15). Respondents who denied having
experienced any of the 15 inappropriate workplace behaviors in the past year were categorized as service members who were not sexually harassed or discriminated against;
these respondents received no further questions in this module.
Table 7.1
Fifteen Inappropriate Workplace Behaviors and the Percentage of Men and Women Who Indicated They Experienced Each Behavior in the Past Year

Each entry shows the percentage (95% CI) of men and of women who experienced the behavior.

Any Sexually Hostile Workplace Behaviors (SH1–SH11): Men 11.86 (11.20–12.54); Women 25.76 (25.13–26.41)

SH1: Since [X Date], did someone from work repeatedly tell sexual “jokes” that made you uncomfortable, angry, or upset? Men 5.17 (4.69–5.68); Women 13.08 (12.57–13.61)

SH2: Since [X Date], did someone from work embarrass, anger, or upset you by repeatedly suggesting that you do not act like a [man/woman] is supposed to? Men 6.28 (5.75–6.84); Women 7.65 (7.24–8.08)

SH3: Since [X Date], did someone from work repeatedly make sexual gestures or sexual body movements (for example, thrusting their pelvis or grabbing their crotch) that made you uncomfortable, angry, or upset? Men 2.65 (2.28–3.07); Women 5.13 (4.77–5.51)

SH4: Since [X Date], did someone from work display, show, or send sexually explicit materials like pictures or videos that made you uncomfortable, angry, or upset? Men 1.58 (1.34–1.85); Women 3.59 (3.31–3.90)

SH5: Since [X Date], did someone from work repeatedly tell you about their sexual activities in a way that made you uncomfortable, angry, or upset? Men 3.53 (3.15–3.95); Women 7.55 (7.14–7.97)

SH6: Since [X Date], did someone from work repeatedly ask you questions about your sex life or sexual interests that made you uncomfortable, angry, or upset? Men 2.86 (2.49–3.27); Women 8.22 (7.79–8.68)

SH7: Since [X Date], did someone from work make repeated sexual comments about your appearance or body that made you uncomfortable, angry, or upset? Men 2.01 (1.70–2.35); Women 8.70 (8.26–9.15)

SH8 and SH8a: Since [X Date], did someone from work either take or share sexually suggestive pictures or videos of you when you did not want them to? AND Did this make you uncomfortable, angry, or upset? Men 0.45 (0.31–0.62); Women 1.03 (0.88–1.19)

SH9 and SH9a: Since [X Date], did someone from work make repeated attempts to establish an unwanted romantic or sexual relationship with you? These could range from repeatedly asking you out for coffee to asking you for sex or a “hook-up.” AND Did these attempts make you uncomfortable, angry, or upset? Men 0.61 (0.44–0.83); Women 9.02 (8.58–9.48)

SH10: Since [X Date], did someone from work intentionally touch you in a sexual way when you did not want them to? This could include touching your genitals, breasts, buttocks, or touching you with their genitals anywhere on your body. Men 1.18 (0.95–1.45); Women 3.06 (2.77–3.36)

SH11: Since [X Date], did someone from work repeatedly touch you in any other way that made you uncomfortable, angry, or upset? This could include almost any unnecessary physical contact including hugs, shoulder rubs, or touching your hair, but would not usually include handshakes or routine uniform adjustments. Men 1.38 (1.16–1.64); Women 5.31 (4.97–5.68)

Any Quid Pro Quo Behaviors (SH12–SH13): Men 0.55 (0.39–0.75); Women 2.35 (2.10–2.61)

SH12: Since [X Date], has someone from work made you feel as if you would get some [If reserve, insert “military”] workplace benefit in exchange for doing something sexual? For example, they might hint that they would give you a good evaluation/fitness report, a better assignment, or better treatment at work in exchange for doing something sexual. Something sexual could include talking about sex, undressing, sharing sexual pictures, or having some type of sexual contact. Men 0.41 (0.27–0.61); Women 1.81 (1.60–2.05)

SH13: Since [X Date], has someone from work made you feel like you would get punished or treated unfairly in the [If reserve, insert “military”] workplace if you did not do something sexual? For example, they hinted that they would give you a bad evaluation/fitness report, a bad assignment, or bad treatment at work if you were not willing to do something sexual. This could include being unwilling to talk about sex, undress, share sexual pictures, or have some type of sexual contact. Men 0.33 (0.22–0.47); Women 1.38 (1.20–1.57)

Any Gender Discrimination Behaviors (SH14–SH15): Men 4.11 (3.77–4.46); Women 29.73 (29.09–30.38)

SH14: Since [X Date], did you hear someone from work say that [men/women] are not as good as [women/men] at your particular [If reserve, insert “military”] job, or that [men/women] should be prevented from having your job? Men 1.84 (1.62–2.08); Women 19.56 (18.96–20.16)

SH15: Since [X Date], do you think someone from work mistreated, ignored, excluded, or insulted you because you are a [man/woman]? Men 3.10 (2.82–3.41); Women 24.36 (23.76–24.97)

Classification of Sexual Harassment of the Sexually Hostile Work Environment Type
Respondents who indicated that they experienced an inappropriate workplace behavior received follow-up questions specific to the item(s) that they endorsed. The first 11
items (SH1–SH11) correspond to workplace behaviors that could be categorized as
the sexually hostile workplace form of sexual harassment if the offensive behavior was
either (1) persistent (i.e., the respondent indicated the behavior continued even after the
coworker knew that it was upsetting to others) or was described by the respondent as
(2) severe (i.e., the behavior was so severe that most service members of the respondent’s
gender would find it offensive). A flowchart of the follow-up questions and the logic for final categorization as an MEO violation appears in Figure 7.1.
For nine of the 11 items that assessed behaviors that might meet the criteria for
sexually hostile work environment harassment (SH1–SH7, SH9, SH11), respondents
received up to three follow-up questions to assess whether the behavior was persistent or severe. For ease of responding, persistence was measured with a series of two
questions. First, we assessed whether the offender was aware that his or her behavior
was offensive. The item text was: “Do you think they knew that you or someone else
wanted them to stop? If it happened more than once or by more than one person, do
you think any of them ever knew?” For those who believed the offender was aware that
others were offended by his or her behavior, the second question asked: “Did they continue this unwanted behavior even after they knew that you or someone else wanted
them to stop?” If the respondent indicated that the behavior persisted even after the
offender was aware that someone wanted them to stop, their experience met the persistence criterion, and the respondent was classified as having experienced sexual harassment of the sexually hostile work environment type.
For respondents who had experienced an inappropriate workplace behavior that
was not persistent, their experience might still constitute sexual harassment of the
sexually hostile work environment type if the behavior they experienced was sufficiently severe as to meet the reasonable person standard.

Figure 7.1
Flowchart of the Assessment of Sexually Hostile Workplace Harassment, Quid Pro Quo Sexual Harassment, and Gender Discrimination
[The flowchart summarizes the categorization logic. Hostile workplace behaviors: a respondent who experienced an inappropriate workplace behavior is categorized as experiencing hostile workplace sexual harassment if the offender knew that someone wanted him or her to stop and continued the behavior (persistence criterion) or, if not persistent, if the behavior was severe enough to meet the reasonable person standard. Quid pro quo behaviors: the behavior constitutes quid pro quo sexual harassment if the respondent indicates direct evidence of a quid pro quo offer or exchange. Gender discrimination behaviors: the behavior constitutes gender discrimination if the respondent indicates that it harmed or limited his or her career. RAND RR870/6-7.1]

Consistent with DoD policy,
severity was assessed via the item “Do you think that this was ever severe enough that
[most men/women] in the military would have been offended by [these jokes] if they
had heard them?” The phrase in brackets was specific to the screening question assessed.
Service members who responded “yes” to this follow-up item were classified as having
experienced sexual harassment of the sexually hostile work environment type.
For one of the 11 items corresponding to sexually hostile work environment harassment (SH10), no follow-up questions to assess persistence or severity were required (or
administered). This item assessed whether “someone from work intentionally touched
you in a sexual way when you did not want them to,” which could include touching “genitals, breasts, buttocks, or touching you with their genitals anywhere on your body.” If this behavior occurred in a work environment (as assessed), it is automatically categorized as an incident of sexual harassment of the sexually hostile work environment type. According to case law and DoD directives, unwanted sexual contact need not be persistent (i.e., a single occurrence is sufficient to establish an MEO
violation) and is, by definition, severe, as it is also classified as a criminal offense under
UCMJ Article 120.
Finally, for one of the 11 items corresponding to sexually hostile work environment harassment (SH8), the follow-up question assessing severity was administered,
but questions assessing persistence were not. This screening question assessed whether
someone from work took or shared “sexually suggestive pictures or videos of you when
you did not want them to.” If this behavior occurs in a work setting, it need not be persistent to rise to the level of an MEO violation—a single occurrence is adequate. However, it does have to be sufficiently severe that a reasonable person would be offended by
the incident. For example, while some service members might be personally offended
by an image of themselves in a form-fitting shirt, circulating the photo would not be
considered sexual harassment if most service members would not be offended by a
similar photograph of themselves. Respondents who endorsed the screening item, and
verified that they believed that most service members of the same gender would be
offended if it had happened to them, were categorized as having experienced sexual
harassment of the sexually hostile work environment type.
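
The branching logic for items SH1–SH11 can be summarized compactly in code. The sketch below is a hypothetical Python rendering of the rules as described in this section, not the study's actual scoring program; the item groupings and function name are our own.

```python
# Hypothetical sketch of the hostile-work-environment rules described above.
# None means a follow-up question was not administered for that item.

AUTOMATIC = {"SH10"}       # unwanted sexual touching: automatic MEO violation
SEVERITY_ONLY = {"SH8"}    # pictures/videos: severity assessed, persistence not

def hostile_work_environment(item, endorsed, knew=None, continued=None, severe=None):
    """True if responses meet the sexually hostile work environment
    criteria: persistent, or severe by the reasonable person standard."""
    if not endorsed:
        return False
    if item in AUTOMATIC:
        return True
    if item not in SEVERITY_ONLY and knew and continued:
        return True          # persistence criterion met
    return bool(severe)      # otherwise, reasonable-person severity decides

# Example: sexual "jokes" (SH1) that continued after the offender knew.
assert hostile_work_environment("SH1", True, knew=True, continued=True)
```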
Results

As seen in Table 7.1, 25.8 percent of service women and 11.9 percent of service men
indicated that they had experienced at least one of the 11 inappropriate workplace
behaviors that could indicate a sexually hostile work environment in the past year.
Some of the 11 behaviors, such as a coworker taking or sharing sexually suggestive pictures (SH8 and SH8a), were relatively rare (1.0 percent of women and 0.45 percent of
men endorsed the item), whereas other hostile workplace behaviors, such as coworkers
repeatedly telling jokes that the service person found offensive (SH1), were experienced
by many women (13.1 percent) and men (5.2 percent). Across the 11 behaviors that
could indicate a sexually hostile workplace, women were more likely than men to have
experienced each (Table 7.1). In the most extreme differentiation between the genders,
women (9.02 percent) were nearly 15 times more likely than men (0.61 percent) to indicate that someone from work had made repeated attempts to establish an unwanted
romantic or sexual relationship that the respondent found offensive (SH9 and SH9a).
Tables 7.2 and 7.3 show the flow of service women and men (respectively) through
the process by which inappropriate workplace behaviors were categorized as sexual harassment of the sexually hostile work environment type. Considering first the persistence criterion for establishing that a sexually hostile work environment is present, there was variation between men and women. For service women who
endorsed inappropriate hostile workplace behavior, 39 percent to 59 percent indicated
that the person knew that someone was offended and continued the behavior anyway
(Table 7.2). In general, fewer men indicated that the inappropriate behavior that they
experienced was known to be offensive and persisted (32 percent to 47 percent across
behaviors; Table 7.3).
Even when an inappropriate hostile workplace behavior is not persistent, DoD
directives categorize events that were so severe that a reasonable person would be
offended as MEO violations. Among women endorsing a hostile workplace behavior
and who denied that it was persistent, 59 percent to 85 percent indicated that it was
severe enough that most women in the military would be offended (Table 7.2). For
men, the proportion who believed the nonpersistent, inappropriate workplace behavior was severe enough to meet a reasonable-person standard varied from 20 percent to
54 percent across behaviors (Table 7.3).
Combined across the two criteria used to categorize inappropriate workplace
behaviors as sexual harassment of the sexually hostile work environment type, at least
three-quarters of women who experienced the inappropriate behaviors were ultimately
categorized as experiencing an MEO violation (see column 3 in Table 7.2). At the lower
end, 74 percent of women who indicated that someone at work had touched them in
a way that made them uncomfortable or upset (SH11) were categorized as experiencing an MEO violation (i.e., the behavior was persistent or severe). At the upper end,
88 percent of women who indicated that someone at work had repeatedly made sexual
gestures or sexual body movements (SH3) were categorized as experiencing an MEO
violation. This range excludes an item assessing unwanted sexual touching (SH10),
which is automatically categorized as an MEO violation.
The percentage of men who experienced inappropriate workplace behaviors and
were ultimately categorized as having experienced an MEO violation tended to be
lower than for women (49–66 percent). For men, the behavior least likely to meet one
of the criteria for classification as sexual harassment of the sexually hostile work environment type was screening item SH5 (someone from work repeatedly discussing their
sexual experiences in a way that offended the respondent). Of the men who indicated
that they had this experience, 49 percent indicated that it was either persistent or met

Table 7.2
For Women, Questionnaire Flow from Experiencing an Inappropriate Workplace Behavior to Being Categorized as Having Experienced Sexual Harassment of the Sexually Hostile Work Environment Type

Columns: (1) percentage of female respondents who experienced the inappropriate workplace behavior (95% CI); (2) of those, percentage who indicated the behavior was persistent; (3) if not persistent, percentage who indicated the behavior was severe/meets the reasonable person standard; (4) of those who experienced the behavior, percentage categorized as experiencing a sexually hostile work environment; (5) percentage of all female respondents categorized as experiencing a sexually hostile work environment (95% CI).

SH1 (repeatedly tell sexual “jokes” that made you uncomfortable, angry, or upset): (1) 13.1 (12.57–13.61); (2) 49%; (3) 70%; (4) 84%; (5) 11.0 (10.52–11.49)
SH2 (embarrass, anger, or upset you by repeatedly suggesting that you do not act like a [man/woman] is supposed to): (1) 7.7 (7.24–8.08); (2) 48%; (3) 69%; (4) 83%; (5) 6.3 (5.96–6.75)
SH3 (repeatedly make sexual gestures or sexual body movements that made you uncomfortable, angry, or upset): (1) 5.1 (4.77–5.51); (2) 59%; (3) 72%; (4) 88%; (5) 4.5 (4.16–4.85)
SH4 (display, show, or send sexually explicit materials like pictures or videos that made you uncomfortable, angry, or upset): (1) 3.6 (3.31–3.90); (2) 51%; (3) 67%; (4) 82%; (5) 3.0 (2.71–3.26)
SH5 (repeatedly tell you about their sexual activities in a way that made you uncomfortable, angry, or upset): (1) 7.6 (7.14–7.97); (2) 50%; (3) 71%; (4) 85%; (5) 6.4 (6.04–6.81)
SH6 (repeatedly ask you questions about your sex life or sexual interests that made you uncomfortable, angry, or upset): (1) 8.2 (7.79–8.68); (2) 52%; (3) 66%; (4) 83%; (5) 6.8 (6.44–7.26)
SH7 (make repeated sexual comments about your appearance or body that made you uncomfortable, angry, or upset): (1) 8.7 (8.26–9.15); (2) 50%; (3) 71%; (4) 84%; (5) 7.3 (6.94–7.77)
SH8 and SH8a (either take or share sexually suggestive pictures or videos of you when you did not want them to, AND this made you uncomfortable, angry, or upset): (1) 1.0 (0.88–1.19); (2) NA(a); (3) 86%; (4) 86%; (5) 0.9 (0.74–1.04)
SH9 and SH9a (make repeated attempts to establish an unwanted romantic or sexual relationship with you, AND these attempts made you uncomfortable, angry, or upset): (1) 9.0 (8.58–9.48); (2) 53%; (3) 67%; (4) 83%; (5) 7.6 (7.14–7.98)
SH10 (intentionally touch you in a sexual way when you did not want them to): (1) 3.1 (2.77–3.36); (2) NA(a); (3) NA(b); (4) 100%; (5) 3.1 (2.77–3.36)
SH11 (repeatedly touch you in any other way that made you uncomfortable, angry, or upset)(c): (1) 5.3 (4.97–5.68); (2) 39%; (3) 59%; (4) 74%; (5) 3.9(d) (3.64–4.24)

NOTE: 95-percent confidence intervals for each population estimate are indicated in parentheses. Confidence intervals are not provided for nonpopulation estimates.
a Criterion was not assessed, because the behavior need not be persistent to rise to the level of an MEO violation.
b Criterion was not assessed, because unwanted sexual touching in a workplace (a criminal behavior) is considered severe without requiring respondent verification.
c Item SH11 was asked only of respondents who answered “no” to SH10. Respondents who answered “yes” to SH10 were automatically coded (for final estimates) as having experienced the broader category represented in SH11.
d This value (3.9 percent) represents the proportion of respondents who were presented with SH11 who were ultimately categorized as having experienced a sexually hostile workplace environment. When respondents who experienced unwanted sexual touching (SH10) are included in this category, the percentage is 7.0.

Table 7.3
For Men, Questionnaire Flow from Experiencing an Inappropriate Workplace Behavior to Being Categorized as Having Experienced Sexual Harassment of the Sexually Hostile Work Environment Type

Columns: (1) percentage of male respondents who experienced the inappropriate workplace behavior (95% CI); (2) of those, percentage who indicated the behavior was persistent; (3) if not persistent, percentage who indicated the behavior was severe/meets the reasonable person standard; (4) of those who experienced the behavior, percentage categorized as experiencing a sexually hostile work environment; (5) percentage of all male respondents categorized as experiencing a sexually hostile work environment (95% CI).

SH1 (repeatedly tell sexual “jokes” that made you uncomfortable, angry, or upset): (1) 5.2 (4.69–5.68); (2) 33%; (3) 24%; (4) 49%; (5) 2.5 (2.19–2.89)
SH2 (embarrass, anger, or upset you by repeatedly suggesting that you do not act like a [man/woman] is supposed to): (1) 6.3 (5.75–6.84); (2) 35%; (3) 34%; (4) 57%; (5) 3.6 (3.14–4.02)
SH3 (repeatedly make sexual gestures or sexual body movements that made you uncomfortable, angry, or upset): (1) 2.7 (2.28–3.07); (2) 40%; (3) 30%; (4) 58%; (5) 1.5 (1.25–1.85)
SH4 (display, show, or send sexually explicit materials like pictures or videos that made you uncomfortable, angry, or upset): (1) 1.6 (1.34–1.85); (2) 32%; (3) 28%; (4) 51%; (5) 0.8 (0.63–1.02)
SH5 (repeatedly tell you about their sexual activities in a way that made you uncomfortable, angry, or upset): (1) 3.5 (3.15–3.95); (2) 36%; (3) 20%; (4) 49%; (5) 1.7 (1.48–1.99)
SH6 (repeatedly ask you questions about your sex life or sexual interests that made you uncomfortable, angry, or upset): (1) 2.9 (2.49–3.27); (2) 35%; (3) 26%; (4) 52%; (5) 1.5 (1.23–1.79)
SH7 (make repeated sexual comments about your appearance or body that made you uncomfortable, angry, or upset): (1) 2.0 (1.70–2.35); (2) 47%; (3) 38%; (4) 66%; (5) 1.3 (1.07–1.65)
SH8 and SH8a (either take or share sexually suggestive pictures or videos of you when you did not want them to, AND this made you uncomfortable, angry, or upset): (1) 0.4 (0.31–0.62); (2) NA(a); (3) 54%; (4) 54%; (5) 0.2 (0.14–0.38)
SH9 and SH9a (make repeated attempts to establish an unwanted romantic or sexual relationship with you, AND these attempts made you uncomfortable, angry, or upset): (1) 0.6 (0.44–0.83); (2) 46%; (3) 38%; (4) 66%; (5) 0.4 (0.26–0.59)
SH10 (intentionally touch you in a sexual way when you did not want them to): (1) 1.2 (0.95–1.45); (2) NA(a); (3) NA(b); (4) 100%; (5) 1.2 (0.95–1.45)
SH11 (repeatedly touch you in any other way that made you uncomfortable, angry, or upset)(c): (1) 1.4 (1.16–1.64); (2) 35%; (3) 32%; (4) 55%; (5) 0.8(d) (0.58–0.99)

NOTE: 95-percent confidence intervals for each population estimate are indicated in parentheses. Confidence intervals are not provided for nonpopulation estimates.
a Criterion was not assessed, because the behavior need not be persistent to rise to the level of an MEO violation.
b Criterion was not assessed, because unwanted sexual touching in a workplace (a criminal behavior) is considered severe without requiring respondent verification.
c Item SH11 was asked only of respondents who answered “no” to SH10. Respondents who answered “yes” to SH10 were automatically coded (for final estimates) as having experienced the broader category represented in SH11.
d This value (0.8 percent) represents the proportion of respondents who were presented with SH11 who were ultimately categorized as having experienced a sexually hostile workplace environment. When respondents who experienced unwanted sexual touching (SH10) are included in this category, the percentage is 1.9.

the reasonable person standard for severity. The behaviors most likely to meet classification criteria among men were SH7 and SH9: 66 percent of men who said that someone
from work made repeated sexual comments about their appearance, and 66 percent
of men who experienced repeated and unwanted attempts to establish a romantic relationship, were categorized as experiencing an MEO violation. Again, this range excludes unwanted sexual
touching, which is automatically categorized as an MEO violation.
Classification of Sexual Harassment of the Quid Pro Quo Type
Two screening items assessed inappropriate workplace behaviors that could indicate
that a quid pro quo violation had occurred, provided the level of evidence was sufficient
to meet DoD criteria for an MEO violation (SH12 and SH13). As shown in Table 7.1,
2 percent of women and less than 1 percent of men endorsed one of the two screening
items indicating that someone from work had either offered a workplace benefit (SH12)
or threatened a workplace punishment (SH13) in exchange for sexual behavior.
For those respondents who indicated an inappropriate workplace experience that
could constitute sexual harassment of the quid pro quo type, we assessed the level of
evidence they had for such an offer with five follow-up questions (see Table 7.4 for
exact wording). The first three follow-up items are defined in DoD directives as sufficient evidence to indicate that a quid pro quo violation occurred; thus, respondents
who endorsed one of the screening items (SH12 or SH13) and who indicated that they
had adequate evidence for an exchange (e.g., SH12a, SH12b, or SH12c) were classified as having experienced sexual harassment of the quid pro quo type. Individuals who
did not endorse any of the first three follow-up items (e.g., SH12a, SH12b, or SH12c),
and instead indicated that they believed a quid pro quo exchange was offered based on
rumor, hearsay, or inference based on the person’s personality (e.g., SH12d or SH12e)
were not classified as having experienced an MEO violation.
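
The evidence rule reduces to a single test, sketched below in hypothetical Python (not the study's derivation code); the set names are our own labels for the follow-up items.

```python
# Hypothetical sketch of the quid pro quo evidence rule described above.
DIRECT_EVIDENCE = {"SH12a", "SH12b", "SH12c"}  # sufficient under DoD directives
INDIRECT_ONLY = {"SH12d", "SH12e"}             # rumor or inference: insufficient

def quid_pro_quo_violation(screen_endorsed, endorsed_follow_ups):
    """True only if a screening item (SH12 or SH13) was endorsed and at
    least one direct-evidence follow-up item was also endorsed."""
    return screen_endorsed and bool(set(endorsed_follow_ups) & DIRECT_EVIDENCE)

assert quid_pro_quo_violation(True, ["SH12b"])
assert not quid_pro_quo_violation(True, ["SH12d", "SH12e"])
```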
As shown in Table 7.5, among women who indicated an inappropriate workplace
behavior suggestive of a quid pro quo exchange, 74.3 percent of those who were
offered a workplace benefit and 67.2 percent of those who were threatened with a
workplace punishment had sufficient evidence of an exchange to suggest that an
MEO violation had occurred and were subsequently classified as having experienced
sexual harassment of the quid pro quo type. The percentages among men were 77.2 percent and 56.9 percent, respectively (Table 7.6).
Table 7.4
Follow-Up Items Assessing the Level of Evidence for a Possible Quid Pro Quo Offer

You indicated that, after [date exactly one year prior], someone from work made you feel as if you would get some workplace benefit in exchange for doing something sexual. What led you to believe that you would get a workplace benefit if you agreed to do something sexual? Select “Yes” or “No” for each item.

SH12a. They told you that they would give you a reward or benefit for doing something sexual. (Yes/No)
SH12b. They hinted that you would get a reward or benefit for doing something sexual. For example, they reminded you about your evaluation/fitness report about the same time that they expressed sexual interest. (Yes/No)
SH12c. Someone else told you they got benefits from this person by doing sexual things. (Yes/No)
SH12d. You heard rumors from other people that this person treated others better in exchange for doing sexual things. (Yes/No)
SH12e. Based on what you knew about their personality, you thought you could get a benefit. (Yes/No)

Classification of Gender Discrimination
Two screening items assessed inappropriate workplace behaviors that could indicate that gender discrimination may have occurred (provided subsequent questions
verified harm to career, as is necessary to meet DoD criteria for an MEO violation;
SH14 and SH15). As shown in Table 7.1, 30 percent of women and 4 percent of men
endorsed one of the two screening items, indicating either that someone from work
had said that someone of their gender should not be in their work position (SH14)
or that they believed someone from work had mistreated them because of their gender (SH15).
These experiences were more common among women than men (Table 7.1). One out
of every five female service members indicated that someone from work had said that
women were not as good as men at their military occupation or that women should
be prevented from having the respondent’s job. One out of every four women said that
someone from work had mistreated, ignored, excluded, or insulted them because of
gender.
For those respondents who indicated an inappropriate workplace experience that
could constitute gender discrimination, we assessed whether the behavior “ever harmed
or limited [the respondent’s] career.” To clarify the meaning of harm to career, we provided examples including harm to an evaluation or fitness report and impact on promotion or the respondent’s next assignment. Service members who responded “yes” to
the follow-up question were categorized as having experienced a gender discrimination
MEO violation.
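
In the same spirit as the earlier sketches, the gender discrimination rule is a single conjunction; the snippet below is hypothetical Python, not the study's code.

```python
# Hypothetical sketch of the gender discrimination rule described above.
def gender_discrimination_violation(screen_endorsed, harmed_career):
    """An MEO violation requires an endorsed screening item (SH14 or
    SH15) plus an affirmative answer to the harm-to-career follow-up."""
    return screen_endorsed and harmed_career
```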
Among women who had experienced an inappropriate workplace behavior suggestive of gender discrimination, 41–43 percent indicated that the behaviors had risen to a
level that harmed or limited their careers (an MEO violation; see Table 7.7).
Among men who had these inappropriate workplace experiences, 35–50 percent were
categorized as experiencing an MEO violation (Table 7.8).

Table 7.5
For Women, Questionnaire Flow from Experiencing an Inappropriate Workplace Behavior to Being Categorized as Having Experienced Sexual Harassment of the Quid Pro Quo Type

Columns: (1) percentage of female respondents who experienced the inappropriate workplace behavior (95% CI); (2) of those, percentage who indicated they had direct evidence of an offer or exchange; (3) of those who experienced the behavior, percentage categorized as experiencing a quid pro quo violation; (4) percentage of all female respondents categorized as experiencing a quid pro quo violation (95% CI).

Since [X Date], has someone from work made you feel as if you would get . . .
SH12: . . . some workplace benefit in exchange for doing something sexual? (1) 1.8 (1.60–2.05); (2) 74%; (3) 74%; (4) 1.3 (1.16–1.56)
SH13: . . . punished or treated unfairly in the workplace if you did not do something sexual? (1) 1.4 (1.20–1.57); (2) 67%; (3) 67%; (4) 0.9 (0.77–1.10)

Table 7.6
For Men, Questionnaire Flow from Experiencing an Inappropriate Workplace Behavior to Being Categorized as Having Experienced Sexual Harassment of the Quid Pro Quo Type

Columns: (1) percentage of male respondents who experienced the inappropriate workplace behavior (95% CI); (2) of those, percentage who indicated they had direct evidence of an offer or exchange; (3) of those who experienced the behavior, percentage categorized as experiencing a quid pro quo violation; (4) percentage of all male respondents categorized as experiencing a quid pro quo violation (95% CI).

Since [X Date], has someone from work made you feel as if you would get . . .
SH12: . . . some workplace benefit in exchange for doing something sexual? (1) 0.4 (0.27–0.61); (2) 77%; (3) 77%; (4) 0.3 (0.18–0.52)
SH13: . . . punished or treated unfairly in the workplace if you did not do something sexual? (1) 0.3 (0.22–0.47); (2) 57%; (3) 57%; (4) 0.2 (0.10–0.31)

Table 7.7
For Women, Questionnaire Flow from Experiencing an Inappropriate Workplace Behavior to Being Categorized as Having Experienced Gender Discrimination

Columns: (1) percentage of female respondents who experienced the inappropriate workplace behavior (95% CI); (2) of those, percentage who indicate that it harmed their career; (3) of those who experienced the behavior, percentage categorized as experiencing a probable MEO violation; (4) percentage of all female respondents categorized as experiencing an MEO violation (95% CI).

Since [X Date], . . .
SH14: . . . did you hear someone from work say that [men/women] are not as good as [women/men] at your particular job, or that [men/women] should be prevented from having your job? (1) 19.6 (18.98–20.16); (2) 41%; (3) 41%; (4) 8.1 (7.69–8.51)
SH15: . . . do you think someone from work mistreated, ignored, excluded, or insulted you because you are a [man/woman]? (1) 24.4 (23.76–24.97); (2) 43%; (3) 43%; (4) 10.6 (10.15–11.03)

Table 7.8
For Men, Questionnaire Flow from Experiencing an Inappropriate Workplace Behavior to Being Categorized as Having Experienced Gender Discrimination

Columns: (1) percentage of male respondents who experienced the inappropriate workplace behavior (95% CI); (2) of those, percentage who indicate that it harmed their career; (3) of those who experienced the behavior, percentage categorized as experiencing a probable MEO violation; (4) percentage of all male respondents categorized as experiencing an MEO violation (95% CI).

Since [X Date], . . .
SH14: . . . did you hear someone from work say that [men/women] are not as good as [women/men] at your particular job, or that [men/women] should be prevented from having your job? (1) 1.8 (1.62–2.08); (2) 35%; (3) 35%; (4) 0.6 (0.52–0.77)
SH15: . . . do you think someone from work mistreated, ignored, excluded, or insulted you because you are a [man/woman]? (1) 3.1 (2.82–3.41); (2) 50%; (3) 50%; (4) 1.6 (1.35–1.78)

Error in Categorizing Hostile Workplace Experiences
A programming error caused a small number of respondents to be mischaracterized
as missing on key sexual harassment variables, when, in fact, they had answered in a
way that met criteria for having experienced an MEO violation. This occurred within
the series of questions that assessed hostile work environment harassment. The coding
error affected the categorization of respondents who indicated that the inappropriate
behavior occurred, indicated that the offender(s) knew that someone wanted them to
stop, skipped the question assessing whether the offender(s) continued after they knew,
and, finally, indicated that the behavior was serious enough that most service members
would have been offended. This series of answers is adequate to indicate that an MEO
violation occurred, but respondents with this pattern of responses were incorrectly categorized as missing too many responses to characterize their experiences.
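
The affected skip pattern is easiest to see in code. The sketch below is a hypothetical Python rendering of the corrected categorization (None marks a skipped follow-up question); it is not the study's actual program.

```python
# Hypothetical sketch of the corrected categorization logic.
# None indicates the respondent skipped that follow-up question.

def categorize(endorsed, knew, continued, severe):
    if not endorsed:
        return "no violation"
    if knew and continued:
        return "violation (persistent)"
    if severe:
        # Corrected rule: severity alone is sufficient, even when the
        # "continued" item was skipped. The erroneous program instead
        # treated this pattern as having too many missing responses.
        return "violation (severe)"
    if continued is None or severe is None:
        return "missing"
    return "no violation"

# The affected pattern: endorsed, offender knew, "continued" skipped, severe.
print(categorize(True, True, None, True))  # -> "violation (severe)"
```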
To investigate the possible effect of the error on the previously reported estimates,
we calculated the number of respondents who were affected and how correcting the
error would change the percentage estimates for MEO violations. Table 7.9 shows the
number of respondents in each question series who met criteria for having experienced
the type of sexual harassment assessed, but who were misclassified as missing. For
example, in the second row, showing MEO violations attributable to being told one
is not acting according to one’s gender role, four women were incorrectly categorized
as missing when they should have been categorized as experiencing hostile workplace
sexual harassment. These respondents went on to answer the remaining questions that
assessed other types of hostile workplace behaviors. Two of the four women described
other types of hostile workplace violations, such that they were ultimately correctly categorized as belonging in the group of respondents who experienced a sexually hostile
workplace environment despite the misclassification on one set of questions. Finally,
the remaining two women were categorized as having experienced another type of
MEO violation, and, thus, were ultimately correctly categorized as having experienced
an MEO violation.
The last column of Table 7.9 provides the number of respondents who were not
counted as having experienced an MEO violation. This error occurred for, at most, five
women, and sometimes no one, for each type of hostile work environment harassment.
Across all types, a total of eight women and one man were not ultimately categorized
as experiencing hostile workplace harassment when they should have been, and seven
women were not ultimately characterized as experiencing an MEO violation when
they should have been. The total number of people is lower than the sum across rows,
because in some cases the same person was mischaracterized in more than one row.
Finally, we calculated the change in the top-line estimates of sexually hostile work
environment, sexual harassment, and any MEO violation that occurred as a result of
the coding error. At most, the estimates increased by three one-hundredths of a
percentage point and, in some cases, not at all (see Table 7.10). These discrepancies are too

Table 7.9
Number of Active-Component Respondents Who Were Mischaracterized as “Missing” When They Should Have Been Coded as Experiencing a Hostile Work Environment

Counts are shown as men/women. Columns: (A) number of respondents mischaracterized as “missing,” when their responses to follow-up items qualified them as having experienced a hostile work environment; (B) of those, number who were not subsequently categorized as experiencing a sexually hostile work environment on the basis of responses on another hostile workplace behavior; (C) of those, number who were not subsequently categorized as experiencing an MEO violation on the basis of responses to any other series of MEO questions.

SH1 (repeatedly tell sexual “jokes” that made you uncomfortable, angry, or upset): (A) 1/10; (B) 1/5; (C) 0/5
SH2 (embarrass, anger, or upset you by repeatedly suggesting that you do not act like a [man/woman] is supposed to): (A) 0/4; (B) 0/2; (C) 0/0
SH3 (repeatedly make sexual gestures or sexual body movements that made you uncomfortable, angry, or upset): (A) 2/1; (B) 0/0; (C) 0/0
SH4 (display, show, or send sexually explicit materials like pictures or videos that made you uncomfortable, angry, or upset): (A) 1/6; (B) 0/0; (C) 0/0
SH5 (repeatedly tell you about their sexual activities in a way that made you uncomfortable, angry, or upset): (A) 1/8; (B) 0/0; (C) 0/0
SH6 (repeatedly ask you questions about your sex life or sexual interests that made you uncomfortable, angry, or upset): (A) 0/6; (B) 0/0; (C) 0/0
SH7 (make repeated sexual comments about your appearance or body that made you uncomfortable, angry, or upset): (A) 1/6; (B) 0/0; (C) 0/0
SH8 and SH8a (either take or share sexually suggestive pictures or videos of you when you did not want them to, AND this made you uncomfortable, angry, or upset): (A) 0/0; (B) 0/0; (C) 0/0
SH9 and SH9a (make repeated attempts to establish an unwanted romantic or sexual relationship with you, AND these attempts made you uncomfortable, angry, or upset): (A) 0/7; (B) 0/2; (C) 0/2
SH10 (intentionally touch you in a sexual way when you did not want them to): (A) 0/0; (B) 0/0; (C) 0/0
SH11 (repeatedly touch you in any other way that made you uncomfortable, angry, or upset): (A) 0/6; (B) 0/2; (C) 0/2
Total people: (B) 1/8; (C) 0/6

Table 7.10
Changes in Top-Line MEO Violation Estimates as a Result of Programming Error

Each row shows the estimate with the coding error, the corrected estimate, and the change (95% CIs in parentheses).

Sexually Hostile Work Environment
  Men: 6.58% (6.07–7.12) with error; 6.58% (6.07–7.13) corrected; change +0.0023%
  Women: 21.41% (20.81–22.03) with error; 21.44% (20.84–22.06) corrected; change +0.0292%
Sexual Harassment
  Men: 6.61% (6.09–7.15) with error; 6.61% (6.09–7.16) corrected; change +0.0023%
  Women: 21.57% (20.96–22.19) with error; 21.60% (20.99–22.22) corrected; change +0.0293%
Any MEO Violation
  Men: 7.43% (6.91–7.99) with error; 7.43% (6.91–7.99) corrected; change 0.0000%
  Women: 25.97% (25.34–26.61) with error; 26.00% (25.36–26.64) corrected; change +0.024%

small to have any practical or policy implications, so we record the corrected values in
this volume but will not revise and republish Volumes 2 and 3 of Sexual Assault and
Sexual Harassment in the U.S. Military, nor the two top-line reports where the erroneous values were previously published (National Defense Research Institute, 2014a;
2014b). Appendix D provides complete summary tables of the changes in top-line
estimates by service branch and gender (Tables D.1 and D.2), by pay grade and gender
(Tables D.3 and D.4), and for the reserve component by gender (Table D.5). All of the analyses presented earlier in this chapter use the corrected variable derivation for sexually
hostile work environment, sexual harassment, and any MEO violation.
Conclusion
In the RAND form, the measures of sexual harassment and gender discrimination
first assess whether the respondent had a series of inappropriate workplace experiences.
For respondents who have had these negative events occur, additional follow-up items
assess characteristics of the events that are required in order for the experiences to be
categorized as an MEO violation. By definition, a higher percentage of service members had inappropriate workplace experiences in the past year than had an MEO violation. Depending on the needs of the leader or decisionmaker accessing these numbers, he or she may be more interested in the higher percentage of service members who experienced inappropriate workplace behaviors (whether those experiences rose
to the level of an MEO violation or not), or he or she may be more interested in the
smaller percentage of service members whose experiences met DoD policy definitions
of an MEO violation. This chapter provides clarity about the nature of each of these
estimates. In addition, we described a programming error in our definition of sexually hostile work environment, which affected a very small number of respondents and
shifted prevalence estimates of hostile workplace experiences by less than three one-hundredths of a percentage point.

CHAPTER EIGHT

Comparison of Events Identified by the Prior Form and
RAND Forms
Andrew R. Morral, Terry L. Schell, and Coreen Farris

Whereas most RMWS survey respondents received a version of the new RAND form,
29,541 were randomly assigned to a questionnaire that included the sexual harassment
and unwanted sexual contact questions used in earlier administrations of the WGRA
survey (the prior form). This survey design allowed us to establish whether rates of
sexual harassment and unwanted sexual contact were different in 2014 than in prior
years, using survey questions and methods that were comparable to those used in the
past. It also allowed us to compare estimates derived from the RAND forms with those
from the prior form. That is, because respondents were randomly assigned to one or
the other form, there should be no systematic differences in the respondents to each
form, or their true rates of exposure to criminal or MEO violations. Therefore, in this
chapter, we compare estimates from each form to draw inferences about differences in
the types of events captured.
Although top-line rates of exposure to sexual assault (or, under the WGRA,
unwanted sexual contact) and sexual harassment as measured by the prior form and
RAND forms are similar, this apparent similarity conceals substantial differences
in the people counted and the types of crimes they experienced. The RMWS was
designed to capture sex crimes as defined in the UCMJ and MEO violations as defined
in DoD policy. In contrast, the WGRA measures a climate of unwanted sexual experiences associated with illegal behavior, but was not designed as a precise crime or MEO
violation measure.
As summarized below, comparisons between the results of the prior form and
those of the RAND form suggest that the WGRA counts among those with past-year
“unwanted sexual contacts” and sexual harassment some people who have not experienced sex crimes or MEO violations in the past year, while at the same time missing
others who have had such experiences. We summarize here some of the key differences
in the offenses counted by the two methods.
All comparisons in this chapter are for the members of the DoD active-component population. To ensure that differences noted in this section are attributable to the
questionnaires themselves, and not the sample weighting system used, in this chapter
all estimates use the RMWS weights. This includes estimates for outcomes assessed on
the prior form survey. In earlier volumes, prevalence estimates for outcomes assessed on the prior form used the WGRA weights to be consistent with existing time trends,
while results based on the RAND-designed survey questions use the RMWS weights,
which generally yield higher estimates of sexual assault, sexual harassment, and gender
discrimination (see Table 3.10). As such, estimates reported in this chapter based on
data from the prior form survey differ slightly from those reported in other volumes of
Sexual Assault and Sexual Harassment in the U.S. Military. For example, the estimated number
of members who experienced an unwanted sexual contact in Volume 2 was 18,900; in
this chapter, using the RMWS weights, this is estimated to be 22,100 members.
Some Past-Year Unwanted Sexual Contacts Counted with the Prior
Form Occurred More Than a Year Ago
Both the prior form and RAND forms asked about events occurring in the past year.
Prior research shows that many respondents report crimes as having taken place in
the past year when they actually experienced them more than a year ago. This kind of
timeframe “telescoping” can lead to substantially overestimated crime rates (Andersen, Kasper, and Frankel, 1979; Cantor, 1989; Lehnen and Skogan, 1984). To minimize this bias, the RMWS incorporated many techniques designed to reduce or limit
response telescoping (see Volume 1). This appears to have been effective. At the end
of the sexual assault section on the RAND form and the end of the unwanted sexual
contact section on the prior form, we asked respondents to confirm that the event they
were describing occurred in the past year.1 Whereas 6.8 percent taking the RAND
form said they were sure the event actually occurred more than a year ago (i.e., should
not be counted as a past-year event), 23.0 percent of prior-form respondents said they
were sure the event occurred more than a year ago.2
Moreover, respondents who confirmed that their sexual assaults occurred more than one year ago were excluded from the past-year estimates derived from the RAND form. Using the standard WGRA procedures, the much larger portion who acknowledged that their “one event” occurred more than a year ago were nevertheless included in estimates for the rate of past-year unwanted sexual contacts, which results in overcounting. Had they been excluded, rates on the RAND and prior forms would have looked substantially different from each other, with the RAND form identifying more cases of past-year sexual assault.

1 Such a question had not previously been included in the WGRA survey, and it represented the only item added to the prior form used in the 2014 RMWS.

2 Our estimate here differs from the preliminary results offered in the top-line report (National Defense Research Institute, 2014a). In that report, we said 25 percent of those counted as having a past-year unwanted sexual contact later said they were sure the event did not occur in the past year. Whereas, in the top-line report, our prior form analyses all used sample weights that followed the methods previously used by DMDC, in this report we applied the new weights developed by RAND to both the RAND form results and the prior form results, so that differences observed between results from the two forms are due exclusively to the form, not to differences in the weights applied to each form.
Both the RAND and prior forms asked respondents for details on one of the possibly multiple unwanted events that occurred in the past year, and it is this “one event”
that respondents were asked to confirm occurred within the past year. It could be argued,
however, that even though the “one event” selected by prior-form respondents was more
often found to have occurred more than a year ago, they nevertheless may still have experienced another unwanted sexual contact that did occur in the past year. That is, perhaps
the respondent had several events occur, and even though the one they described happened more than a year ago, others could have occurred within the past year, resulting in
their proper classification as having a past-year unwanted sexual contact.
This possibility can be tested indirectly. On the prior form, the item immediately following the unwanted sexual contact question asked how many such events
occurred in the past year. The next item asked which of those events had the “greatest
effect” on the person, and this became the “one event” that respondents were asked
to describe in greater detail. If respondents said that only one such event occurred to
them, it is presumably only this one that qualified them as having experienced a past-year unwanted sexual contact and which they selected as their “one event.” Therefore,
among those with just one unwanted sexual contact, all who later said their one event
actually occurred more than a year ago were considered as not having had a past-year
unwanted sexual contact.
More than 40 percent of respondents classified as having a past-year unwanted
sexual contact on the prior form said that only one such event occurred. Among this
40 percent, 23 percent (or 9.3 percent of all indicating one or more unwanted sexual
contacts) later said the event definitely occurred more than a year ago, and so should
be excluded from estimated rates of past-year unwanted sexual contacts. Eliminating
these cases would decrease the population estimate for past-year unwanted sexual contact by 9.3 percent, from about 22,100 to 20,300.
Undoubtedly, however, other respondents who said their one event occurred more
than a year ago but said they had more than one unwanted sexual contact in the past
year should also be excluded from the estimate. That is, for some of these respondents, all of their “past-year” unwanted sexual contacts would turn out to have occurred more than a year ago. On
the prior form, for those who said their one event actually occurred more than a year
ago, we did not follow up to establish whether their other “past year” events did occur
in the past year. We did such a follow-up on the RAND form, however, finding that
73.9 percent of those with more than one past-year sexual assault and who said their
worst event occurred more than a year ago did not have any sexual assault in the past
year. If we assumed that a comparable rate would be found for prior-form respondents
with more than one past-year unwanted sexual contact and who confirmed their one
event occurred more than a year ago, this would further reduce the population estimate for past-year unwanted sexual contact to 18,400.


Together, these analyses suggest that the WGRA population estimates (as well as
the prior form estimate in 2014) of past-year unwanted sexual contact could be inflated
by 20 percent, an overestimate attributable to telescoping.
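This arithmetic can be checked from the published figures alone. The sketch below is a back-of-envelope reproduction using only the rounded point estimates quoted above, so its outputs approximate, rather than exactly reproduce, the report's weighted results; the 0.404 single-event share is an assumption standing in for the "more than 40 percent" in the text.

```python
# Back-of-envelope reproduction of the telescoping adjustment, using only the
# rounded point estimates quoted in the text; the report's own figures come
# from weighted survey data, so small discrepancies are expected.

baseline = 22_100   # prior-form past-year unwanted sexual contacts (RMWS weights)
step1 = 20_300      # published estimate after removing single-event cases
step2 = 18_400      # published estimate after the multi-event adjustment

# Step 1: roughly 40% reported exactly one event, and 23% of those confirmed
# the event occurred more than a year ago.
single_event_share = 0.404   # assumption for "more than 40 percent"
print(f"removed in step 1: {0.23 * single_event_share:.1%}")   # ~9.3% of cases

# Step 2 applies the RAND-form finding that 73.9% of multi-event respondents
# whose selected event was old had no past-year assault at all.
print(f"total removed: {(baseline - step2) / baseline:.1%}")   # ~17% of cases

# Implied inflation of the uncorrected estimate relative to the corrected one:
print(f"implied overestimate: {(baseline - step2) / step2:.1%}")   # ~20%
```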
It is possible that the question wording we used to confirm that the “one event” took place in the past year was confusing to respondents, meaning they erroneously indicated that the event occurred more than a year ago. However, this could not account for most of the 22 percent of all who were counted as having a past-year unwanted sexual contact, but who indicated their one event took place more than a year ago. This is because we used the same question wording on the parallel item in the RAND form, where just 6.8 percent of those initially classified as experiencing a past-year sexual assault confirmed the assault occurred more than a year ago. This indicates that the maximum respondent error rate on this item is unlikely to be substantially above 7 percent, which still leaves 15 percent of WGRA cases correctly confirming their one event occurred more than a year ago.
The Prior Form Identifies Fewer Penetrative Sexual Assaults Than the
RAND Form
Whereas all types of sexual assaults can be traumatizing, laws treat penetrative crimes
as most severe, so they represent an important measure of the severity of sex crimes
against service members. Comparison of the number of penetrative sexual assaults and
penetrative unwanted sexual contacts derived from the two forms reveals large differences, however.3
Estimates for 2014 using the prior form suggest there were approximately 4,200
service members (95% CI: 3,200–5,300) who experienced a penetrative unwanted
sexual contact in the past year (including those that were improperly included due to
the telescoping problem described above).4 In contrast, the RAND measure assesses
whether any of the sexual assaults experienced by the service member in the past year
could be counted as a penetrative assault. This estimate suggests the number of penetrative assaults is almost twice as large as was measured using the prior form (7,800
on the RAND form [95% CI: 6,500–9,400]). This effect is most pronounced among
men, with the prior form yielding estimates that are less than one-third the rate found
using the RAND measures (1,200 versus 3,700, with 95-percent confidence intervals
of 600–2,100 and 2,400–5,300, respectively).

3 We count as “penetrative” unwanted sexual contacts all those listed in the prior form as involving completed sexual intercourse, oral sex, anal sex, or penetration by an object or finger.

4 The estimates described in this paragraph differ from those from a similar analysis presented in the DoD top-line report (National Defense Research Institute, 2014a). In the DoD top-line report, point estimates were rounded to the nearest 500. Here all population estimates have been rounded to the nearest 100.


The WGRA was not designed to count the number of people who experienced
penetrative unwanted sexual contacts. The only description detailing types of assault
occurs for the “one event” selected by the respondent. If respondents experienced multiple unwanted sexual contacts, it is conceivable that they would select an event that
involved touching only as their “one event,” even though they also experienced a past-year penetrative contact, for instance. In that case, the WGRA would collect no information that revealed the respondent had also experienced a penetrative assault.
Evidence from the RAND form, however, suggests respondents almost always
select as their “worst event” a sexual assault that matches the severity of the most severe
sexual assault. On the RAND form, respondents are first asked questions that determined the most severe sexual assault type they experienced in the past year. Respondents were then asked to describe the “worst” such assault. When we compared the
“worst” event to the most severe, they nearly always matched:
•	 99.2 percent of those whose “worst event” type was penetrative had been classified
as having a penetrative assault as their most severe past-year assault
•	 97.1 percent of those whose “worst event” was a non-penetrative (contact only)
assault were classed as having a non-penetrative assault as their most severe past-year event
•	 74.6 percent of those with a “worst event” of attempted assault were classed as
having attempted assault as their most severe past-year sexual assault.
The overall concordance rate across the three categories is 97.0 percent.
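To make the concordance computation concrete, the following sketch derives per-category and overall rates from paired classifications of the kind listed above. All respondent-level data shown are invented for illustration; the study's actual microdata are not reproduced here.

```python
# Illustrative concordance computation between the most-severe assault type
# derived from the screening items and the type of the self-selected "worst"
# event. The pairs below are invented example data.
pairs = [  # (most_severe_from_screeners, worst_event_type)
    ("penetrative", "penetrative"),
    ("penetrative", "penetrative"),
    ("non-penetrative", "non-penetrative"),
    ("attempted", "attempted"),
    ("attempted", "non-penetrative"),
]

for category in ("penetrative", "non-penetrative", "attempted"):
    # Among respondents whose "worst event" fell in this category, the share
    # whose screener-derived most-severe type is the same category.
    subset = [severe for severe, worst in pairs if worst == category]
    if subset:
        rate = sum(severe == category for severe in subset) / len(subset)
        print(f"{category}: {rate:.1%} concordant (n={len(subset)})")

overall = sum(severe == worst for severe, worst in pairs) / len(pairs)
print(f"overall concordance: {overall:.1%}")
```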
This suggests that it is very unlikely that many people who experienced penetrative assaults in the past year chose some other less-serious crime as the one that had
the greatest effect on them when completing the prior form. Instead, it suggests the
RAND form identified nearly twice as many people who experienced a serious penetrative assault in the past year. 
There may be several reasons for the difference in estimates of penetrative sexual
assaults produced by the two forms. The RAND form asked three behaviorally specific
and detailed questions about penetrative sexual assault that align closely with the definitions used in the UCMJ; those three questions are asked of everyone in the survey. In
contrast, the prior form first filters out most respondents on the basis of a single complex gating question. Research on survey design, however, shows omnibus questions
about rape do not cue memories of relevant experiences as effectively as do a series of
behaviorally specific questions (Cook et al., 2011; National Research Council, 2014;
Koss, 1993). Therefore, one factor contributing to the different rates produced by the
two forms may be that the RAND form is more likely to cue memories of unwanted
sexual contacts.
A second potentially important difference is that the prior form’s screening question emphasized that the events under consideration were “sexual” events. The question makes reference to “sexual contacts,” “sexual intercourse,” “oral sex,” “anal sex,”
and “sexual touching.” However, many sexual assaults may not feel sexual. Indeed, the
UCMJ does not require that the assault be perceived as “sexual” by the victim or the
perpetrator. Penetration or contact with genitalia, mouth, or buttocks that is abusive,
harassing, or demeaning can qualify as sexual assaults in Article 120 of the UCMJ,
even if they are not done with a sexual intent. As such, some instances of bullying,
hazing, or harassment that victims do not experience as in any way “sexual,” but which
are sexual assaults under the law, may well be inadvertently omitted by the focus on
sexuality in the prior-form items. The RAND form, like the UCMJ code, describes
behavioral events without imputing to them an experience of “sexuality,” and therefore
is less likely to exclude sexual assaults that are not experienced as sexual contacts. This
could also explain why the difference in rates of penetrative sexual assaults between
the prior form and RAND forms is disproportionately larger for men than for women,
since, as noted in Volume 2, men are far more likely to describe the sexual assaults they
experienced as designed to humiliate or abuse, or as forms of hazing, rather than as
sexual encounters.
Unwanted Sexual Contacts on the Prior Form May Include Events That
Are Not UCMJ Crimes
A large percentage of respondents on the prior form indicated that their unwanted
sexual contact was not described by any of the options meant to classify sexual assaults.
For instance, 14.6 percent of those classified as having experienced an unwanted sexual
contact say the “one event” did not involve another person doing any of the behaviors defining unwanted sexual contact: sexually touching; attempting unsuccessfully
to have sexual intercourse; making the respondent have sexual intercourse; attempting
unsuccessfully to make the respondent perform or receive oral sex, anal sex, or penetration by a finger or object; or making the respondent perform or receive oral sex,
anal sex, or penetration by a finger or object.5 In other words, more than one out of
every seven respondents classified as experiencing an unwanted sexual contact selected
as their “one event” an incident that did not match any of the criteria defining an
unwanted sexual contact.
The 2014 prior-form estimates suggest there were approximately 18,000 service members who described “one event” involving an unwanted sexual contact that was not classed as penetrative (it was contact only, attempted, or unspecified). In contrast, the RAND form identified 12,400 members with non-penetrative crimes in the past year. Thus, it may be that the estimate generated using the RAND form excluded some 5,600 service members who would have been counted as having experienced an unwanted sexual contact on the prior form, but whose experience does not meet the legal threshold for a sexual assault. An alternative explanation—that many penetrative crimes are counted as non-penetrative or unclassified unwanted sexual contacts on the prior form—seems unlikely: if respondents frequently misclassified or failed to classify their sexual assaults, we would expect to see the same behavior on the RAND form. As discussed above, however, the concordance between the most serious past-year sexual assault and the classification offered by respondents on their worst past-year assault is 97.0 percent.

5 Although 19.8 percent of those counted as experiencing unwanted sexual contact in the past year cannot be counted as experiencing a penetrative, non-penetrative, or attempted sexual contact as their “one event,” 14.6 percent of all those counted indicated “did not do this” for every type of sexual contact listed for establishing the unwanted sexual contact type categorization. The remaining 5.2 percent skipped one or more items, and did not mark “yes” on any item they did not skip.
This finding that 5,600 of the estimated 22,100 service members classed as experiencing an unwanted sexual contact using the prior form, or 25 percent, appear not to
have experienced criminal events is consistent with one of the criticisms of the survey
discussed in Volume 1. That is, the wording of the unwanted sexual contact question
could be interpreted to include sexual encounters that were unwanted but not criminal
(Schenck, 2014). While identification of strictly criminal acts was, according to the
WGRA survey developers, never the goal of the unwanted sexual contact measure,
considerable confusion arose over how the unwanted sexual contact results should be
interpreted. By applying the definitional criteria contained in UCMJ Article 120 to
rule out noncriminal events, the RMWS survey results should offer greater clarity on
the significance and severity of the sexual assaults it identifies.
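The size of this possible overcount follows directly from the rounded published estimates; a minimal check:

```python
# Rounded published estimates (RMWS weights applied to both forms).
prior_non_penetrative = 18_000   # prior form: "one event" not classed as penetrative
rand_non_penetrative = 12_400    # RAND form: non-penetrative sexual assaults
prior_total = 22_100             # prior form: all unwanted sexual contacts

excess = prior_non_penetrative - rand_non_penetrative
print(excess)                                                 # 5,600 possibly noncriminal cases
print(f"{excess / prior_total:.0%} of all prior-form cases")  # ~25%
```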
Differences Between the WGRA and RAND Sexual Harassment
Definitions
Data collected with the prior form produced an estimated prevalence of past-year
sexual harassment among men and the total active-duty population that is lower than
the RAND form estimates (Table  8.1). (As with other analyses in this chapter, the
prior-form estimates reported here use the new RMWS weights so any differences
between the RAND form and the prior form are attributable to the surveys themselves, not differences in the weights applied to them.)
This difference likely reflects important dissimilarities in the way the two instruments define sexual harassment. The section that follows explores several ways in
which the two instruments differ in their categorization of sexual harassment, and the
changes in population estimates that occur when the RMWS classification criteria for
sexual harassment are altered to more closely match the classification criteria of the
prior form. Although these exercises provide a general sense of how each of the measurement differences may affect the overall estimate of sexual harassment, in each case,
other significant differences between the instruments remain and no perfect one-to-one correspondence is possible between the two.


Table 8.1
Estimated Percentage of Active-Component Service Members Who Experienced Sexual
Harassment in the Past Year, as Assessed with the Prior Form and RAND Form

Form      Total                 Men                   Women
Prior     6.23% (5.76–6.73)     3.64% (3.12–4.21)     20.94% (20.08–21.81)
RAND      8.85% (8.40–9.31)     6.61% (6.09–7.15)     21.57% (20.96–22.19)

NOTES: 95-percent confidence intervals for each estimate are indicated in parentheses.
RMWS weights were applied to both forms for comparability.

Sexual Contact Crimes Occurring in the Workplace

The modified version of the SEQ-DoD-Short (Stark et al., 2002) that is used in the
WGRA surveys and the prior form includes two items that assessed attempted or completed sexual assaults by coworkers. The RAND measure of sexual harassment included
a single item assessing unwanted sexual touching, which is potentially classifiable as a
sexual assault under Article 120. Although these workplace events may be classified as
criminal actions, they may also be sexual harassment if the perpetrator was someone
with whom the victim works. For example, a worker who was sexually assaulted by a
supervisor may have grounds to bring both a criminal case for the sexual assault and a
sexual harassment case for the hostile work environment. The scoring conventions for
the WGRA and RAND sexual harassment measures differ in their treatment of these
sexual contact items. The WGRA excluded them from the estimate of sexual harassment; for example, respondents who indicated that a coworker had sex with them
against their will are not counted as having experienced sexual harassment unless they
also experienced some other form of sexual harassment. The RAND sexual harassment
measure included unwanted sexual touching in the workplace as a possible instance of
sexual harassment. Even though the event may rise to the level of a sexual assault (i.e.,
a crime), respondents who indicated on the RAND form that someone they worked
with touched them sexually without their consent were categorized as having been
sexually harassed whether they had additional sexual harassment experiences or not.
To assess whether inclusion of types of sexual harassment incidents that may also
be sexual crimes may have inflated the RAND rate of sexual harassment relative to the
prior form, we recalculated our sexual harassment estimates excluding all respondents
whose only sexual harassment experience was unwanted sexual touching by
a coworker. This alignment in scoring strategy with the WGRA did not significantly
reduce the estimated rate of sexual harassment among women or men as measured by
the RAND form. For women, the rate of sexual harassment with sexual touching at
work excluded (21.48 percent, 95% CI: 20.87–22.10) was not significantly different
than the rate of sexual harassment when incidents of unwanted sexual touching at work were included (21.57 percent, 95% CI: 20.96–22.19). For men, the rate of sexual
harassment with sexual touching at work excluded (6.58 percent, 95% CI: 6.07–7.13) was not
significantly different than the rate of sexual harassment when incidents of unwanted
sexual touching at work were included (6.61 percent, 95% CI: 6.09–7.15). That is, this
measurement difference does not account for the difference in men’s sexual harassment
prevalence as measured by the prior form versus the RAND form.
This analysis was conducted for the sole purpose of determining whether alignment with WGRA scoring criteria for sexual harassment would change the RAND
estimates of sexual harassment. It is not an endorsement of excluding workplace sexual
assaults from the measure of sexual harassment. In agreement with the original developers of the SEQ (Stark et al., 2002) and legal and DoD directives, we encourage
continued scoring of the RMWS sexual harassment measure as including unwanted
workplace sexual contact as one form of sexual harassment.
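A minimal sketch of the rescoring exercise described above, assuming respondent-level flags like the two below had been derived from the survey items; the names are illustrative, not the study's actual variables.

```python
def harassed_wgra_style(met_any_harassment_criterion: bool,
                        only_sh10_touching: bool) -> bool:
    """Rescore the RAND sexual harassment flag to mimic the WGRA convention:
    a case whose only qualifying experience was unwanted sexual touching by
    a coworker (item SH10) is not counted as sexual harassment."""
    return met_any_harassment_criterion and not only_sh10_touching

# A respondent whose sole qualifying experience was SH10 touching counts
# under the RAND scoring but not under the WGRA-style rescoring.
print(harassed_wgra_style(True, only_sh10_touching=True))    # False
print(harassed_wgra_style(True, only_sh10_touching=False))   # True
```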
Respondent Classifies Events as Sexual Harassment

The RAND measure of sexual harassment does not require respondents to correctly
label their workplace experiences as “sexual harassment” in order to be categorized as
having experienced sexual harassment in the past year. Because most people are not
familiar with the details of equal employment opportunity law and MEO regulations,
many who experience sexual harassment do not recognize it as such, and are unable to
correctly label the events (Fitzgerald, Swan, and Fischer, 1995). Instead, the RAND
form walks respondents through a series of questions assessing the criteria to establish
that an MEO violation had occurred. In contrast, the prior form requires respondents
to indicate that they had an inappropriate workplace experience in the past year and that
they consider it to have been “sexual harassment.” All respondents who do not consider
the events to have been sexual harassment are excluded from the prior form estimates of
sexual harassment in the services, using the scoring criteria of previous WGRA surveys.
This difference in measurement could have dramatic effects on the estimated rate
of sexual harassment. To explore this possibility, we assessed the degree to which the
estimated rate of sexual harassment in the past year, as measured by the RAND form,
would decline if we were to also require, as the WGRA surveys have, that the respondent label their experiences sexual harassment. We are able to assess this on the RAND
form because, although we did not require respondents to label an event as harassment
to count it as such, we did ask them if they considered it to be “sexual harassment.”
If self-labeling were required, the estimated rate of sexual harassment in the past year
among female service members would drop from 21.57 percent (95% CI: 20.96–22.19)
to 15.16 percent (95% CI: 14.62–15.71). That is, requiring women to label their experiences sexual harassment drops the RAND form’s estimated annual prevalence of
sexual harassment below the estimate from the prior form (20.23 percent; 95% CI:
19.45–21.03). For men, requiring potential victims to label their experiences “sexual
harassment” drops the RAND form estimate from 6.61 percent to 3.31 percent (95% CI: 2.92–3.73), which more closely aligns with the prior-form estimate (3.50 percent;
95% CI: 3.07–3.97). Again, these modified estimates are provided only to allow an
assessment of comparability between the two forms and are not intended to indicate
support for the survey requirement that respondents have thorough knowledge of
sexual harassment law and policy.
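Expressed as relative reductions, the published point estimates imply the following; these are the figures behind the roughly 30-percent and 50-percent drops cited in this chapter's conclusions.

```python
# Effect of the self-labeling requirement on the RAND-form estimates,
# computed from the published point estimates (in percent).
estimates = {
    "women": (21.57, 15.16),   # without vs. with the labeling requirement
    "men": (6.61, 3.31),
}
for group, (unlabeled, labeled) in estimates.items():
    drop = (unlabeled - labeled) / unlabeled
    print(f"{group}: {drop:.0%} reduction")   # women ~30%, men ~50%
```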
Persistence, Severity, or Direct Evidence of Quid Pro Quo

Unlike the modified version of the SEQ-DoD-Short used in the prior form, the RAND
sexual harassment measure included multiple criteria to assess whether the inappropriate workplace behavior the respondent experienced rose to the level of an MEO violation, as specified in DoD directives. Possible indicators of a hostile workplace environment (e.g., repeated sexual jokes that the respondent found offensive) were followed by
additional questions to ascertain whether the offender(s) ever knew that someone in
the workplace was offended and, if so, whether their behavior persisted, or failing that,
whether the behavior was so severe that a reasonable person would find it offensive.
Only after meeting one of these follow-up conditions was an inappropriate workplace
behavior categorized as sexual harassment.
The SEQ-DoD-Short takes a different approach to classification. Rather than
seeking to classify events that likely would rise to the standards set in legal precedent
and DoD directives, the instrument sought to classify “psychological” sexual harassment (Fitzgerald, Swan, and Magley, 1997). In short, it assessed whether the respondent
experienced workplace events that he or she found offensive (a construct we referred to
as “inappropriate workplace behavior”), but did not follow with an assessment of persistence, severity, or of direct evidence in the case of quid pro quo exchanges.
To assess whether the added classification criteria used in the RAND form may
have suppressed the estimate of sexual harassment relative to the prior form, we calculated the percentage of male and female service members who experienced any of
our initial screening questions assessing inappropriate workplace behavior that could
indicate sexual harassment, without requiring them also to specify that the events were
persistent or severe (in the case of hostile workplace events) or that they had direct
evidence of an exchange (in the case of quid pro quo events). This alignment in scoring strategy increased our past-year estimates among women (26.12 percent; 95% CI:
25.48–26.77) and men (11.98 percent; 95% CI: 11.32–12.67), and increased the discrepancy with the WGRA estimates for women (20.94 percent; 95% CI: 20.08–21.81)
and men (3.64 percent; 95% CI: 3.12–4.21). One potential explanation is that, while
the prior form did not require the events to meet DoD criteria for sexual harassment,
it did require the respondent to label the event “sexual harassment,” which would also
have the effect of reducing the estimates overall.
This analysis was conducted for the sole purpose of determining whether alignment with WGRA scoring criteria for sexual harassment would change the RAND
estimates of sexual harassment. We believe it will be important to continue to report both experiences of inappropriate workplace behaviors (i.e., negative workplace experiences that do not necessarily rise to the level of an MEO violation, but nonetheless
represent an unprofessional work environment), as well as sexual harassment as defined
by DoD directives.
With All Possible Alignment

To provide the closest match to WGRA criteria for sexual harassment, we calculated
the rate of sexual harassment estimated using the RAND measure with the following
changes: (1) excluding possible sexual contact crimes from the estimate; (2) requiring
the respondent to label his or her experiences “sexual harassment”; and (3) dropping
the requirement that inappropriate workplace behaviors be either persistent, severe, or
that there be direct evidence of a quid pro quo exchange. With these changes to the
RAND classification requirements, the estimated percentage of military women who
would be classified as sexually harassed shifts from 21.57  percent (95% CI: 20.96–
22.19) to 14.91 percent (95% CI: 14.38–15.46), and the estimated percentage of military men who would be classified as sexually harassed shifts from 6.61 percent (95%
CI: 6.09–7.15) to 3.25 percent (95% CI: 2.86–3.67). In relation to the prior form, it
reduces alignment with the prior form estimate of sexual harassment among service
women (20.23  percent; 95% CI: 19.45–21.03) and creates alignment with estimate
among service men (3.50 percent; 95% CI: 3.07–3.97).
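A sketch of the combined rescoring, again with illustrative respondent-level flags rather than the study's actual variables; toggling the flags individually reproduces the single-change analyses in the preceding subsections.

```python
def harassed_fully_aligned(screened_positive: bool,
                           only_workplace_sexual_contact: bool,
                           labeled_as_harassment: bool) -> bool:
    """Apply all three WGRA-style changes at once: (1) exclude cases whose
    only experience was a possible workplace sexual contact crime,
    (2) require the respondent to label the experience "sexual harassment,"
    and (3) drop the persistence/severity/quid pro quo follow-up tests
    (screened_positive reflects the screening items alone)."""
    return (screened_positive
            and not only_workplace_sexual_contact
            and labeled_as_harassment)

# Example: a screener-positive case that the respondent did not label as
# sexual harassment is excluded under the fully aligned scoring.
print(harassed_fully_aligned(True, False, labeled_as_harassment=False))  # False
```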
Conclusions
In this chapter, we presented analyses comparing responses on the newly designed
RAND form survey to responses on the prior-form survey, which administered the
questions used in the 2012 WGRA to assess sexual harassment and unwanted sexual
contact. These comparisons suggest that the prevalence of unwanted sexual contact
in the past year generated by the prior form is likely overestimated by 20  percent
because of the inclusion of service members whose most-recent unwanted sexual contact occurred more than a year earlier. Similarly, we demonstrate that the prior form
identified only about one-half as many service members who experienced penetrative
sexual assaults as the RAND form (4,200 versus 7,800). This effect is larger for male
service members, with the RAND form identifying three times as many experiencing
penetrative assaults (1,200 versus 3,700). This effect may be partially attributable to the
RAND form identifying more sexual assaults that occur in the context of hazing or
that are not perceived as sexual by the service member relative to the prior form.
On the other hand, the prevalence of unwanted sexual contacts that are not penetrative (assessed on the prior form) was substantially higher than the prevalence of
non-penetrative sexual assault (assessed on the RAND form). Indeed, the prior form
counted 5,600 more individuals in this category than the RAND form. If the experiences of these individuals do not, in fact, meet the criteria for a UCMJ sexual assault, as suggested by the fact that the RAND form did not identify a similar proportion of non-penetrative assault victims, this would suggest that 25 percent of all unwanted
sexual contacts counted using the prior form were not, in fact, crimes. Many may not
have met even the criteria for an unwanted sexual contact, as we found that 18 percent
of all those with a “one event” on the prior form that was not penetrative positively
affirmed that their unwanted sexual contact experience met none of the behavioral
descriptions defining unwanted sexual contact.
The fact that the over- and undercounts described here for the prior form approximately cancel each other should not be taken as evidence that the WGRA questionnaire offers a satisfactory measure of sexual offenses for the purposes of tracking the
effectiveness of DoD policies or for estimating the total number of offenses occurring
against service men and women. Measures that do not accurately and precisely count
those people or events that are the target of training, prevention, or other policies or
programs are unlikely to be sensitive to changes brought about by these programs. For
example, the implementation of policies that effectively reduce sexual assaults may
not result in a detectable corresponding change in this measure of unwanted sexual
contact.
A comparison of responses across forms also revealed differences in how the two
sexual harassment measures identified instances of harassment. One substantial difference between measures was that the prior form measure only counted as cases of sexual
harassment those participants who labeled their inappropriate work experiences as
“sexual harassment.” We found this “labeling” requirement significantly reduced prevalence estimates for sexual harassment. Indeed, if the RAND form required respondents
to label instances as sexual harassment, our overall prevalence rate for past-year sexual
harassment would have fallen by 30  percent. The effect is more pronounced among
men, where rates would have fallen by 50 percent. This largely explains the different
prevalence estimates produced by the two methods. When we adjust the RAND form’s
past-year sexual harassment prevalence rates to match the criteria used in the prior form
(implementing the “labeling” requirement, excluding sexual touching and other adjustments), we found that the RAND form identified fewer cases of sexual harassment
against women and comparable numbers for men compared with the prior form.
The pattern of differences across survey forms suggests that the new form,
designed by RAND to address several concerns about the WGRA instrument, offers
improved validity and interpretability relative to the prior form. We recommend that
DMDC use the questions and scoring rules developed for the RAND form in future
WGRA surveys.

CHAPTER NINE

Analysis of Survey Nonconsent and Breakoff
Terry L. Schell

Documenting the factors that lead to survey nonresponse can be informative about
the possibility of nonresponse bias. One way to investigate these factors is to look separately at different types of nonresponse: sampled individuals who never navigate to the
survey website; individuals who go to the website but do not consent to participate in
the study; and individuals who begin to participate but quit, or break off, at some point
before the end of the survey.
In the current study, the latter two categories are particularly interesting because
detailed information about the topic of the survey was not presented to service members until after they navigated to the survey web portal. The recruitment materials did
not emphasize that the survey was about sexual assault, sexual harassment, or gender
discrimination. This was a deliberate decision designed to reduce the extent to which
decisions about survey participation were based on the individual’s personal experience
with these outcomes—which could create a type of nonresponse bias that cannot be
removed by weighting or other common methods.
Once a sampled service member navigated to the website, however, the informed
consent notice explained that “The survey asks about whether or not you have experienced harassment, discrimination, or inappropriate sexual behavior.” Decisions about
whether to participate made at that point may be directly influenced by the respondent’s personal experiences with those topics and may be the source of non-ignorable
nonresponse bias. Similarly, some respondents started the survey but stopped responding at some point. That type of breakoff may be a reaction to the particular content
of the survey and may result in respondents and nonrespondents being meaningfully
different in their underlying experiences with sexual assault, sexual harassment, and
gender discrimination.
In order to better understand how survey nonconsent and breakoff may have contributed to nonresponse bias, this chapter documents the number of individuals who
stopped participating at each point in the process, and provides information about the
characteristics of the individuals who stopped at that point (“breakoffs”).


Survey Nonconsent Rate
For the purpose of this analysis, all respondents whose unique study identification
number was logged on the survey landing page, but who failed to answer any survey questions, were considered survey nonconsent. This included some people who accidentally
clicked on the link in the email and did not read the consent form before closing their
browser, as well as some people who clicked the “continue” button on the informed
consent page (indicating an agreement to do the survey), but then chose not to answer
any questions.
A total of 156,130 individuals who had been assigned to the RAND instrument
(long, medium, or short form; active or reserve component; DoD service or Coast
Guard) hit the survey starting page, and 94.1 percent of those answered at least one
survey question (146,986). Thus, 9,144 individuals met our definition of survey nonconsent. This rate of nonconsent represents 2.0 percent of the total sample frame (i.e.,
a 2-percent reduction in the overall response rate), and 5.9 percent of the individuals
who hit the survey start page.
Rates of nonconsent were similar on the prior form, which used an abbreviated
set of questions from past WGRA studies and was administered to a random sample of
active-component DoD service members. Prior-form respondents saw a consent form
that was very similar to that used with the RAND form. The primary difference was
that prior-form respondents were told that the survey took 12 minutes to complete,
while almost all RAND form respondents (87 percent, those randomized to the short and medium forms) were told it would take eight minutes, and the long-form participants (13 percent) were told it would take 20 minutes. A total of 2,539 service
members assigned to the prior form met our definition of nonconsent. This rate of
nonconsent represents 2.5 percent of the total sample frame (i.e., a 2.5-percent reduction in the overall response rate), and 7.4 percent of the individuals who hit the survey
start page.
Survey Breakoff Rates
Survey breakoff was defined by identifying the last question for which the respondent
selected a valid response option. For most respondents, this was the final question on
their version of the survey. A respondent may have clicked the “next item” button to
investigate the subsequent questions, but would be counted as “breakoff” after the
point where they last entered a response. For items that included explicit “don’t know”
response options, those options were considered valid responses.
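In code, this definition amounts to scanning a respondent's answers in instrument order and keeping the last item with a valid response. A minimal sketch, with a hypothetical item ordering and response record:

```python
# Hypothetical item order; the real instrument order is given in Volume 1.
ITEM_ORDER = ["Intro1", "P1", "P2", "P3", "P4", "P5", "SH1", "DEMO3"]

def last_complete_item(responses):
    """Return the final item, in instrument order, with a valid response.
    Explicit "don't know" answers count as valid; skipped items are None."""
    last = None
    for item in ITEM_ORDER:
        if responses.get(item) is not None:
            last = item
    return last

# A respondent who stopped answering after SH1 is counted as breaking off
# there, even if later pages were viewed without entering a response.
print(last_complete_item({"Intro1": 2, "P1": 1, "SH1": "don't know"}))  # SH1
```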


RAND Instrument

Due to the complex skip pattern and multiple instrument forms, the number of individuals who were presented a given question varied substantially across questions on
the RAND instrument. However, when a question was presented, it was always placed
in the same ordinal position within the instrument. Thus, we can compute the cumulative rate of breakoff as respondents moved through the survey. Table 9.1 presents the number of individuals for whom a particular item was their final complete item. It combines respondents across the short, medium, and long forms. For example, 559 respondents’ final complete item was “Intro1,” indicating that some respondents broke off immediately after completing the first item (“Are you male or female?”). DEMO3
was the final complete item for the majority of all respondents (80 percent) because it
was the final item presented to participants in the short and medium forms.
As expected, the 35 base questions that were included in all forms of the instrument and were not embedded within any item skip pattern occurred as respondents’
final response more frequently than items shown to fewer participants. These base
questions are listed in boldface in Table 9.1. On average, 0.17 percent broke off after
each base question. Within these base questions, there is a clear pattern with the six
items that were followed by survey instructions or a new topic of questions showing
higher rates of breakoff; on average, each such question was the final response for 0.41 percent of respondents. The six sexual assault screening questions were base questions and averaged 0.24 percent breakoff, which is close to the average for other base items.
Because of the complex skip pattern and separate forms, Table 9.1 does not always
identify which item was the first item that respondents failed to answer. We have
included Table 9.2, which presents the same data on the subset of respondents who were
assigned to the short form, which has fewer questions and a simplified skip pattern.
Across all three forms, the total rate of breakoff in the core portion of the RAND
instrument (the portion administered to all participants) was 6.4 percent. That is,
93.6 percent of those who answered at least one survey question also answered the last
question administered to all participants (Demo3). We can also compute the breakoff
within the long form questions for those respondents who were randomized to the
long form instrument and answered at least one question on the survey. Of the 22,164
individuals in this group, 1,261 (5.7 percent) gave their final survey response after the
first item of the long form but before the last item administered to all long-form participants (Longform29). The breakoff rate across the entire long form (combining the
core and the long-form–specific items) was 12.8 percent.
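The long-form breakoff rate follows from these counts:

```python
# Breakoff within the long-form-specific items, from the counts in the text.
long_form_starters = 22_164   # randomized to the long form, answered >= 1 question
broke_off = 1_261             # final response after the first long-form item,
                              # but before Longform29

print(f"{broke_off / long_form_starters:.1%}")   # ~5.7%
```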
To better understand where in the core instrument the 6.4 percent broke off, as
well as how breakoff affected unit nonresponse, we computed breakoff rates within
each survey module in Table 9.3, rather than for each question.
All breakoff prior to the sexual assault classification module was counted as unit
nonresponse and handled through the nonresponse weights. This type of breakoff represented 3.9 percent of the sample who started the survey and contributed to a reduction in the overall response rate of 1.2 percent.
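Taken together, these two rounded percentages imply roughly what share of the full sample frame started the survey; this is an inference from the figures above, not a number reported in the text.

```python
# Breakoff before the sexual assault module was 3.9% of survey starters and
# reduced the overall response rate by 1.2 percentage points; the ratio
# implies the approximate share of the sample frame that started the survey.
breakoff_among_starters = 0.039
response_rate_reduction = 0.012

implied_start_rate = response_rate_reduction / breakoff_among_starters
print(f"implied share of frame that started: {implied_start_rate:.0%}")  # ~31%
```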


Table 9.1
Final Participant Response by Survey Item, All RAND Form Types

Item            Count     Percentage    Cumulative Percentage
Intro1            559        0.38%          0.38%
P1                247        0.17%          0.55%
P2                 91        0.06%          0.61%
P3                120        0.08%          0.69%
P4                140        0.10%          0.79%
P5              1,054        0.72%          1.50%
SH1               196        0.13%          1.64%
SH2               159        0.11%          1.75%
SH3               131        0.09%          1.83%
SH4               105        0.07%          1.91%
SH5               111        0.08%          1.98%
SH6               106        0.07%          2.05%
SH7                95        0.06%          2.12%
SH8               129        0.09%          2.21%
SH8a                1        0.00%          2.21%
SH9               111        0.08%          2.28%
SH9a                4        0.00%          2.29%
SH10              109        0.07%          2.36%
SH11              137        0.09%          2.45%
SH12              146        0.10%          2.55%
SH13              131        0.09%          2.64%
SH14              101        0.07%          2.71%
SH15              873        0.59%          3.30%
SH1b                6        0.00%          3.31%
SH1c                6        0.00%          3.31%
SH1d                8        0.01%          3.32%
SH2b               10        0.01%          3.32%
SH2c                2        0.00%          3.33%
SH2d                8        0.01%          3.33%
SH3b                6        0.00%          3.34%
SH3c                1        0.00%          3.34%
SH3d               10        0.01%          3.34%
SH4c                5        0.00%          3.35%
SH4d                5        0.00%          3.35%
SH5b                1        0.00%          3.35%
SH5c                6        0.00%          3.35%
SH5d               12        0.01%          3.36%
SH6b                5        0.00%          3.37%
SH6c                5        0.00%          3.37%
SH6d                8        0.01%          3.37%
SH7b                7        0.00%          3.38%
SH7c                8        0.01%          3.38%
SH7d                7        0.00%          3.39%
SH8d                1        0.00%          3.39%
SH9b                3        0.00%          3.39%
SH9c                8        0.01%          3.40%
SH9d                7        0.00%          3.40%
SH11b               2        0.00%          3.40%
SH11c               1        0.00%          3.40%
SH11d               6        0.00%          3.41%
SH12e               2        0.00%          3.41%
SH13e               1        0.00%          3.41%
SH14a              13        0.01%          3.42%
SH15a              83        0.06%          3.48%
SHFU1             105        0.07%          3.55%
SHFU2              19        0.01%          3.56%
SHFU2_1             2        0.00%          3.56%
SHFU3              23        0.02%          3.58%
SHFU4              12        0.01%          3.59%
SHFU4c             12        0.01%          3.59%
SHFU5              14        0.01%          3.60%
SHFU5a              1        0.00%          3.60%
SHFU5d              8        0.01%          3.61%
SHFU6              32        0.02%          3.63%
SHFU7a              3        0.00%          3.63%
SHFU7b              3        0.00%          3.64%
SHFU7e             66        0.04%          3.68%
SHFU8a              2        0.00%          3.68%
SHFU8b              2        0.00%          3.68%
SHFU8d              2        0.00%          3.68%
SHFU8e              3        0.00%          3.69%
SHFU8f              2        0.00%          3.69%
SHFU8g              1        0.00%          3.69%
SHFU8i             26        0.02%          3.71%
SHFU9a              2        0.00%          3.71%
SHFU9c              2        0.00%          3.71%
SHFU9d             51        0.03%          3.74%
SHFU10a             5        0.00%          3.75%
SHFU10c             1        0.00%          3.75%
SHFU10e             1        0.00%          3.75%
SHFU10g            47        0.03%          3.78%
SHFU10h             2        0.00%          3.78%
SHFU10i             1        0.00%          3.78%
SHFU10n             9        0.01%          3.79%
SHFU11f            44        0.03%          3.82%
SHFU12a             4        0.00%          3.82%
SHFU12c             7        0.00%          3.83%
SHFU12f             1        0.00%          3.83%
SHFU12g            14        0.01%          3.84%
SHFU12h             2        0.00%          3.84%
SHFU12i             1        0.00%          3.84%
SHFU12j             3        0.00%          3.84%
SHFU12k             2        0.00%          3.84%
SHFU12l             5        0.00%          3.84%
SHFU12m             1        0.00%          3.85%
SHFU12n             3        0.00%          3.85%
SHFU12p             6        0.00%          3.85%
SHFU12r             1        0.00%          3.85%
SHFU12s             4        0.00%          3.85%
SHFU12t             3        0.00%          3.86%
SHFU12u             3        0.00%          3.86%
SHFU12v             3        0.00%          3.86%
SHFU12w             6        0.00%          3.86%
SHFU12x            14        0.01%          3.87%
SA1               401        0.27%          4.15%
OB1a                2        0.00%          4.15%
OB1c                1        0.00%          4.15%
OB1h                6        0.00%          4.15%
SA2               283        0.19%          4.35%
OB2g                1        0.00%          4.35%
SA3               223        0.15%          4.50%
SA4               203        0.14%          4.64%
PF4a                5        0.00%          4.64%
PF4b                1        0.00%          4.64%
OB4a                2        0.00%          4.64%
OB4b                2        0.00%          4.64%
OB4c                1        0.00%          4.64%
OB4d                3        0.00%          4.65%
OB4f                2        0.00%          4.65%
OB4g                1        0.00%          4.65%
OB4h                2        0.00%          4.65%
OB4j                1        0.00%          4.65%
OB4k                1        0.00%          4.65%
SA5               179        0.12%          4.77%
PF5a                1        0.00%          4.77%
OB5d                2        0.00%          4.77%
SA6               777        0.53%          5.30%
OB6d                1        0.00%          5.30%
OB6h                1        0.00%          5.30%
OB6j                1        0.00%          5.31%
SAFU1               4        0.00%          5.31%
SAFU2              11        0.01%          5.32%
SAFU3f              1        0.00%          5.32%
SAFU5               3        0.00%          5.32%
SAFU6               3        0.00%          5.32%
SAFU7               4        0.00%          5.32%
SAFU8d              1        0.00%          5.32%
SAFU8l              2        0.00%          5.32%
SAFU8g              1        0.00%          5.33%
SAFU8h              1        0.00%          5.33%
SAFU8j              2        0.00%          5.33%
SAFU9b              1        0.00%          5.33%
SAFU9d              4        0.00%          5.33%
SAFU9e              5        0.00%          5.33%
SAFU10h             1        0.00%          5.34%
SAFU10i             9        0.01%          5.34%
SAFU11a             1        0.00%          5.34%
SAFU11f             1        0.00%          5.34%
SAFU11k             1        0.00%          5.34%
SAFU12              4        0.00%          5.35%
SAFU13a             1        0.00%          5.35%
SAFU13b             1        0.00%          5.35%
SAFU13d             5        0.00%          5.35%
SAFU14              3        0.00%          5.35%
SAFU15              1        0.00%          5.35%
SAFU16              3        0.00%          5.36%
SAFU17              7        0.00%          5.36%
SAFU18e             4        0.00%          5.36%
SAFU19              3        0.00%          5.37%
SAFU20n             1        0.00%          5.37%
SAFU21              2        0.00%          5.37%
SAFU22l             1        0.00%          5.37%
SAFU23              3        0.00%          5.37%
SAFU24              2        0.00%          5.37%
SAFU28a             1        0.00%          5.37%
SAFU30d             1        0.00%          5.37%
SAFU30g             3        0.00%          5.37%
SAFU30i             1        0.00%          5.38%
SAFU30x             1        0.00%          5.38%
SAFU32              7        0.00%          5.38%
SAFU33a             1        0.00%          5.38%
SAFU33b             1        0.00%          5.38%
SAFU33d             3        0.00%          5.38%
SAFU33b_1           1        0.00%          5.38%
SAFU34              2        0.00%          5.39%
SAFU36             28        0.02%          5.41%
SAFU37a             1        0.00%          5.41%
SAFU37e             3        0.00%          5.41%
SAFU38a            20        0.01%          5.42%
SAFU38b             8        0.01%          5.43%
SAFU38c             6        0.00%          5.43%
SAFU38d             5        0.00%          5.43%
SAFU38e           318        0.22%          5.65%
SAFU39              7        0.00%          5.66%
SAFU40             12        0.01%          5.66%
DEMO1             206        0.14%          5.80%
RGSF1              28        0.02%          5.82%
DEMO2             917        0.62%          6.45%
DEMO3         116,907       79.54%         85.98%
Longform1          25        0.02%         86.00%
Longform2          12        0.01%         86.01%
Longform3          10        0.01%         86.01%
Longform4          20        0.01%         86.03%
Longform5          33        0.02%         86.05%
Longform6          18        0.01%         86.06%
Longform7          15        0.01%         86.07%
Longform8          20        0.01%         86.09%
Longform9          45        0.03%         86.12%
Longform10b         1        0.00%         86.12%
Longform10e        17        0.01%         86.13%
USCG3c              1        0.00%         86.13%
USCG3d              2        0.00%         86.13%
Longform11         79        0.05%         86.19%
Longform11_1        4        0.00%         86.19%
Longform12a         2        0.00%         86.19%
Longform12e         8        0.01%         86.20%
Longform13         73        0.05%         86.24%
Longform14d         1        0.00%         86.25%
Longform14e       186        0.13%         86.37%
Longform15a        10        0.01%         86.38%
Longform15b         1        0.00%         86.38%
Longform15e         2        0.00%         86.38%
Longform15f         1        0.00%         86.38%
Longform15g         2        0.00%         86.38%
Longform15i        37        0.03%         86.41%
Longform16a         1        0.00%         86.41%
Longform16b         1        0.00%         86.41%
Longform16e        19        0.01%         86.42%
Longform17          6        0.00%         86.43%
Longform18         27        0.02%         86.44%
Longform19         62        0.04%         86.49%
Longform19e        27        0.02%         86.51%
Longform20a         1        0.00%         86.51%
Longform20b         1        0.00%         86.51%
Longform20d         2        0.00%         86.51%
Longform20h        24        0.02%         86.52%
USCG2               1        0.00%         86.53%
Longform21b         6        0.00%         86.53%
Longform22        147        0.10%         86.63%
Longform23a         2        0.00%         86.63%
Longform23b         3        0.00%         86.63%
Longform23c         1        0.00%         86.63%
Longform23d         1        0.00%         86.63%
Longform23i         1        0.00%         86.63%
Longform23j        22        0.01%         86.65%
USCG1e             10        0.01%         86.66%
USCG4a              1        0.00%         86.66%
USCG4h              2        0.00%         86.66%
USCG5e              1        0.00%         86.66%
Longform24         72        0.05%         86.71%
Longform25d        22        0.01%         86.72%
Longform26         24        0.02%         86.74%
Longform27         19        0.01%         86.75%
Longform28        130        0.09%         86.84%
Longform29     11,729        7.98%         94.82%
USMC2               2        0.00%         94.82%
USMC3             125        0.09%         94.91%
USMC4a             95        0.06%         94.97%
USMC4b             81        0.06%         95.03%
USMC4c            109        0.07%         95.10%
USMC4d             88        0.06%         95.16%
USMC4e            151        0.10%         95.26%
USMC4f            727        0.49%         95.76%
USAF1           6,235        4.24%        100.00%

NOTE: Percentages are given among the proportion of the sample that answered at least
one question (N = 146,986), including the reserve components and Coast Guard. Item
labels in bold were administered to all respondents in all forms. Some survey items are
not presented in this table because they were not the final survey item for any
participant. Questions are listed in the order they appear in the instrument. The
instrument is included in Volume 1 of this report series.

Table 9.2
Final Participant Response by Survey Item, RAND Short Form

Item          Count    Percentage    Cumulative Percentage
Intro1          240         0.38%             0.38%
P1              105         0.17%             0.55%
P2               40         0.06%             0.62%
P3               42         0.07%             0.68%
P4               66         0.11%             0.79%
P5              432         0.69%             1.48%
SH1              93         0.15%             1.63%
SH2              66         0.11%             1.74%
SH3              55         0.09%             1.82%
SH4              44         0.07%             1.89%
SH5              36         0.06%             1.95%
SH6              46         0.07%             2.03%
SH7              43         0.07%             2.09%
SH8              48         0.08%             2.17%
SH8a              1         0.00%             2.17%
SH9              43         0.07%             2.24%
SH9a              1         0.00%             2.24%
SH10             41         0.07%             2.31%
SH11             61         0.10%             2.41%
SH12             67         0.11%             2.51%
SH13             55         0.09%             2.60%
SH14             35         0.06%             2.66%
SH15            390         0.62%             3.28%
SA1             174         0.28%             3.56%
OB1h              1         0.00%             3.56%
SA2             105         0.17%             3.73%
OB2g              1         0.00%             3.73%
SA3             102         0.16%             3.90%
SA4              87         0.14%             4.04%
PF4a              4         0.01%             4.04%
OB4c              1         0.00%             4.04%
OB4d              1         0.00%             4.05%
OB4f              1         0.00%             4.05%
OB4g              1         0.00%             4.05%
SA5              89         0.14%             4.19%
SA6             330         0.53%             4.72%
OB6d              1         0.00%             4.72%
SAFU1             1         0.00%             4.72%
SAFU2             1         0.00%             4.72%
SAFU5             1         0.00%             4.73%
SAFU6             2         0.00%             4.73%
SAFU8l            1         0.00%             4.73%
SAFU8h            1         0.00%             4.73%
SAFU8j            2         0.00%             4.74%
SAFU9d            1         0.00%             4.74%
SAFU9e            2         0.00%             4.74%
SAFU10h           1         0.00%             4.74%
SAFU10i           2         0.00%             4.75%
SAFU12            2         0.00%             4.75%
SAFU13d           2         0.00%             4.75%
SAFU14            1         0.00%             4.75%
SAFU16            2         0.00%             4.76%
SAFU17            3         0.00%             4.76%
SAFU18e           2         0.00%             4.76%
SAFU19            2         0.00%             4.77%
SAFU23            1         0.00%             4.77%
SAFU24            1         0.00%             4.77%
SAFU30g           1         0.00%             4.77%
SAFU32            3         0.00%             4.78%
SAFU33a           1         0.00%             4.78%
SAFU33d           1         0.00%             4.78%
SAFU33b_1         1         0.00%             4.78%
SAFU34            2         0.00%             4.79%
SAFU36           16         0.03%             4.81%
SAFU37e           1         0.00%             4.81%
SAFU38a           7         0.01%             4.82%
SAFU38b           3         0.00%             4.83%
SAFU38d           1         0.00%             4.83%
SAFU38e         126         0.20%             5.03%
SAFU39            2         0.00%             5.04%
SAFU40            2         0.00%             5.04%
DEMO1            85         0.14%             5.17%
RGSF1            14         0.02%             5.20%
DEMO2           442         0.71%             5.91%
DEMO3        58,750        94.09%           100.00%

NOTE: Percentages are given among the proportion of the sample that answered at least one question (N = 62,437), including the reserve components and Coast Guard. Item labels in bold were administered to all respondents. Some survey items are not presented in this table because they were not the final survey item for any participant. The instrument is included in Volume 1 of this report series.


Table 9.3
Survey Breakoff by Module, RAND Combined and RAND Short Form

Module                             Final Response    But Final Response    Combined    Short Only
                                   at or After:      Before:
Time-setting module                INTRO1            P5                       0.79%         0.79%
MEO screeners                      P5                SH15                     1.92%         1.87%
MEO follow-up                      SH15              SHFU12x                  1.16%         NA
Sexual assault classification (a)  SHFU12x           SA6                      0.91%         1.53%
Sexual assault follow-up           SA6               SAFU37e                  0.63%         0.62%
Lifetime sexual assault            SAFU37e           SAFU38e                  0.25%         0.22%
Demographic                        SAFU38e           DEMO3                    0.79%         0.87%
Total                              INTRO1            DEMO3                    6.45%         5.91%

NOTE: Percentages are given among the proportion of the entire sample that answered at least one survey question, including the reserve components and Coast Guard. The Combined column aggregates across the short, medium, and long forms. Breakoff for the MEO follow-up module is not applicable (NA) for the short form, because that module does not appear in the short form.
(a) The short form goes directly from SH15 to SA1, so breakoff in the sexual assault classification module is defined for the short form as having a final response on or after SH15 but before SA6.

Table 9.3 also presents a breakoff analysis within the short-form respondents to better estimate the number of individuals who broke off at the beginning of the sexual assault classification module. In the medium and long forms, many different items may immediately precede the instructions for the sexual assault module. In the short form, all respondents were presented with SH15 immediately before the instructions for the sexual assault module. For the short form, breakoff within the sexual assault classification module is therefore defined by those respondents with a final survey response occurring on or after SH15 but before SA6. The rate of breakoff associated with the sexual assault classification module was 1.5 percent of the short-form sample who started the survey. This rate of breakoff is generally similar to those of the other modules in the survey. The sexual assault classification module contained between six and 70 questions, depending on skips, but had a slightly lower rate of breakoff than the MEO screening module (15–17 questions). On the other hand, the breakoff rate in the sexual assault classification module was slightly higher than the rates in the two shorter modules that contained items presented to everyone: the five time-setting questions and the three to four demographic questions. As seen in Table 9.2, however, SH15—the last item in the short form's MEO screener module—was the single item most likely to be the final response for short-form respondents. There were two screens of survey instructions after SH15 and before respondents reached the next question (SA1). An inspection of web server logs indicated that a large proportion of those who dropped out after completing SH15 but before completing SA1 did so on those instruction screens. Such breakoff may be a response to the subject matter, which is revealed in the instructions, rather than to the content of SA1 itself.
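The window logic just described is straightforward to compute from a file recording each starter's final completed item. The following is a minimal sketch in Python, under the assumption that such a file exists; the abbreviated item ordering, module boundaries, and function names are illustrative, not the study's actual code:

    import pandas as pd

    # Short-form instrument order (abbreviated to a few boundary items here;
    # the real ordering would list every item in the instrument).
    ITEM_ORDER = ["Intro1", "P1", "P2", "P3", "P4", "P5",
                  "SH1", "SH15", "SA1", "SA6", "SAFU37e", "SAFU38e", "DEMO3"]
    POS = {item: i for i, item in enumerate(ITEM_ORDER)}

    # A module's breakoff window: final response at or after `start`,
    # but before `end` (the convention used in Table 9.3).
    MODULES = [("Time-setting module", "Intro1", "P5"),
               ("MEO screeners", "P5", "SH15"),
               ("Sexual assault classification", "SH15", "SA6")]

    def breakoff_rates(final_items: pd.Series) -> pd.Series:
        """Percentage of survey starters whose final item falls in each window."""
        pos = final_items.map(POS)
        rates = {name: 100.0 * ((pos >= POS[start]) & (pos < POS[end])).mean()
                 for name, start, end in MODULES}
        return pd.Series(rates)

    # Example: three starters whose final items were P3, SH15, and DEMO3.
    print(breakoff_rates(pd.Series(["P3", "SH15", "DEMO3"])))

A starter whose final response is DEMO3, the last item, falls in no window and so is counted as a completer rather than a breakoff.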
Prior Form Instrument

The 2012 WGRA instrument had substantial breakoff (DMDC, 2014, Appendix B). In 2012, 13.9 percent of those who started the survey gave their final survey response before reaching the critical unwanted sexual contact assessment, and 18.5 percent broke off at some point prior to the final question presented to all participants. In particular, survey items that had several response options or were presented in a grid format were associated with substantial survey breakoff in 2012. For example, 3.7 percent of respondents broke off during the posttraumatic stress disorder (PTSD) checklist (Weathers et al., 1993), and 1.0 percent broke off during the Cohen Perceived Stress Scale. As discussed in Volume 1 of this series, the prior form developed by RAND substantially shortened the WGRA instrument used in previous surveys to minimize breakoff. When items that presented a substantial response burden were not necessary for our research goals, we eliminated them from the instrument.

Table 9.4 presents the number of individuals on the prior form for whom a particular item was their final completed item. RAND's prior form had substantially lower rates of breakoff than the 2012 WGRA instrument, with 6.5 percent cumulative breakoff prior to the last item presented to all participants (the unwanted sexual contact assessment, PF32). The biggest single source of breakoff was the gender discrimination scale (PF27), with 1.5 percent dropping out in that module. The rate of breakoff within the gender discrimination scale in the 2012 WGRA was also 1.5 percent (DMDC, 2014, Appendix B).
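The cumulative percentages in Tables 9.2 and 9.4 are, in effect, a cumulative distribution over final completed items. A minimal sketch of that tabulation follows, using a small illustrative subset of items and counts rather than the full study data:

    import pandas as pd

    # Final completed item for each starter, with items ordered as in the
    # instrument (an abbreviated, illustrative subset of the prior form).
    final_item = pd.Series(["PF2"] * 125 + ["PF3"] * 65 + ["PF32"] * 28_952)
    item_order = ["PF2", "PF3", "PF32"]

    counts = final_item.value_counts().reindex(item_order, fill_value=0)
    pct = 100 * counts / counts.sum()
    table = pd.DataFrame({"Count": counts,
                          "Percentage": pct.round(2),
                          "Cumulative Percentage": pct.cumsum().round(2)})
    print(table)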
Effect of Survey Breakoff on Sample Characteristics
While survey breakoff represents a very small portion of the overall survey nonresponse, because this type of nonresponse is directly informed by—or is a reaction to—the content of the survey, it can represent a source of bias that is not well mitigated by survey weights or other nonresponse corrections. Even though the proportion of the sample that broke off the survey was substantially lower for the RMWS than for prior administrations of the WGRA, it is still possible that the breakoff represents a meaningful source of bias in study estimates. This is particularly true when estimating rare outcomes, such as sexual assault, that occur in less than 2 percent of the population being studied.

To better understand the role played by this type of nonresponse, it is useful to investigate how the sample characteristics were affected by survey breakoff.


Table 9.4
Final Participant Response by Survey Item, Prior Form

Item         Count    Percentage    Cumulative Percentage
PF2            125         0.40%             0.40%
PF3             65         0.21%             0.60%
PF4_1           14         0.04%             0.65%
PF4_2           14         0.04%             0.69%
PF4_4            4         0.01%             0.70%
PF6             67         0.21%             0.91%
PF7             46         0.15%             1.06%
PF8             27         0.09%             1.14%
PF9             13         0.04%             1.19%
PF10           181         0.57%             1.76%
PF11_1           3         0.01%             1.77%
PF11_2           5         0.02%             1.78%
PF11_3           1         0.00%             1.79%
PF11_4           3         0.01%             1.80%
PF11_5           1         0.00%             1.80%
PF11_6          68         0.22%             2.01%
PF17            20         0.06%             2.08%
PF18           226         0.71%             2.79%
PF19_1           6         0.02%             2.81%
PF19_2           9         0.03%             2.84%
PF19_3           3         0.01%             2.85%
PF19_4           1         0.00%             2.85%
PF19_7           2         0.01%             2.86%
PF19_9         194         0.61%             3.47%
PF27_1           5         0.02%             3.49%
PF27_2           7         0.02%             3.51%
PF27_3           4         0.01%             3.52%
PF27_4           6         0.02%             3.54%
PF27_5           1         0.00%             3.55%
PF27_6           5         0.02%             3.56%
PF27_7         256         0.81%             4.37%
PF27_8           4         0.01%             4.38%
PF27_9           2         0.01%             4.39%
PF27_10          2         0.01%             4.40%
PF27_12          1         0.00%             4.40%
PF27_13        139         0.44%             4.84%
PF28             2         0.01%             4.85%
PF29_4          10         0.03%             4.88%
PF29_5         172         0.54%             5.42%
PF30_1           2         0.01%             5.43%
PF30_2           9         0.03%             5.46%
PF30_3           4         0.01%             5.47%
PF30_4           2         0.01%             5.48%
PF30_6           5         0.02%             5.49%
PF30_8           1         0.00%             5.49%
PF30_9           1         0.00%             5.50%
PF30_10        136         0.43%             5.93%
PF30_11          3         0.01%             5.94%
PF30_13          2         0.01%             5.94%
PF30_14          2         0.01%             5.95%
PF30_19        141         0.45%             6.40%
PF31            33         0.10%             6.50%
PF32        28,952        91.57%            98.07%
PF33            23         0.07%            98.15%
PF34_3           1         0.00%            98.15%
PF34_5           8         0.03%            98.17%
PF35_7           1         0.00%            98.18%
PF35_9          12         0.04%            98.22%
PF36             1         0.00%            98.22%
PF37             4         0.01%            98.23%
PF38_10          8         0.03%            98.26%
PF40             2         0.01%            98.26%
PF41             4         0.01%            98.28%
PF42_3           1         0.00%            98.28%
PF44_3           7         0.02%            98.30%
PF46             9         0.03%            98.33%
PF47             1         0.00%            98.33%
PF48_10          3         0.01%            98.34%
PF59_4           3         0.01%            98.35%
PF60_7           2         0.01%            98.36%
PF69_14          2         0.01%            98.36%
PF72_15          7         0.02%            98.39%
PF73             6         0.02%            98.41%
SAFU25         504         1.59%           100.00%

NOTE: Percentages are given among the proportion of the sample that answered at least one question (N = 31,616), all of whom are active-component DoD service members. Item labels in bold were administered to all respondents. Some survey items are not presented in this table because they were not the final survey item for any participant. The instrument is included in Volume 1 of this series.


For example, if the participants who break off early are at unusually low risk for sexual assault (e.g., male, Air Force, senior officers), it supports a theory that nonresponse occurs because the topic of the survey is not personally relevant to respondents, is not seen as important, or is considered objectionable. On the other hand, if survey breakoff is associated with characteristics that put the respondent at risk for sexual assault (e.g., female, Marine Corps, junior enlisted), it supports a theory that nonresponse occurs because individuals who have personal experiences with, or knowledge of, sexual assaults do not want to disclose that information on the survey. These scenarios have opposite implications for the likelihood of non-ignorable missingness—i.e., missingness attributable to respondents' reactions to the specific topic of the survey—that is not well captured in our nonresponse weights. Alternatively, breakoff may occur in a way that is unassociated with sexual assault risk, which would suggest relatively minimal nonresponse bias.
To investigate this issue, we estimated the risk for sexual assault for every service member in the sample based on all available administrative data. Specifically, we used the "composite variables" that had been derived as part of the RMWS weights. These are predicted risks for sexual assault based on a regression model that predicted sexual assault from a large list of variables (see Volume 1, Table 5.3) among survey respondents. These predicted values were computed for both respondents and nonrespondents, including those who started the survey but dropped out before answering the sexual assault questions.

As discussed in Chapter Three, the characteristics that gave individuals a low propensity to respond to the survey also put them at high risk for sexual assault. The current analyses investigate whether that association occurs because high-risk individuals never hit the survey web page (and possibly never got the invitation), refused to participate when informed about the content of the survey, or began to participate but broke off when they got to detailed questions about sexual assault and harassment. The data in Table 9.5 track the average predicted risk for sexual assault for different subsets of the sample, beginning with the full sample, moving to the portion of the sample who hit the web page, to those who started the survey, and finally to those who finished the sexual assault classification module (and were counted as respondents). This predicted risk estimate is based on the full range of sample characteristics investigated during derivation of the RMWS sample weights. It is based on the sum of the three subtypes of sexual assault risk that were derived as part of those weights (see Chapter Three).
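To make this construction concrete, the following is a minimal sketch, on synthetic data, of fitting a risk model among respondents, scoring the full sample (respondents and nonrespondents alike), and tracking mean predicted risk across nested subsamples as in Table 9.5. The variable names, predictors, and effect sizes are illustrative assumptions, not the study's actual model, which used the full predictor list in Volume 1, Table 5.3:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 50_000

    # Synthetic stand-in for the administrative file: two binary predictors
    # and nested participation flags. All names and rates are illustrative.
    df = pd.DataFrame({"female": rng.integers(0, 2, n),
                       "junior_enlisted": rng.integers(0, 2, n)})
    df["hit_start_page"] = rng.random(n) < 0.35 - 0.05 * df["junior_enlisted"]
    df["finished_sa_module"] = df["hit_start_page"] & (rng.random(n) < 0.9)
    # The observed outcome exists only for those who finished the module.
    p = 0.01 + 0.03 * df["female"] + 0.01 * df["junior_enlisted"]
    df["sa_outcome"] = np.where(df["finished_sa_module"], rng.random(n) < p, np.nan)

    # Fit risk among respondents; score everyone, respondent or not.
    resp = df[df["finished_sa_module"]]
    X_resp = sm.add_constant(resp[["female", "junior_enlisted"]])
    fit = sm.Logit(resp["sa_outcome"], X_resp).fit(disp=0)
    X_all = sm.add_constant(df[["female", "junior_enlisted"]])
    df["predicted_risk"] = fit.predict(X_all)

    # Track mean predicted risk across nested subsamples, as in Table 9.5.
    for label, mask in [("full sample", np.ones(n, bool)),
                        ("hit start page", df["hit_start_page"]),
                        ("finished SA module", df["finished_sa_module"])]:
        sub = df.loc[mask, "predicted_risk"]
        print(f"{label:>20}: n={mask.sum():,}  mean predicted risk={sub.mean():.2%}")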
The predicted risk of sexual assault for the sample shifted substantially between the full sample invited to participate and the subsample that hit the survey start page. This step accounted for the bulk of nonresponse, more than 300,000 cases, and nearly the entire shift in predicted risk.


Table 9.5
Predicted Risk of Sexual Assault by the Type of Nonresponse or Breakoff

Subsample                                          Sample Size   Mean Risk   Loss of N   Change in   Mean Risk
                                                                                         Mean Risk   for Dropouts
The full sample invited to RAND form                   459,279       2.13%
. . . who hit the survey start page                    156,130       1.81%     303,149       0.32%          2.30%
. . . who answered at least one survey question        146,986       1.81%       9,144       0.01%          1.94%
. . . who started MEO screeners                        144,775       1.80%       2,211       0.00%          1.98%
. . . who finished MEO screeners                       143,003       1.80%       1,772       0.00%          2.05%
. . . who started sexual assault module                141,291       1.79%       1,712       0.01%          2.45%
. . . who finished sexual assault classification       139,968       1.79%       1,323       0.00%          1.66%

NOTES: Samples include the active and reserve components, DoD, and Coast Guard. Risk of sexual assault is based on a regression model including all predictors in Volume 1, Table 5.3. Data are unweighted.

The full, unweighted sample had a mean predicted risk of sexual assault of 2.1 percent,1 but those who hit the survey start page had a risk of 1.8 percent, a rate that is 15 percent lower. The sampled service members who dropped out of the sample at this stage had a mean risk of 2.3 percent; that is, they were at higher risk for sexual assault than the overall sample average.
The change in predicted risk for sexual assault after that point was minimal. This was both because fewer individuals dropped out at those stages and because those who did leave tended to be similar in their risk of sexual assault to the subsample who hit the survey web page. Nonconsent (i.e., arriving at the start page but not answering any questions) shifted the sample toward slightly lower risk, but only by 0.01 percentage points. This was because the predicted risk for those who did not consent (1.9 percent) was slightly higher than that of the subsample that hit the survey start page. On the other hand, those who did not consent were actually at slightly lower risk for sexual assault than the overall sample. This pattern, in which those who broke off the survey were at slightly higher risk than those who started the survey but at slightly lower risk than the full population, continued until the sexual harassment follow-up module (between the end of the MEO screeners and the beginning of the sexual assault module). These follow-up questions were given only to those respondents who indicated experiencing one of the sexual harassment or gender discrimination screeners. Such individuals have a high risk of sexual assault, so the individuals who broke off in the MEO follow-up module had characteristics that put them at higher risk than either the overall sample or the subsample that completed the MEO screeners. This resulted in another small shift in the sample of 0.01 percentage points.2

1 This value differs slightly from the overall study estimate of sexual assault risk because those estimates were for the full population, while this estimate is for the full sample. Because the sampling plan included an oversample of women, these estimates of sexual assault risk are slightly higher than the study estimates for the population.
Breakoff within the sexual assault classification module also had no meaningful effect on the predicted risk of the remaining sample. Those who broke off in the module had a predicted sexual assault risk of 1.7 percent, which is actually lower than that of the full sample (2.1 percent) and very slightly lower than that of the subsample that started the module (1.8 percent). In other words, the service members who quit the survey during the sexual assault assessment were slightly more likely to come from low-risk groups (men, officers, etc.) than high-risk groups (women, enlisted, etc.), a response bias that is in the opposite direction of the overall effect of nonresponse.

In summary, the shift in predicted risk of sexual assault between the overall sample being targeted and the survey respondents occurred almost entirely before the service members arrived at the study web page, and before they had been presented with detailed information about the content of the survey. Once they got to the survey start page, the remaining nonresponse (caused by either survey nonconsent or breakoff) was not meaningfully associated with the respondent characteristics that put individuals at risk for sexual assault. Combining across survey nonconsent and survey breakoff, the predicted risk for these nonrespondents is quite similar to that of the original sample, with a mean risk of sexual assault of 2.0 percent. Thus, there is little evidence that the net nonresponse bias we observe overall was produced as a reaction to the content of the survey.
Conclusions
During the survey design phase, the study team attempted to design an instrument that minimized nonresponse due to survey breakoff. This included moving base survey items to the front of the instrument, reducing instrument length (for as many respondents as possible), avoiding questions with complex response options, and avoiding complex question wording. This resulted in generally low rates of survey breakoff: less than 4 percent broke off before the mandatory sexual assault items. This compares favorably with the 2012 WGRA instrument, in which 13.9 percent broke off before the mandatory item assessing unwanted sexual contact.
2 The decision to include the detailed sexual harassment and gender discrimination follow-up questions before the sexual assault assessment involved several trade-offs. Because those follow-up questions provided more opportunities for respondents who experienced these MEO violations to drop out of the survey before the required sexual assault questions, we were concerned that they could result in meaningful survey bias. To mitigate this, we randomized a large portion of the sample, the RAND short form, to not receive those follow-up questions. We can empirically investigate whether including the MEO follow-up module resulted in downward bias in the sexual assault prevalence estimates by comparing estimates across forms. The rates of sexual assault estimated in the forms that included the MEO follow-up questions were actually slightly higher than, but not significantly different from, the rates estimated in the short form. There does not appear to be a net bias introduced by inclusion of the MEO follow-up questions before the sexual assault questions.


Overall, our analysis of the rates of survey nonconsent and survey breakoff, as well as their effects on the sample characteristics, suggests that there is little evidence that overall nonresponse was a reaction to the survey content. Only a small proportion of study nonrespondents dropped out after being directly informed about the survey content or seeing the survey questions, and those who dropped out after that point did not, on average, have a higher or lower predicted risk for sexual assault. The difference in predicted sexual assault risk between the respondents and the intended sample was driven by the sampled service members who either did not receive or did not respond to the mailed and emailed invitations.

CHAPTER TEN

Service Member Tolerance of the RAND Form
Amy Grace P. Donohue, Caroline Epley,
Andrew R. Morral, and Dean Kilpatrick

The RAND form used behavioral descriptions of forms of physical contact that can
qualify as sexual assault under UCMJ Article 120. Some of these descriptions were
more explicit than had been used in earlier WGRA surveys, though they are comparable to the behaviorally specific language found in many surveys of the general population (e.g., the National Intimate Partner and Sexual Violence Survey conducted by
the Centers for Disease Control and Prevention), and surveys of special populations,
such as college students.1
The use of behaviorally specific language poses two known risks: some who take the survey may be offended by its content, and some may find that the language triggers upsetting memories and feelings. Because these are known risks, respondents were advised during the informed-consent process to take them into consideration in deciding whether to participate in the study. Nevertheless, some participants were offended or upset by the survey, either because they regarded the language used in the sexual assault screening items as unnecessarily graphic or too intrusive, or because it triggered disturbing memories or could trigger such memories in others.2
Given the unprecedented size of the sample (more than one-half of a million service members were invited to participate, and close to 200,000 accessed the online survey), the complaints about the survey raised three key questions. First, was the rate of respondents who complained about the survey language unusually or unacceptably high? Second, did the survey language harm some victims of sexual assault or discourage their participation? Third, were the risks to participants sufficiently great to outweigh the value of the scientific knowledge gained by using questions with clear, explicit language? In this chapter, we examine these questions using data RAND collected on all survey complaints filed with RAND, with Westat (the organization that fielded the survey and supplied survey helpdesk operators), or with an office at DoD, when we were informed about those complaints.
1 A good discussion of methods for measuring sexual assault that reviews many survey instruments is found in National Research Council, 2014.

2 RAND also received other types of complaints that are common to most survey efforts, such as complaints about being contacted at home or at work, being contacted too many times, wasting government money, etc. In this chapter, we focus just on those complaints that concerned the survey language.


Complaint Rates
There is little objective or standardized evidence on the rates of complaints in other surveys. Moreover, definitions of what constitutes a complaint differ from survey to survey. According to DMDC, which administered the 2012 WGRA, there were six complaints about the survey instrument itself: five that it was too long and one that it was too intrusive (DMDC, 2015). Out of 22,792 completed WGRA surveys in 2012, these six complaints represent a complaint rate of 27 per 100,000 completed surveys.

In 2014, RAND fielded a shortened version of the 2012 WGRA form, the prior form, to a sample of respondents, and three versions of the RAND form to others. Although it was not always possible to determine which form complaints referred to, we know with certainty that two of the 149 survey-language complaints concerned the prior form, for a complaint rate of seven per 100,000 completes (Table 10.1). This is significantly lower than the rate for the 141 complaints attributable to the RAND form, which corresponds to 122 complaints per 100,000 completes.
Table 10.1
Complaints Received About Survey Language in the RAND Survey, by Respondent and Survey Characteristics (When Known)

                    Complaints per 100,000 Completes     95% CI
By form
  Prior form                                       7     (1–24)
  RAND form                                      122     (103–144)
By gender
  Men                                            125     (102–153)
  Women                                           83     (63–108)
By service
  Army                                           119     (91–152)
  Navy                                            65     (38–102)
  Air Force                                       90     (66–119)
  Marine Corps                                    51     (19–111)
By pay grade
  E1–E4                                           43     (25–70)
  E5–E9                                          131     (106–161)
  O1–O3                                           42     (19–79)
  O4–O6                                          200     (138–279)
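The report does not state how the confidence intervals in Table 10.1 were computed. As an illustration, an exact (Garwood) Poisson interval, a common choice for rare-event rates, reproduces intervals of roughly this form; the completed-survey denominators below are approximate round numbers implied by the reported rates, not the study's exact counts:

    from scipy.stats import chi2

    def rate_per_100k(complaints: int, completes: int, alpha: float = 0.05):
        """Exact (Garwood) Poisson CI for a rare-event rate, scaled per 100,000."""
        lo = 0.0 if complaints == 0 else 0.5 * chi2.ppf(alpha / 2, 2 * complaints)
        hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (complaints + 1))
        scale = 100_000 / completes
        return complaints * scale, lo * scale, hi * scale

    # Two prior-form complaints among roughly 29,000 completes; 141 RAND-form
    # complaints among roughly 116,000 completes (approximate denominators).
    for label, x, n in [("Prior form", 2, 29_000), ("RAND form", 141, 116_000)]:
        rate, lo, hi = rate_per_100k(x, n)
        print(f"{label}: {rate:.0f} per 100,000 (95% CI {lo:.0f}-{hi:.0f})")

With these inputs, the sketch yields rates of about 7 and 122 per 100,000, with intervals close to those shown in the table.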


Men had a statistically significantly higher complaint rate than women, and junior enlisted members and junior officers had significantly lower complaint rates than senior enlisted members and senior officers. Interestingly, therefore, those at the highest risk of sexual assault tended to complain least, whereas those at the lowest risk of sexual assault were most likely to find the survey language objectionable:
• Women have five times the risk of sexual assault as men, but their complaint rate was one-third lower than men's.
• Service members in ranks E1–E4 have more than twice the risk of sexual assault as those in ranks E5–E9, but their complaint rate was one-third that of E5–E9 members.
• Junior officers (O1–O3) have twice the risk of sexual assault as more-senior officers, but senior officers' complaint rate was nearly five times that of junior officers.

Although the propensity to complain appears associated with risk for sexual assault, this may not represent a causal association. It may be, for example, that more-senior personnel were more likely to lodge complaints because they were passing along concerns that had been brought to their attention by the individuals they lead.
Harm to Victims
Behaviorally specific questions can trigger painful memories of attacks, leading to distress among victims. While research on such reactions is not extensive, several studies suggest that survey-induced distress is usually short-lived. Across several studies, the proportion of respondents who reported being upset during administration of surveys on sexual assault and other traumatic experiences ranges from 4.5 percent to 13 percent (Finkelhor et al., 2014; Galea et al., 2005; Kilpatrick et al., 2007; Zajac et al., 2011). Moreover, most who experience distress while answering survey questions no longer feel distressed after completing the survey (Galea et al., 2005; Kilpatrick et al., 2007; Zajac et al., 2011). For instance, the National Women's Study–Replication (Kilpatrick et al., 2007), funded by the National Institute of Justice, conducted a victimization survey of 3,001 adult women from the U.S. general population and a national sample of 2,000 U.S. college women. In addition to being screened for rape experiences using questions similar to those used in the RAND survey, women were asked about exposure to other traumatic events, alcohol and drug use, PTSD, and depression. In both samples, 7.6 percent of women said they were upset by any of the survey questions. Just one-half of 1 percent, or 26 out of 5,001 respondents in both samples, said they were still upset at the end of the interview, ten of whom were sufficiently upset to accept an offer to speak with a counselor. None felt they needed to talk to a counselor immediately.


The same pattern of findings was obtained in the National Survey of Adolescents–Replication project, funded by the National Institute of Child Health and Human Development (Zajac et al., 2011). This project surveyed 12- to 17-year-old adolescent males and females about their experiences with physical and sexual assault, witnessed violence, exposure to other traumatic events, alcohol and drug use, PTSD, depression, and suicidal ideation. Out of 3,614 adolescents, only eight remained upset at the end of the interview, and only two wished to speak to a counselor. Both of these studies found that individuals with traumatic-event histories were slightly more likely to experience distress, but the overwhelming majority were not distressed by the end of the survey interview. In summary, these findings are consistent with other research demonstrating that asking behaviorally specific questions about rape, sexual assault, and other traumatic events produces distress in only a small percentage of respondents and that even this transitory distress has usually dissipated by the end of the interview, except for a fraction of a percent of the thousands of respondents surveyed.
Twelve respondents contacted RAND or others to express the concern that victims of sexual assault would find the survey too painful to complete, making the survey invalid for estimating the prevalence of sexual assault and possibly harmful to victims of sexual assault. Of these 12 contacts, three indicated that they themselves had been assaulted and had discontinued their participation in the survey because the questions brought up painful feelings or memories. None of these self-identified victims indicated that they experienced any lasting distress from the survey.3 If their experiences were typical of sexual assault victims, this would provide compelling evidence that the survey was too distressing to achieve its objectives. However, many others with sexual assault histories were able to complete the survey. Indeed, 12,210 service members with sexual assault histories completed the survey.
Some who complained about the survey language may not have mentioned that they were sexual assault victims. Nevertheless, considering just the three known sexual assault victims who complained out of the 12,210 victims who completed the survey, their complaint rate of 25 per 100,000 would be substantially lower than the overall complaint rate. Again, this may reflect a greater tolerance of questions concerning the details of sexual assaults among the groups of service members with the highest exposure to this crime.

Others with sexual assault histories appreciate the opportunity to share their experiences and have them considered when DoD investigates the prevalence of sexual assault in the military. Indeed, questions about crime victimization can sometimes be both distressing and appreciated. Notably, more self-identified victims (four) contacted RAND to express appreciation for the survey or to provide additional information on their experiences than to object to the language used in the survey.

3 Each of these reports was reviewed by RAND's and DoD's human subjects protection committees, which determined they represented expected risks of survey participation.


Benefits of the New RAND Survey Using Explicit Questions to
Measure Sexual Assault
When research poses any risk to participants, it is critically important that the expected benefits of the research outweigh the potential risks. This raises the question of whether the new RAND survey and its more-explicit questions, which appear to produce more distress among participants than the earlier WGRA questions, offered any benefits to individual research participants as well as to society.

One potential benefit to service members, particularly those with sexual assault histories, is the validation that comes from DoD taking the issue sufficiently seriously to measure sexual assault in an unambiguous way, so that their experiences cannot be discounted. The public discounting of the seriousness of the unwanted sexual contacts reported on the prior measure (due to the ambiguities in that item) contributes to a perception that victims' reports of their own experiences on a confidential survey are not reliable. Relatedly, some with sexual assault histories will appreciate the opportunity to disclose their experiences to DoD through a mechanism that preserves their anonymity. Others without self-disclosed sexual assault experiences contacted us to express gratitude that the survey was being conducted, suggesting that some service members recognized the benefits to their own and their colleagues' work lives that could result from DoD gathering better data on sexual assault and sexual harassment in the military.
Conceivably, however, all of the benefits described above could have been achieved using survey language that offended or upset fewer service members. To what extent, therefore, was the use of questions with more-explicit language necessary to improve the quality of sexual assault prevalence research? As noted above, there is scientific consensus that using clear, explicit questions is essential for good sexual assault prevalence research (National Research Council, 2014). Past surveys using less-precise language produced results that raised questions among members of Congress and others charged with creating and implementing sexual assault policies, and those questions distracted from, rather than supported, efforts to improve prevention and treatment programs (see discussion in Volume 1, Chapter 1). Data cited elsewhere in this report provide evidence that the new RAND questions yield more-specific and more-sensitive sexual assault prevalence data than were obtained using the prior form (Chapters Five and Eight). Therefore, the research using more-explicit questions had a clear benefit to DoD because the sexual assault data obtained were much more accurate.
Conclusions
The RAND form was more likely to trigger survey-language complaints than the prior form. Recognizing that many who were offended or upset by the language in the survey will not file a complaint, the true rate at which respondents were offended or disturbed is undoubtedly higher than the calculated rate of 122 complaints per 100,000 completes. Unfortunately, we know of no comparable data from civilian surveys of sexual assault—most of which use similarly specific behavioral and anatomical language—with which to evaluate whether this rate of complaints is especially high. We can, however, say that the offense was not sufficiently severe or widespread to cause a surge of breakoffs during the sexual assault screening module, where the language drawing complaints occurred. As shown in Chapter Nine, more respondents broke off in the uncontroversial sexual harassment screening module than in the sexual assault screening module, and breakoffs overall were similar for the combined RAND forms and the prior form (which drew few language complaints). Therefore, we believe that the fact that one out of every 820 who completed the survey complained about the language indicates that the overall level of distress or offense was not excessive for a survey addressing such sensitive and uncomfortable topics.
Human research participant protection regulations do not require research to be entirely risk-free, as that would be impossible. Instead, these regulations require an analysis of whether the risks posed by the research are outweighed by its potential benefits. As described above, the RAND form had potential benefits both to individual research participants and to scientific knowledge about the prevalence and nature of sexual assault in the military. Ultimately, however, the decision as to whether the complaint rate—or distress generally—is too high rests on a judgment about the benefits of asking questions in the manner that triggers more complaints relative to the approach that triggers fewer. Our view is that obtaining the most-accurate data possible on the number and proportion of service members who are sexually assaulted each year is critical for sound policy. It enables DoD to determine how well DoD policy initiatives are preventing sexual assault and whether the existing military justice and health care response systems are adequate.

The fact that participants at the lowest risk for sexual assault tended to object to the survey language at the highest rates is a surprising finding worthy of further investigation. Possibly, the risk of assault feels so remote to these respondents that the minor inconvenience of being asked whether they themselves have experienced such violations outweighs any benefits they can imagine the survey producing. Alternatively, perhaps men, more-senior pay grades, or officers are more likely than others to express complaints about any topic, in which case their higher complaint rates would have nothing to do with their lower risks of sexual assault or harassment.
As the White House has recommended, fears of victims being harmed by survey questions should not deter efforts to understand and enumerate sexual assault crimes (White House Task Force to Protect Students from Sexual Assault, 2014). In the RMWS, as in other studies, there is no evidence that the offense or distress caused any lasting harm to survey respondents. Moreover, the offense some take when reading descriptions of unwanted sexual encounters represents a tolerable risk when respondents are notified of the risk during their informed consent to participate in the research.

CHAPTER ELEVEN

Conclusions and Recommendations for Future Administrations of the WGRA
Andrew R. Morral, Terry L. Schell, and Kristie L. Gore

The 2014 RMWS was the largest study ever conducted to examine sexual assault and sexual harassment in the U.S. military. With nearly 170,000 survey respondents, the prevalence estimates generated from the study frequently had 95-percent confidence intervals that spanned less than half a percentage point, suggesting extraordinary precision. But these confidence intervals, which assess only the uncertainty due to random sampling variability, could be misleading. Sampling variability is unlikely to be the primary source of error for our estimates. Instead, larger errors could result from several factors, including specification errors if, for instance, our sexual assault screening module misclassifies individuals; coverage errors due to the inclusion criteria used in the sample frame; and survey nonresponse, if our sample weights failed to fully adjust for important differences between those who chose to participate in the study and those who did not.

This volume examined the influence and magnitude of these less easily quantified sources of error. Our investigations across all of these sources found no conclusive evidence of substantial bias or error in the primary RMWS estimates. However, there was a general pattern across these investigations suggesting that our primary RMWS estimates of sexual assault, sexual harassment, and gender discrimination are more likely to be underestimates than overestimates of the true values. In particular, three types of evidence suggest that the survey estimates could underestimate the true values, though any such underestimate is likely to be relatively small: (1) the nonresponse follow-up studies, (2) the analysis of individuals excluded from the sample frame, and (3) the comparison between survey estimates of officially reported sexual assaults and the number of actual reports.

In contrast, we found little evidence that the study was overcounting these outcomes. For example, although we conclude that a small number of pre-service sexual assaults may be captured in our estimates, this number is almost certainly lower than the larger number of assaults that go uncounted because we excluded members with fewer than six months of service and those who left the military shortly before the survey fielded. Similarly, our analyses of the performance of the sexual assault and sexual harassment items provide no indication that more incidents were counted as crimes or violations than should have been.


Our conclusion that the study is more likely to underestimate than to overestimate the true values is stronger for the estimated counts of individuals who experienced these violations (e.g., 20,300 service members experienced a sexual assault in the past year) than for the estimated prevalence of these crimes (e.g., 1.5 percent of service members experienced a sexual assault in the past year). This is because the strongest evidence for bias comes from the fact that the survey sample frame clearly excluded some individuals who served in the military in the past year and who may have experienced these outcomes (e.g., members who separated before the sample was drawn). This source of bias may explain a substantial proportion of the total survey error identified by our comparison of survey-estimated counts of reported sexual assaults to official reports of sexual assault (see Chapter Four). In contrast, evidence of bias in the estimated prevalence of sexual assault, sexual harassment, and gender discrimination is weaker; the incomplete coverage of the sample frame necessarily has smaller effects on prevalence rates than on population counts. The three nonresponse follow-up studies (Chapter Two) provide some limited evidence that the reported prevalence underestimates the true value. However, those effects were descriptively small, were not consistent across follow-up methods, and were not uniformly statistically significant.
In addition to the primary RMWS estimates, the RAND study also replicated the methods used in previous WGRA studies to produce time-trend data using the same measurement and weighting methods. The current investigations of bias provide stronger evidence that the WGRA methods underestimate the true rate of sexual assault. In particular, the analysis of nonresponse weights found that the WGRA system of weights resulted in the underrepresentation of a number of groups of service members who have a high risk for sexual assault and harassment. In addition, the prior form identified substantially fewer penetrative sexual assaults than the RAND form, particularly among men. However, this classification error was partially offset by telescoping errors, which resulted in a substantial proportion of respondents being counted as experiencing an unwanted sexual contact in the past year when, in fact, their last such experience occurred more than 12 months prior to the survey.
In the sections that follow, we discuss specific findings from this volume and make recommendations for how future administrations of the WGRA might benefit from what we learned from the 2014 RMWS experience. Whereas most of these recommendations derive from findings reported in this volume, we also offer recommendations based on our experience conducting the 2014 RMWS.
Measurement Approach
Evidence provided in this volume demonstrates that the RAND instrument more accurately counted sexual assault crimes and MEO violations than the method previously used in the WGRA. For the measurement of sexual assault, the RAND form counted as past-year crimes fewer events that actually occurred more than a year ago; it identified a large number of sexual assaults that were abusive, demeaning, or acts of hazing that appear to be missed by the earlier approach; the language and criteria used in the RAND form were more interpretable as crimes under the UCMJ; and the RAND questions used more-descriptive and unambiguous anatomical and behavioral language, which has been identified as a survey best practice for the measurement of sexual assault crimes (National Research Council, 2014).
The principal trade-off in implementing this best practice is that the more-precise language used to describe sexual assaults offends more people than the language previously used to describe "unwanted sexual contacts" (see Chapter Ten). Unfortunately, although the "unwanted sexual contact" question offended fewer people, it was not specific enough to ensure that respondents understood the range of events that counted as criminal offenses. Moreover, as we argued in Chapters Five and Eight, some evidence suggests that its reliance on undefined phrases like "oral sex" and "anal sex" may have led to an undercount of sexual assaults that were experienced by the respondent as abusive or humiliating rather than as sexual acts. Because terms like these can be confusing or misleading, the RAND survey defined sexual assaults using the same kind of specific behavioral and anatomical language used in the UCMJ. This is also similar to most civilian surveys of sexual assault that follow the guidance recently offered by the National Research Council recommending the use of specific behavioral and anatomical language for the measurement of sexual assault (National Research Council, 2014). Although there is evidence that the RAND questions generated more complaints, the actual rate of complaints was quite low, and the complaints did not appear to be a significant cause of survey breakoff (Chapter Nine). More people broke off from the RAND form during the uncontroversial sexual harassment screening than during the sexual assault screening, and breakoffs before completion of the sexual assault/unwanted sexual contact measure were generally lower for the RAND form than for the prior form.
prior form.
For sexual harassment, the prior approach required service members to understand the nuances of MEO regulations and correctly apply them by labeling their
unwanted workplace experiences “sexual harassment.” As shown in Chapter Eight,
this labeling requirement can lead to a large undercount of sexual harassment. Had
we required respondents to correctly label their sexual harassment experiences on the
RMWS, we would have undercounted the prevalence of sexual harassment among
women by 30 percent, and among men by 50 percent. Finally, the RMWS measurement of hostile work environment is more closely aligned to the language and criteria
found in military (and civilian) equal opportunity regulations.
Gender discrimination is more difficult to assess, and the RMWS has some limitations that may be addressed with further revisions. To be classified as discriminatory, comments or experiences must result in damage to the service member's career, a causal attribution that respondents may not have sufficient information to make accurately. Moreover, at the time we conducted the survey, there were careers that women were officially barred from entering in the military. Some respondents may have been told that women were unqualified for those jobs and may have felt that the individual who expressed that opinion harmed their careers. The RMWS measure of gender discrimination may not fully distinguish between this type of legal gender discrimination and the unlawful form.
For these reasons, we believe the RMWS measurement approach more accurately captures UCMJ crimes and MEO violations than earlier WGRA measures. In addition, as we indicate in Chapter Ten, the offense or distress experienced by some service members asked to consider the RAND questions is a tolerable risk given the benefits of increased precision in prevalence estimates that are highly policy relevant. However, additional questions may be helpful to clarify events classified as gender discrimination.

Recommendation: Future WGRA surveys should use the RMWS measurement approach, or comparable survey questions that use behaviorally and anatomically specific language, to clearly define criminal sexual assault and violations of equal opportunity law and policy.

Recommendation: In future WGRA surveys, DMDC should consider supplementing the RMWS measure of gender discrimination with additional questions to establish (a) whether the discrimination was legally mandated by the service, (b) the specific nature of the career harm suffered, and (c) the evidence that gender biases harmed the service member's career.
Sample Frame
The sample frame used in the 2014 RMWS and earlier WGRA surveys excluded service members with fewer than six months of service and included some whose past-year sexual assaults could have occurred before they joined the military (service members with six to 12 months of service). Although these inclusion criteria could lead to the exclusion of sexual assaults that should have been included in our prevalence estimates, or the inclusion of sexual assaults that should not have been counted, we showed in Chapter Eight that the magnitude of these errors is small, so they have minimal effects on our sexual assault prevalence estimates.

In contrast, we found that the exclusion of some service members who left the military in the year prior to drawing the sample could indeed have a significant effect on prevalence estimates. Under plausible assumptions, this exclusion suggests that the true rate of sexual assault could be 14 percent higher than reported. This conclusion is based on evidence in Chapter Three that members who separated from the military after the sample was drawn had higher rates of sexual assault than those who did not. Active-component women who had recently separated from the military at the time of the survey were almost twice as likely as other women to have been sexually assaulted, and men who separated were more than four times as likely as other men to have been sexually assaulted. Recently separated members also experienced higher rates of sexual harassment and gender discrimination. Excluding all members who recently separated from the study creates a downward bias in the estimated number of members who experienced a sexual assault in the past year, as well as in the rate of such assaults.
Recommendation: Because the omission of recent separations could lead to significant bias in estimates of past-year sexual assault, sexual harassment, and gender discrimination, we recommend including past-year separations in the sample frame of future WGRA surveys, or developing analytic approaches for estimating the number of crimes and violations experienced in the past year by those who separated. Minimally, separations that occur after the WGRA sample frame is drawn should not be counted as ineligible, as has been the practice in earlier versions of the WGRA.1

Recommendation: Because recently separated members appear to have an elevated risk of past-year sexual assault, sexual harassment, and gender discrimination, DoD should evaluate what effect such violations have on military careers and retention, and whether making an official report or receiving available services reduces the separation rates of service members who have been sexually assaulted, harassed, or discriminated against.
Sampling Plan
Women face especially elevated risks of sexual assault and sexual harassment. However, the military is composed of many more men than women. As a result, more men than women experience past-year sexual assaults, and 60 percent of the sexual assault incidents in the past year were committed against men. With our large sample of men, we were able to see for the first time that the experiences of men are quite different from those of women, and they suggest different prevention and intervention responses. Under the current sample design, the error in key estimates was much larger for men (and for the overall military) than for women, suggesting that a more efficient design would have oversampled women by a smaller factor than the current study did.
Recommendation: DMDC should design future surveys to include sufficient numbers of men in the sample to ensure ongoing assessment of the nature of sexual assaults against them. In practice, this means large-sample surveys that may not oversample women at rates as great as those in the RMWS or previous WGRA studies. This can be done without reducing the precision of women's estimates below that of men's.

1 Our understanding is that any intentional sampling of fully separated personnel would require approval by the Office of Management and Budget. This regulatory requirement should not be seen as a significant barrier to implementing an improved sampling plan.
Sample Weighting
The novel approach to nonresponse weighting that we developed for the 2014 RMWS solved a longstanding challenge in sample weighting for military surveys. Although the U.S. military maintains rich data on member demographics, test scores, service experiences, work environments, and other characteristics that could be associated with both survey nonresponse and risk for sexual assault or harassment, in practice relatively few of these factors have been directly accounted for in the nonresponse weighting methods used for earlier WGRA and other military surveys.

The rationale for limiting the number of such factors in weighting models is a good one: including many variables in such models can drive up variance in the weights, undermining the precision of even the largest surveys. Variance increases when factors that have little or no association with the outcomes of interest are included in the model, because such factors offer little or no nonresponse bias reduction. Our innovation was to use a large set of population characteristics to construct a small number of derived variables that captured the portion of variance from the larger set that was associated with the outcomes of interest, and then to use just these derived variables to supplement the factors traditionally used in WGRA nonresponse weighting models (for details on this method, see Volume 1, Chapter Five).
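A minimal sketch of this two-step construction on synthetic data follows; the variable names, model choices, and effect sizes are illustrative assumptions, and the actual RMWS derivation is described in Volume 1, Chapter Five:

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 20_000

    # Synthetic administrative file: many weak predictors of the outcome.
    X_admin = pd.DataFrame(rng.normal(size=(n, 30)),
                           columns=[f"admin_{i}" for i in range(30)])
    responded = rng.random(n) < 0.30
    outcome = (rng.random(n) < 0.02 + 0.01 * (X_admin["admin_0"] > 1)).astype(int)

    # Step 1: among respondents (the only cases with observed outcomes),
    # compress the large predictor set into one derived risk score, then
    # score the full sample, respondents and nonrespondents alike.
    risk_model = LogisticRegression(max_iter=1000)
    risk_model.fit(X_admin[responded], outcome[responded])
    composite = risk_model.predict_proba(X_admin)[:, 1]

    # Step 2: model response propensity using the derived score (alongside
    # whatever traditional weighting cells would be used), then invert the
    # fitted propensities to form nonresponse weights for respondents.
    prop_model = LogisticRegression().fit(composite.reshape(-1, 1), responded)
    propensity = prop_model.predict_proba(composite.reshape(-1, 1))[:, 1]
    weights = 1.0 / propensity[responded]
    print(f"mean weight among respondents: {weights.mean():.2f}")

The design choice this illustrates is that only the variance in the large predictor set that is associated with the outcome enters the weighting model, which limits the weight variance that many raw predictors would otherwise introduce.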
As demonstrated in Chapter Three, this approach succeeded in reducing differences between the analytic sample and the population on a wide range of factors associated with both nonresponse and key outcomes that were not fully addressed using the conventional methods. Moreover, nearly all of the factors associated with a higher risk of sexual assault were also associated with a propensity for survey nonresponse, so the exclusion of these factors resulted in the WGRA weights underestimating the true rate of sexual assault. In contrast, the RMWS weights result in an analytic sample that appears to have overall levels of risk for sexual assault, sexual harassment, and gender discrimination that are similar to those of the population of interest.

Finally, these reductions in bias were achieved with only modest inflation of variance in the survey estimates. Whereas the overall design effect associated with the traditional WGRA weights was 2.62, the RMWS weights produced a design effect about 40 percent larger (3.69). We would argue, however, that for a survey this large—with its already extraordinary precision—the small loss of precision associated with the increased variance is well worth the bias reduction we achieved.
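The report does not state which design-effect formula underlies the 2.62 and 3.69 figures; Kish's approximation, deff = n * sum(w^2) / (sum(w))^2, is the standard way to express variance inflation due to unequal weights. A minimal sketch:

    import numpy as np

    def kish_design_effect(weights: np.ndarray) -> float:
        """Kish approximation: variance inflation due to unequal weights."""
        w = np.asarray(weights, dtype=float)
        return len(w) * np.sum(w ** 2) / np.sum(w) ** 2

    # Illustrative only: more-variable weights yield a larger design effect.
    rng = np.random.default_rng(2)
    print(kish_design_effect(rng.lognormal(0.0, 0.8, 10_000)))  # modest spread
    print(kish_design_effect(rng.lognormal(0.0, 1.1, 10_000)))  # wider spread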


Recommendation: DMDC should build on the approaches developed for the RMWS to include a wider set of factors in future nonresponse weighting models than has previously been possible for military surveys like the WGRA.
Improving Response Rates
With average response rates around 30 percent, nonresponse bias poses perhaps the greatest threat to the validity of the RMWS survey findings. While we present evidence that the RMWS weights do a better job of mitigating nonresponse bias than the prior weighting methods, it would be far better to have less nonresponse bias in the first place. This requires improving recruitment for underrepresented groups.

We recruited sample members using both letters and emails. However, email invitations were the key recruitment tool: sampled members with missing email addresses had extremely low response rates, while those with missing mailing addresses did not. More than 10 percent of emails either could not be sent or bounced back as undeliverable, and we know that the true proportion of emails that failed to reach sampled service members was higher, because many military email systems do not provide delivery failure notices for emails with incorrect addresses. This suggests that up-to-date email addresses are not available in the OSD personnel systems we used to collect contact information, even though most service members with at least six months of active-duty service have email addresses issued to them by DoD. It would be ideal if DMDC personnel records were automatically and promptly updated whenever email accounts are created or deleted from any military email system. However, military email systems are not administered by DMDC, so such a change would require the cooperation of the services and other DoD organizations that maintain email systems. Improving the coverage and reliability of email contact information in the personnel systems used for survey recruitment offers a promising approach for increasing response rates.
As has been true of other surveys of the military, our response rates were lowest among junior enlisted members (pay grades E1–E4), the members at highest risk of sexual assault and sexual harassment. These members may be disproportionately assigned to duties that do not require routine use of computers, so they may check their military email accounts only infrequently and thereby encounter survey invitations less often. Many or most of these members do have smartphones on which they can access personal email accounts, and some services maintain databases in which members can update their personal email addresses so as to receive work-related emails on these devices. Sending survey invitations to these personal email addresses—assuming members have volunteered their personal email information for such work-related purposes—offers another promising approach to reaching and recruiting members who are hard to reach through DoD email accounts. Doing this, however, would require the services to change the rules governing appropriate use of their database information (covered in their System of Record Notices) to explicitly permit sharing of contact information for official government survey invitations.
As part of our follow-up studies of nonresponse, RAND learned that phone outreach was particularly effective for recruiting groups that were underrepresented in the main survey. Although the sensitive nature of this survey makes it a poor choice for live phone administration, it may be possible to use the phone to motivate participation in an automated interactive voice-recognition interview, or to use phone calls or text messages to motivate participation in the web-based survey. Such methods may improve response rates for those groups that are chronically underrepresented in DoD surveys (e.g., junior enlisted, infantry) and that do not use email as a regular part of their military duties. The feasibility of nonemail outreach methods should be investigated, as should the most effective way to construct such messages (for example, research to determine whether a recorded voice message from the service chief is more effective than one from an unnamed caller).
Recommendation: OSD, the Defense Information Systems Agency, and
the services should collaborate to improve the coverage and reliability of email
contact information in the personnel systems used for survey recruitment.
Recommendation: DMDC should investigate additional modes of recruitment (phone or text message) that improve outreach to members who do not routinely use email as part of their military duties.
Further Study of Nonresponse Bias and Survey Error
While the contents of this volume are highly technical, some of these investigations are critical to addressing legitimate concerns about the validity of the study. So long as future versions of this survey continue to have relatively low response rates (and so long as study estimates are shifted meaningfully by nonresponse weighting), ongoing research is needed to test the validity of the assumptions used in creating nonresponse weights. The critical assumption is that, after controlling for the factors included in the nonresponse weights, there is no association between survey response propensity and the outcomes of interest. The best way to test this assumption is to measure the outcomes among individuals who were treated as nonrespondents in the primary survey estimates. Thus, consistent with Office of Management and Budget (2006) guidelines, we recommend that future administrations conduct nonresponse follow-up studies. Given the results presented in Chapter Two, we suggest several changes relative to the way RAND conducted its follow-up studies. First, it may be preferable to avoid live interviewers, given the evidence of response biases with that mode. Second, it would be helpful to test a web-based follow-up survey (same mode and instrument as the main study) that uses high-intensity recruitment methods.
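
To make the key weighting assumption concrete, the sketch below shows one way a follow-up study can be used to check it: if nonresponse is ignorable given the weighting factors, a weighted estimate computed from main-survey respondents should match the rate observed among initial nonrespondents later recruited into a follow-up study. This is an illustration only, not the study's actual code; the file names, variable names, and weighting factors are hypothetical.

# Minimal sketch (hypothetical inputs) of the nonresponse-weighting check.
import pandas as pd
import statsmodels.api as sm

frame = pd.read_csv("sample_frame.csv")  # one row per sampled member
factors = ["pay_grade", "service", "gender", "age_group"]  # hypothetical weighting factors

# Model each member's probability of responding from the weighting factors.
X = sm.add_constant(pd.get_dummies(frame[factors], drop_first=True).astype(float))
model = sm.Logit(frame["responded"], X).fit(disp=0)
frame["p_respond"] = model.predict(X)

# Inverse-propensity nonresponse weights for respondents only.
resp = frame[frame["responded"] == 1].copy()
resp["weight"] = 1.0 / resp["p_respond"]
weighted_rate = (resp["weight"] * resp["outcome"]).sum() / resp["weight"].sum()

# Outcome rate among initial nonrespondents recruited into a follow-up study.
followup = pd.read_csv("followup_nonrespondents.csv")  # hypothetical file
print(f"Weighted respondent estimate: {weighted_rate:.4f}")
print(f"Follow-up nonrespondent rate: {followup['outcome'].mean():.4f}")
# A substantial gap suggests response propensity remains associated with the
# outcome even after controlling for the weighting factors.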
Similarly, there is value in investigating whether the survey results correspond to the known census of officially reported sexual assaults (Chapter Four). While reported sexual assaults are a small fraction of all sexual assaults, they are critical for assessing DoD policy, and they offer a rare opportunity to validate survey estimates against a known population value. Such a comparison provides a test of total survey error, including error due to sampling variability, incomplete sample coverage, measurement specification error, nonresponse bias, and computational errors. This investigation of total survey error should be continued in future administrations. To facilitate it, and to reduce bias from a sample frame that excludes some individuals who served in the military during the prior year, we recommend expanding the survey's sample frame to include recently separated or retired members, or assessing this group through some other data collection effort.
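
As a rough illustration of the arithmetic behind such a check (not the study's actual procedure, which is described in Chapter Four), the sketch below compares a weighted survey estimate of the number of members who filed a Victim Reporting Preference form with a census count of the kind recorded in the SAPRO database; all inputs are hypothetical.

# Illustrative total-survey-error check against an administrative census.
import pandas as pd

resp = pd.read_csv("respondents_weighted.csv")  # hypothetical respondent file
# Weighted estimate of members who filed a Victim Reporting Preference form
# during the reference year (filed_vrp coded 1 = yes, 0 = no).
survey_estimate = (resp["weight"] * resp["filed_vrp"]).sum()

sapro_count = 5000  # hypothetical census count from the SAPRO database

print(f"Survey estimate of filers: {survey_estimate:,.0f}")
print(f"SAPRO census count: {sapro_count:,}")
print(f"Estimate/census ratio: {survey_estimate / sapro_count:.2f}")
# A ratio far from 1.0 indicates net total survey error for this outcome,
# such as nonresponse bias, frame mismatch, or measurement error.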
Recommendation: In future administrations of the WGRA, DMDC
should continue to compare survey estimates with actual numbers of filed Victim
Reporting Preference forms as a measure of nonresponse bias and total survey
error more generally. The procedure we used could be further refined to better
match the survey’s sample frame with the Victim Reporting Preference statements counted in the SAPRO database.
Frequency of WGRA Administration
Sexual assault rates are unlikely to change rapidly from year to year. As such, without enormous sample sizes, annual testing is likely to detect no significant change in year-to-year rates, which many observers are likely to interpret as “no improvement,” even though improvement may in fact be occurring. Annual surveys pose other problems, too: The WGRA requires a large sample to characterize the sexual assault experiences of roughly 1 percent of the service. This means that significant portions of the male and female service member population would need to be surveyed each year, driving up survey fatigue and sensitizing the population to the survey’s content and focus. Survey fatigue and sensitization risk encouraging selective survey participation and possible nonresponse bias.
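
To see why annual testing is underpowered, consider an illustrative power calculation (ours, not a figure from the study): the number of respondents needed in each of two independent survey waves to detect a decline in a 1.0 percent rate to 0.9 percent at conventional error levels.

# Illustrative power calculation for comparing two annual waves.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.010, 0.009)  # Cohen's h for 1.0% vs. 0.9%
n_per_wave = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Respondents needed per wave: {n_per_wave:,.0f}")  # on the order of 74,000
# Detecting a 10 percent relative change in a rare outcome requires very
# large samples in every wave; smaller true changes would go undetected.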
Finally, the survey assesses experiences over a full one-year period. As such, estimates are largely insensitive to policy or programmatic changes made during the past year, because the estimates include incidents occurring both before and after the changes. This problem is compounded by the time required to analyze and disseminate results. For example, the topline RAND report, published in December 2014, included assaults dating back to August 2013. Essentially, this survey is sensitive to policy interventions that were initiated approximately two years before the survey results are made public. Conducting the survey annually would not reduce this lag between policies being implemented and their effects showing up in research, but it would make the lag less obvious to the reader. As such, conducting annual assessments may result in reports that are out of sync with the policy cycle and may hamper, rather than facilitate, effective policy decisions.
Recommendation: OSD should conduct the survey no more frequently
than once every two years.

APPENDIX A

Phone Survey Script

Annotated CATI Survey
[START OF SCREENER QUESTIONS]
INTRODUCTION
May I please speak to [SALUTE / NAME]?
[IF ASKED: My name is (INTERVIEWER’S NAME)]
1. SUBJECT SPEAKING/COMING TO PHONE
2. SUBJECT LIVES HERE – NEEDS APPOINTMENT
3. SUBJECT KNOWN, CANNOT BE REACHED AT THIS NUMBER
4. NEVER HEARD OF SUBJECT

[If Subject was not the person who initially answered the phone, verify identity]
Am I speaking to [SALUTE / NAME]?
•	 YES
•	 NO
[If “NO” AND SUBJECT IS NOT AVAILABLE, CLICK GO TO RESULT]
[IF YES, CLICK NEXT]
[Hello, my name is (INTERVIEWER’S NAME)].
I am calling on behalf of the 2014 RAND Military Workplace Study. We recently
sent a letter saying we would be calling to conduct a short survey for the Department
of Defense. This letter included an informed consent statement explaining the study
and $4 in cash. The survey takes about 7 to 12 minutes and asks whether or not you
have experienced harassment, discrimination, or inappropriate sexual behavior in your
military work environment.

Your participation in this survey is completely voluntary. Your decision to participate, or not, will have no effect on the benefits you receive. There is no penalty if
you choose not to participate. You can withdraw from the interview at any time, and
you may skip individual questions during the interview. Everyone is encouraged to
participate so that the information we provide to the DoD and Congress is an accurate
assessment of the military workplace environment.
RAND and Westat take many steps to keep your responses confidential. Your
responses will be combined with other survey responses so that you cannot be identified in any reports.
Because some of the questions in the survey are sensitive, I want to suggest you
take this call where no one else can overhear the questions. OK?
• YES
• NO
• DON’T KNOW
• REFUSED

[IF “NO”, ASK SUBJECT IF HE OR SHE CAN TAKE THE CALL WHERE NO
ONE WILL HEAR THE QUESTIONS. IF “NO” AGAIN, ASK IF THERE IS
ANOTHER TIME WE COULD CALL WHERE THAT WOULD BE POSSIBLE]
Have you received the letter containing $4 in cash and an informed consent statement explaining the study? INTRO1_Letter
a.	 YES [GO TO SURVEY START] 1
b.	 NO [CONTINUE] 2
c.	 DON’T KNOW [CONTINUE] 97
I would be happy to send that to you. May I please confirm your mailing address?
Street 1:
Street 2:
City:
State:
Zip Code:
Country:
[Here, you should verify the address on file.]
•	 YES [Address confirmed]
•	 NO [Address needs to be updated]


[Please make any necessary updates then continue with the interview. Please leave the
Country field blank if it’s a US address.]
I need to give you a few more details about the study before we begin:
The DoD Privacy Advisory states that the Defense Manpower Data Center has
provided certain information about you to allow RAND to conduct this survey. Your
name and contact information have been used to send you notifications and information about this survey. The Defense Manpower Data Center has provided certain
demographic information to reduce the number of questions in the survey and minimize the burden on your time. Your response and demographic data are linked by
RAND to allow for a thorough analysis of the responses by demographics. RAND
has not been authorized by DoD to identify or link survey response and demographic
information with your name and contact information. The resulting reports will not
include analysis of groupings of less than 15.
I also need to share the following information about the study. RAND is a private, nonprofit organization that conducts research and analysis to help improve public
policy and decision making. RAND’s research partner is Westat, an internationally
known research and statistical survey organization. The DoD has funded RAND to
conduct an independent assessment of the military work environment during the past
year. You and other Service members, including all women and approximately 25 percent of men, are being urged to participate in order to ensure that DoD and Congress
have a full understanding of Service members’ experiences. The survey results will have
a direct impact on training, military justice, and services that affect you and other service members.
RAND and Westat will not give the DoD information about who participated
in the study, nor will RAND link your individual responses on this survey with your
name or identity. RAND has also received a federal “Certificate of Confidentiality”
that provides RAND with additional protection against any attempt to subpoena confidential survey records. However, the protections of a Certificate of Confidentiality
are not absolute. If you tell us that a child or elderly person is being abused, or that you
intend to harm yourself or someone else, the researchers may report it to the authorities. The Certificate of Confidentiality is not an endorsement of the project by the U.S.
Department of Health and Human Services.
For most respondents, the survey involves no risks of participation. However, if
you have ever experienced sexual harassment or assault, some questions may cause discomfort or distress. Some questions may be explicit. Therefore, you may prefer to take
the survey in a private setting.
It is important to note that this survey is not a means of making a formal complaint or report that you wish to have DoD act upon. The survey will not collect the
identity of any perpetrators of assault or harassment. Instead, we provide information at
the end of the survey about how you can make a formal report of harassment or assault.


[Survey Start]
Do you have any questions about the study before we begin? S3
a.	 YES [IF QUESTIONS ASKED, CONSULT INFORMED CONSENT
STATEMENT AND FAQs; THEN CONTINUE IF THE SUBJECT
AGREES TO BEGIN THE SURVEY.] 1
b.	 NO [CONTINUE IF THE SUBJECT AGREES TO BEGIN THE
SURVEY.] 2
c.	 NOT A CONVENIENT TIME: When would it be convenient for me to
call back? [GO TO RESULT AND FILL OUT CALLBACK FORM] 3
d.	 REFUSED TO TAKE SURVEY [GO TO RESULT AND ENTER DISPOSITION CODE FOR TYPE OF REFUSAL] 98
Just to make sure you are the person I am supposed to interview, can you tell me
your year of birth?
[YEAR:]
•	 EXACT MATCH [CLICK NEXT]
•	 NOT A MATCH [Thank you. We will check our records again. GO TO
RESULT]
•	 REFUSED [GO TO RESULT]
[END OF SCREENER QUESTIONS]
Let’s begin. Please answer each question thoughtfully and truthfully. This will
allow us to provide an accurate picture of the different experiences of today’s military
members. If you prefer not to answer a specific question for any reason, just let me
know.
During this interview, if you are feeling distressed, please let me know and I will
provide contact information for crisis counselors who will provide you with confidential support and consultation.
[START OF CATI SURVEY]
Are you male or female? Intro1
• MALE 1
• FEMALE 2
• DON’T KNOW 97
• REFUSED 98


[Intro1 will determine wording in items—[brackets] indicate alternative forms. If
Respondent does not provide gender then grab sample gender.]
Thank you. Most of this survey asks about experiences that have happened within
the past 12 months. When answering these questions, please do NOT include any
events that occurred before [Day_of_Week, X date].
Please try to think of any important events in your life that occurred near [X date]
such as birthdays, weddings, or family activities. These events can help you remember
which things happened before [X date] and which happened after as you answer the
rest of the survey questions.
[PAUSE TO GIVE RESPONDENT A MOMENT TO RECALL EVENTS ONE
YEAR AGO]
The following questions will help you think about your life one year ago. Please
answer Yes or No to each.
[IF SERVICE MEMBER HESITATES, SAY: Let me know if you do not remember.]
1.	 Do you currently live in the same house or building that you did on [X
Date]? P1
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
2.	 Are you the same rank today that you were on [X Date]? P2
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
3.	 Are you in the same military occupation today as you were on [X Date]? P3
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98


4.	 Were you on vacation or leave on [X Date]? P4
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
5.	 Were you married or dating someone on [X Date]? P5
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
For the next questions, I will ask you about several things that someone from
work might have done to you that were upsetting or offensive, and that happened
AFTER [X date].
When I say “someone from work,” please include any person you have contact
with as part of your military duties. “Someone from work” could be a supervisor,
someone above or below you in rank, or a civilian employee or contractor. They could
be in your unit or in other units.
These things may have occurred on-duty or off-duty, on-base or off-base. Please
include them as long as the person who did them to you was someone from work.
Remember, all the information you share will be kept confidential. Please answer
Yes or No for each question.
[Programming note: Use gender questions asked at the beginning of the survey to
branch into parallel forms. Brackets within items show which words will be used by
gender of respondent.]
6.	 Since [X Date], did someone from work repeatedly tell sexual “jokes” that
made you uncomfortable, angry, or upset? SH1
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[Programming note: Same sex as respondent]
7.	 Since [X Date], did someone from work embarrass, anger, or upset you by
repeatedly suggesting that you do not act like a [man/woman] is supposed


to? (For example, by calling you [male respondents: “a woman, a fag, or gay”;
female respondents: “a dyke, or butch”]). SH2
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
8.	 Since [X Date], did someone from work repeatedly make sexual gestures
or sexual body movements (for example, thrusting their pelvis or grabbing
their crotch) that made you uncomfortable, angry, or upset? SH3
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
9.	 Since [X Date], did someone from work display, show, or send sexually
explicit materials like pictures or videos that made you uncomfortable,
angry, or upset? SH4
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
10.	 Since [X Date], did someone from work repeatedly tell you about their
sexual activities in a way that made you uncomfortable, angry, or upset?
SH5
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
11.	 Since [X Date], did someone from work repeatedly ask you questions about
your sex life or sexual interests that made you uncomfortable, angry, or
upset? SH6
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97


d.	REFUSED 98
12.	 Since [X Date], did someone from work make repeated sexual comments
about your appearance or body that made you uncomfortable, angry, or
upset? SH7
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
13.	 Since [X Date], did someone from work either take or share sexually suggestive pictures or videos of you when you did not want them to? SH8
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If SH8=2, 97, or 98 (No, Don’t Know, or Refused) then skip to SH9]
14.	 Did this make you uncomfortable, angry, or upset? SH8a
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[As a reminder, when I ask about “Someone from work” I want you to include any
person you have contact with as part of your military duties. “Someone from work”
could be a supervisor, someone above or below you in rank, or a civilian contractor.
They could be in your unit or in other units. These things may have occurred off-duty
or off-base. Please include them as long as the person who did them to you was someone from work.
Continue to answer Yes or No.]
15.	 Since [X Date], did someone from work make repeated attempts to establish an unwanted romantic or sexual relationship with you? [These could
range from repeatedly asking you out for coffee to asking you for sex or a ‘hookup.’] SH9
a.	YES 1


b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If SH9=2, 97, 98 (No, Don’t Know, or Refused) then skip to SH10]
16.	 Did these attempts make you uncomfortable, angry, or upset? SH9a
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
17.	 Since [X Date], did someone from work intentionally touch you in a sexual
way when you did not want them to? [This could include touching your genitals, breasts, buttocks, or touching you with their genitals anywhere on your
body.] SH10
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If SH10=1 (Yes) then Skip to SH12 and PerceivedHostileWorkEnvironment = TRUE]
18.	 Since [X Date], did someone from work repeatedly touch you in any other
way that made you uncomfortable, angry, or upset? [This could include
almost any unnecessary physical contact including hugs, shoulder rubs, or
touching your hair, but would not usually include handshakes or routine uniform adjustments.] SH11
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
19.	 Since [X Date], has someone from work made you feel as if you would
get some workplace benefit in exchange for doing something sexual? [For
example, they might hint that they would give you a good evaluation or fitness
report, a better assignment, or better treatment at work in exchange for doing
something sexual. Something sexual could include talking about sex, undressing, sharing sexual pictures, or having some type of sexual contact.] SH12


a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
20.	 Since [X Date], has someone from work made you feel like you would get
punished or treated unfairly in the workplace if you did not do something
sexual? [For example, they hinted that they would give you a bad evaluation or
fitness report, a bad assignment, or bad treatment at work if you were not willing to do something sexual. This could include being unwilling to talk about
sex, undress, share sexual pictures, or have some type of sexual contact.] SH13
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
21.	 Since [X Date], did you hear someone from work say that [men/women] are
not as good as [women/men] at your particular job, or that [men/women]
should be prevented from having your job? SH14
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
22.	 Since [X Date], do you think someone from work mistreated, ignored,
excluded, or insulted you because you are a [man/woman]? SH15
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
INTRODUCTION TO SA SECTION
[STARTING AT THIS POINT THROUGH THE REST OF THE SURVEY,
ANY REFUSAL (INCLUDING HANG-UPS AND MILD REFUSALS) MUST
BE CODED AS A FINAL REFUSAL.]


Please listen carefully to the following instructions about the next section. The
questions will ask about unwanted experiences of an abusive, humiliating, or sexual
nature. These types of unwanted experiences vary in severity. Some of them could be
viewed as an assault. Others could be viewed as hazing or some other type of unwanted
experience.
They can happen to both women and men.
I want to apologize for some of the graphic words in the next section. I will be
describing things that DoD regulations define with graphic, anatomical language. It is
important that I use the same names for body parts that the DoD uses. This is the best
way to determine whether or not people have had these types of experiences.
When answering these questions, please include experiences no matter who did
it to you or where it happened. It could be done to you by a male or female, Service
member or civilian, someone you knew or a stranger.
Please include experiences even if you or others had been drinking alcohol, using
drugs, or were intoxicated.
The following questions will ask you about events that happened AFTER [X
date].
You do not have to answer any question that you don’t want to answer. Remember, all the information you share will be kept confidential. We will not give your identifiable answers to the DoD.
Please answer Yes or No to the following questions.
117.	Since [X Date], did you have any unwanted experiences in which someone
put his penis into your [If Intro1=2 (Female), display: “vagina,”] anus or
mouth? SA1
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If SA1=1 (Yes) ask “OB1a”, else continue]
118.	Since [X Date], did you have any unwanted experiences in which someone
put any object or any body part other than a penis into your [If Intro1=2
(Female), display: “vagina,”] anus or mouth? [The body part could include a
finger, tongue or testicles.] SA2
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97


d.	REFUSED 98
[If SA2=1 (Yes) and sexualAssault_12m ≠ “True”, then ask “PF2a”, else continue]
119.	Since [X Date], did anyone make you put any part of your body or any
object into someone’s mouth, vagina, or anus when you did not want to?
[A part of the body could include your [If Intro1=1 (Male) display: “penis, testicles,”] tongue or fingers.] SA3
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If SA3=1 (Yes) and sexualAssault_12m ≠ “True”, then ask “PF3a”, else continue]
[Programming note: If sexualAssault_12m = “TRUE” on the basis of follow-ups to
SA1-SA3 then penetrativeSA_12m = “TRUE” else penetrativeSA_12m = “FALSE”]
120.	Since [X Date], did you have any unwanted experiences in which someone intentionally touched private areas of your body (either directly or
through clothing)? [Private areas include buttocks, inner thigh, breasts, groin,
anus, vagina, penis, or testicles.] SA4
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If SA4=1 (Yes) and sexualAssault_12m ≠ “True”, then ask “PF4a”, else continue]
121.	Since [X Date], did you have any unwanted experiences in which someone
made you touch private areas of their body or someone else’s body (either
directly or through clothing)? [This could involve the person putting their
private areas on you. Private areas include buttocks, inner thigh, breasts, groin,
anus, vagina, penis or testicles.] SA5
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98


[If SA5=1 (Yes) and sexualAssault_12m ≠ “True”, then ask “PF5a”, else continue]
[Programming note: If SexualAssault_12m = “TRUE” on the basis of follow-ups to
SA4-SA5 then contactSA_12m = “TRUE” else contactSA_12m = “FALSE”]
122.	Since [X Date], did you have any unwanted experiences in which someone
attempted to put a penis, an object, or any body part into your [If Intro1=2
(Female), display: “vagina,”] anus or mouth, but no penetration actually
occurred? SA6
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If SA6=2, 97, or 98 (No, Don’t know, or Refused) skip to End of Survey]
123.	As part of this attempt, did the person touch you anywhere on your body?
[This includes grabbing your arm, hair or clothes, or pushing their body against
yours.] SA6a
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If sexualAssault_12m ≠ “True”, then ask “PF6a”, else continue]
[Programming note: If sexualAssault_12m = “TRUE” on the basis of follow-ups to SA6
then attemptedSA_12m = “TRUE” else attemptedSA_12m = “FALSE”]
[Purpose Follow Up Module START]
[Purpose Follow Up module: “X” in the question number refers to appropriate SA
screener number (2-6)]
124.	Was this unwanted experience (or any experiences like this if you had
more than one) abusive or humiliating, or intended to be abusive or humiliating? [If you aren’t sure, choose the best answer.] PFXa
a.	YES 1
b.	NO 2


c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If PFXa=1 (Yes) skip to OBX item]
125.	Do you believe the person did it for a sexual reason? [For example, they did
it because they were sexually aroused or to get sexually aroused. If you aren’t
sure, choose the best answer.] PFXb
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If PFXb=1 (Yes) continue to OBX item]
[If PFXb=2, 97, 98 (No, Don’t Know, or Refused) skip to next SA_screener question
(SA3 –SA6)]
[Purpose Follow-Up Module END]
[Offender Behavior Module START]
[Offender Behavior Module: “X” in the question number refers to appropriate screener
number (1-6)]
The following statements are about things that might have happened to you when
you had this experience. In these statements, ‘they’ means the person or people who
did this to you.
Please indicate which of the following happened by answering Yes or No.
126.	They continued even when you told them or showed them that you were
unwilling. OBXa
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If OBXa=1 (Yes) sexualAssault_12m= “TRUE”]


127.	They used physical force to make you comply. [For example, they grabbed
your arm or used their body weight to hold you down.] OBXb
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If OBXb=1 (Yes) sexualAssault_12m = “TRUE”]
128.	They physically injured you. OBXc
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If OBXc=1 (Yes) sexualAssault_12m = “TRUE”]
129.	They threatened to physically hurt you (or someone else). OBXd
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If OBXd=1 (Yes) sexualAssault_12m = “TRUE”]
[IF OBXd=1 (Yes) then ask]
130.	Did they threaten you (or someone else) with a weapon? OBXd_1
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[IF OBXd=1 (Yes) then ask]
131.	Did they threaten to seriously injure, kill, or kidnap you (or someone else)?
OBXd_2
a.	YES 1


b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
132.	They threatened you (or someone else) in some other way. [For example, by
using their position of authority, by spreading lies about you, or by getting you
in trouble with authorities.] OBXe
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If OBXe=1 (Yes) sexualAssault_12m = “TRUE”]
[Please continue to indicate which of the following happened.]
133.	They did it when you were passed out, asleep, or unconscious. OBXf
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If OBXf=1 (Yes) sexualAssault_12m = “TRUE”]
134.	They did it when you were so drunk, high, or drugged that you could not
understand what was happening or could not show them that you were
unwilling. OBXg
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If OBXg=1 (Yes) sexualAssault_12m = “TRUE”]
135.	They tricked you into thinking that they were someone else or that they
were allowed to do it for a professional purpose (like a person pretending
to be a doctor). OBXh
a.	YES 1


b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If OBXh=1 (Yes) sexualAssault_12m = “TRUE”]
[If sexualAssault_12m = TRUE, then skip to next screening item SA2-SA6, else
continue.]
136.	They made you so afraid that you froze and could not tell them or show
them that you were unwilling. OBXi
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If OBXi=1 (Yes) sexualAssault_12m= “TRUE”]
137.	They did it after you had consumed so much alcohol that the next day you
could not remember what happened. OBXj
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If OBXj=1 (Yes) sexualAssault_12m = “TRUE”]
138.	It happened without your consent. OBXk
a.	YES 1
b.	NO 2
c.	 DO NOT REMEMBER 97
d.	REFUSED 98
[If OBXk=1 (Yes) sexualAssault_12m = “TRUE”]
[Offender Behavior Module END]
[After OB1-OB5, continue to next screening item SA2-SA6. After OB6 series questions, continue to END. ]
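
The programming notes above define how the past-year sexual assault flag is derived: a screener item must be endorsed; for SA2 through SA6 the purpose follow-ups (PFXa, PFXb) must establish an abusive or humiliating intent or a sexual reason; and at least one qualifying offender behavior (OBXa through OBXk) must be endorsed. The sketch below restates that branching logic in Python as an illustration; it is not the study's production CATI code, and the function and variable names are ours.

# Minimal sketch of the classification logic in the programming notes above.
# Answers are coded as in the script: 1 = YES, 2 = NO, 97 = DO NOT REMEMBER,
# 98 = REFUSED.
YES = 1

def classify_sexual_assault(screener, pf_a, pf_b, ob_answers):
    """Return True if a screened event qualifies as a past-year sexual assault.

    screener   -- answer to an SA screener item (SA2-SA6 pattern)
    pf_a, pf_b -- purpose follow-ups PFXa (abusive/humiliating) and
                  PFXb (sexual reason)
    ob_answers -- offender-behavior answers OBXa-OBXk, keyed "a" through "k"
    """
    if screener != YES:
        return False
    # Purpose follow-up gate: the event must be abusive/humiliating or done
    # for a sexual reason; otherwise routing skips to the next screener.
    if pf_a != YES and pf_b != YES:
        return False
    # Any single qualifying offender behavior sets sexualAssault_12m = TRUE.
    return any(ob_answers.get(key) == YES for key in "abcdefghijk")

# Example: unwanted sexual touching (SA4 = YES), intended to be humiliating,
# where the person continued despite being shown unwillingness (OB4a = YES).
print(classify_sexual_assault(YES, pf_a=YES, pf_b=2, ob_answers={"a": YES}))  # True

In the live instrument, the same routing also records which screener triggered the classification, setting the penetrativeSA_12m, contactSA_12m, or attemptedSA_12m subtype flag accordingly.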


Thank you for completing the survey with me today. This information will
help improve the climate and safety of the U.S. military.
I understand that answering questions like the ones in this survey can be
upsetting. If you feel you need support or would like to talk to someone, you
can call:
the DoD Safe Helpline number [877-995-5247],
the Military Crisis Line [1-800-273-8255],
or the Rape, Abuse and Incest National Network [1-800-656-HOPE].
A DoD Safe Helpline counselor can also explain how to report a sexual
assault and how to find out the current status of a sexual assault report.
Would you like the telephone numbers for any of those organizations?

APPENDIX B

Mail Survey (Male and Female Respondent Versions)

This appendix contains the male and female versions of the mailed RMWS survey.


2014 RAND Military Workplace Study

PRIVACY ADVISORY
The Defense Manpower Data Center has provided certain information about you to allow
RAND to conduct this survey. Your name and contact information have been used to send
you notifications and information about this survey. The Defense Manpower Data Center
has provided certain demographic information to reduce the number of questions in the
survey and minimize the burden on your time. Your response and demographic data are
linked by RAND to allow for a thorough analysis of the responses by the demographics.
RAND has not been authorized by DoD to identify or link survey response and demographic
information with your name and contact information. The resulting reports will not include
analysis of groupings of less than 15.

RCS#: DD-P&R(QD)1947
Expires: 7/25/2015


Before you begin this survey, please read the informed consent statement that follows.
INFORMED CONSENT STATEMENT
Introduction: The RAND Corporation and Westat are conducting a survey that asks about whether or not you have experienced
harassment, discrimination, or inappropriate sexual behavior. We need your responses whether or not you have had these
experiences. RAND is a private, nonprofit organization that conducts research and analysis to help improve public policy and
decision making. RAND’s research partner is Westat, an internationally known research and statistical survey organization.
Purpose: The Department of Defense (DoD) and Congress are working to understand the full extent of harassment and assault in the
military and whether current efforts to reduce them are helping. The DoD has funded RAND to conduct an independent assessment
of the military work environment during the past year. You and other Service members, including all women and approximately 25%
of men, are being urged to participate in order to ensure that DoD and Congress have a full understanding of Service member
experiences. The survey results will have a direct impact on training, military justice, and services that affect you and other Service
members.
Survey Length: This survey will take about 5 minutes to complete.
Incentive: To thank you in advance for participating in our mail survey, we enclosed $4.00 in cash with the first survey packet we
mailed you. This is a small token of our appreciation for your time and support.
Voluntary Participation: Your participation is completely voluntary, and you may stop at any time. You can skip any question you
don't want to answer.
Privacy: RAND will not give the DoD information about who participated in the study, nor will RAND link your individual responses
on this survey with your name or identity. DoD has agreed to this condition to protect your privacy. RAND also received a federal
“Certificate of Confidentiality” that provides RAND with additional protection against any attempt to subpoena confidential survey
records. This Certificate helps ensure the confidentiality of your information by protecting the researchers from being forced to
release information that might identify you, even under a court order or subpoena. However, the protections of a Certificate of
Confidentiality are not absolute. If you tell us that a child or elderly person is being abused, or that you intend to harm yourself or
someone else, the researchers may report it to the authorities. The Certificate of Confidentiality is not an endorsement of the
project by the U.S. Department of Health and Human Services.
Added Protection Procedures: Only members of the RAND-Westat study team will have access to your individual responses, and we
will take great care to protect your privacy and data. For example, RAND will collapse some categories or ranges of potentially
identifying variables to prevent identification by inference. Study staff members have been trained to deidentify data to protect your
identity and are subject to civil penalties for violating your confidentiality. Our research team has a number of safeguarding
procedures in place to ensure that survey data are protected from accidental disclosure.
Risks of Participation: For most respondents, the survey involves no risks of participation. However, if you have ever experienced
sexual harassment or assault, some questions may cause discomfort or distress. Some questions may be explicit; therefore, you may
prefer to take the survey in a private setting.
Reporting Harassment or Assault: It is important to note that this survey is not a means of making a formal complaint or report that
you wish to have DoD act upon. The survey will not collect the identity of any perpetrators of assault or harassment. Instead, we
provide information below and at the end of the survey about how you can make a formal report of harassment or assault.
Resources Available to You: If you need resources or assistance, the DoD Safe Helpline (https://www.safehelpline.org/) provides
worldwide live, confidential support, 24/7. You can initiate a report or search for your nearest Sexual Assault Response Coordinator
(SARC). You can find links to Service-specific reporting resources and access information about the prevention of and response to
sexual assault on their website or by calling the hotline at 1-877-995-5247.
Some questions in the survey may ask about upsetting experiences. If you feel distressed, for confidential support and consultation
you can contact the Military Crisis Line (http://veteranscrisisline.net/ActiveDuty.aspx) or call them at 1-800-273-8255 (then press 1).
Who do you contact if you have questions or concerns about the survey?
• RAND for questions about the overall study: Contact the RAND team by email at [email protected] or go to the RAND
website www.WGRS2014.rand.org
• Westat Survey Helpdesk for computer, technical, or survey questions: By telephone toll free at 1-855-365-5914 (OCONUS call
collect 240-453-2620) or by email at [email protected].
• Questions about your rights as a participant in this study: Contact the RAND Human Subjects Protection Committee at 310-393-0411, ext. 6369, in Santa Monica, California.
• Questions about the licensing of the survey: Information about DoD surveys can be found at
http://www.dtic.mil/whs/directives/corres/intinfocollections/iic_search.html. This survey’s RCS is DD-P&R(QD)1947, Expiration
July 25, 2015.
If you would like to participate in this survey, fill out the survey booklet and mail it to us in the enclosed pre-addressed privacy
envelope. No postage is needed if you use the return envelope provided.


2014 RAND Military Workplace Study

Survey Introduction: Some of the questions in this survey will be personal. Please answer each
question thoughtfully and truthfully. This will allow us to provide an accurate picture of the
different experiences of today’s military members. If you prefer not to answer a specific question
for any reason, just leave it blank. For your privacy, you may want to take this survey where other
people won’t see your answers. Thank you for agreeing to participate in this important study.
Marking Instructions:
• Please use a black or blue pen to complete this form.
• Mark ⊠ to indicate your answer. If you want to change your answer, darken the box ⬛ and mark the correct answer.

Start Here

Please try to think of any important events in your life that occurred near 9/1/2013 such as birthdays, weddings, or family activities. These events can help you remember which things happened before 9/1/2013 and which happened after as you answer the rest of the survey questions.

The following questions will help you think about your life one year ago.

1. Are you in the same military occupation today as you were on 9/1/2013?
Yes
No
Do not remember

2. Were you on vacation or leave on 9/1/2013?
Yes
No
Do not remember

3. Are you the same rank today that you were on 9/1/2013?
Yes
No
Do not remember

4. Do you currently live in the same house or building that you did on 9/1/2013?
Yes
No
Do not remember

5. Were you married or dating someone on 9/1/2013?
Yes
No
Do not remember


In this section, you will be asked about several things that someone from work might have done to you that were
upsetting or offensive, and that happened AFTER 9/1/2013.
When the questions say "someone from work," please include any person you have contact with as part of your
military duties. "Someone from work" could be a supervisor, someone above or below you in rank, or a civilian
employee/contractor. They could be in your unit or in other units.
These things may have occurred on-duty or off-duty, on-base or off-base. Please include them as long as the
person who did them to you was someone from work.
Remember, all the information you share will be kept confidential.
6. Since 9/1/2013, did someone from work repeatedly tell sexual "jokes" that made you uncomfortable, angry, or upset?
Yes
No

7. Since 9/1/2013, did someone from work embarrass, anger, or upset you by repeatedly suggesting that you do not act like a man is supposed to? For example, by calling you "a woman, a fag, or gay."
Yes
No

8. Since 9/1/2013, did someone from work repeatedly make sexual gestures or sexual body movements (for example, thrusting their pelvis or grabbing their crotch) that made you uncomfortable, angry, or upset?
Yes
No

9. Since 9/1/2013, did someone from work display, show, or send sexually explicit materials like pictures or videos that made you uncomfortable, angry, or upset?
Yes
No

10. Since 9/1/2013, did someone from work repeatedly tell you about their sexual activities in a way that made you uncomfortable, angry, or upset?
Yes
No

11. Since 9/1/2013, did someone from work repeatedly ask you questions about your sex life or sexual interests that made you uncomfortable, angry, or upset?
Yes
No

12. Since 9/1/2013, did someone from work make repeated sexual comments about your appearance or body that made you uncomfortable, angry, or upset?
Yes
No

13. Since 9/1/2013, did someone from work either take or share sexually suggestive pictures or videos of you when you did not want them to?
Yes
No → Go to Question 14

13a. Did this make you uncomfortable, angry, or upset?
Yes
No

14. Since 9/1/2013, did someone from work make repeated attempts to establish an unwanted romantic or sexual relationship with you? These could range from repeatedly asking you out for coffee to asking you for sex or a 'hook-up.'
Yes
No → Go to Question 15, page 3

14a. Did these attempts make you uncomfortable, angry, or upset?
Yes
No


"Someone from work" includes any person you have contact with as part of your military duties. "Someone from
work" could be a supervisor, someone above or below you in rank, or a civilian employee/contractor. They could
be in your unit or in other units.
These things may have occurred off-duty or off-base. Please include them as long as the person who did them to
you was someone from work.
Remember, all the information you share will be kept confidential.

15. Since 9/1/2013, did someone from work intentionally touch you in a sexual way when you did not want them to? This could include touching your genitals, breasts, buttocks, or touching you with their genitals anywhere on your body.
Yes → Go to Question 17
No

16. Since 9/1/2013, did someone from work repeatedly touch you in any other way that made you uncomfortable, angry, or upset?
This could include almost any unnecessary physical contact including hugs, shoulder rubs, or touching your hair, but would not usually include handshakes or routine uniform adjustments.
Yes
No

17. Since 9/1/2013, has someone from work made you feel as if you would get some workplace benefit in exchange for doing something sexual?
For example, they might hint that they would give you a good evaluation/fitness report, a better assignment, or better treatment at work in exchange for doing something sexual. Something sexual could include talking about sex, undressing, sharing sexual pictures, or having some type of sexual contact.
Yes
No

18. Since 9/1/2013, has someone from work made you feel like you would get punished or treated unfairly in the workplace if you did not do something sexual?
For example, they hinted that they would give you a bad evaluation/fitness report, a bad assignment, or bad treatment at work if you were not willing to do something sexual. This could include being unwilling to talk about sex, undress, share sexual pictures, or have some type of sexual contact.
Yes
No

19. Since 9/1/2013, did you hear someone from work say that men are not as good as women at your particular job, or that men should be prevented from having your job?
Yes
No

20. Since 9/1/2013, do you think someone from work mistreated, ignored, excluded, or insulted you because you are a man?
Yes
No


Please read the following special instructions before continuing the survey.
Questions in this next section ask about unwanted experiences of an abusive, humiliating, or sexual nature.
These types of unwanted experiences vary in severity. Some of them could be viewed as an assault. Others could
be viewed as hazing or some other type of unwanted experience.
They can happen to both women and men.
Some of the language may seem graphic, but using the names of specific body parts is the best way to
determine whether or not people have had these types of experiences.
When answering these questions, please include experiences no matter who did it to you or where it happened.
It could be done to you by a male or female, Service member or civilian, someone you knew or a stranger.
Please include experiences even if you or others had been drinking alcohol, using drugs, or were intoxicated.
The following questions will ask you about events that happened AFTER 9/1/2013.
Remember, all the information you share will be kept confidential. RAND will not give your identifiable answers
to the DoD.
21. Since 9/1/2013, did you have any unwanted experiences in which someone put his penis into your anus or mouth?
Yes
No → Go to Question 31, page 5

The following statements are about things that might have happened to you when you had this experience. In these statements, ‘they’ means the person or people who did this to you.
Please indicate which of the following happened. (Mark Yes or No for each.)

22. They continued even when you told them or showed them that you were unwilling.
23. They used physical force to make you comply. For example, they grabbed your arm or used their body weight to hold you down.
24. They physically injured you.
25. They threatened to physically hurt you (or someone else).
26. They threatened you (or someone else) in some other way. For example, by using their position of authority, by spreading lies about you, or by getting you in trouble with authorities.
27. They did it when you were passed out, asleep, or unconscious.
28. They did it when you were so drunk, high, or drugged that you could not understand what was happening or could not show them that you were unwilling.
29. They tricked you into thinking that they were someone else or that they were allowed to do it for a professional purpose (like a person pretending to be a doctor).

30. Did you answer "Yes" to any question from 22 to 29?
Yes → Go to End of Survey, page 10
No → Go to Question 31, page 5


31. Since 9/1/2013, did you have any unwanted experiences in which someone put any object or any body part other than a penis into your anus or mouth? The body part could include a finger, tongue, or testicles.
Yes
No → Go to Question 41, page 6

31a. Was this unwanted experience (or any experiences like this if you had more than one) abusive or humiliating, or intended to be abusive or humiliating? If you aren’t sure, choose the best answer.
Yes
No

31b. Do you believe the person did it for a sexual reason? For example, they did it because they were sexually aroused or to get sexually aroused. If you aren’t sure, choose the best answer.
Yes
No

31c. Did you answer "Yes" to either Question 31a or 31b?
Yes → Continue to Question 32
No → Go to Question 41, page 6

The following statements are about things that might have happened to you when you had this experience. In these statements, ‘they’ means the person or people who did this to you.
Please indicate which of the following happened. (Mark Yes or No for each.)

32. They continued even when you told them or showed them that you were unwilling.
33. They used physical force to make you comply. For example, they grabbed your arm or used their body weight to hold you down.
34. They physically injured you.
35. They threatened to physically hurt you (or someone else).
36. They threatened you (or someone else) in some other way. For example, by using their position of authority, by spreading lies about you, or by getting you in trouble with authorities.
37. They did it when you were passed out, asleep, or unconscious.
38. They did it when you were so drunk, high, or drugged that you could not understand what was happening or could not show them that you were unwilling.
39. They tricked you into thinking that they were someone else or that they were allowed to do it for a professional purpose (like a person pretending to be a doctor).

40. Did you answer "Yes" to any question from 32 to 39?
Yes → Go to End of Survey, page 10
No → Go to Question 41, page 6


41. Since 9/1/2013, did anyone make you put any part of your body or any object into someone’s mouth, vagina, or anus when you did not want to? A part of the body could include your penis, testicles, tongue, or fingers.
Yes
No → Go to Question 51, page 7

41a. Was this unwanted experience (or any experiences like this if you had more than one) abusive or humiliating, or intended to be abusive or humiliating? If you aren’t sure, choose the best answer.
Yes
No

41b. Do you believe the person did it for a sexual reason? For example, they did it because they were sexually aroused or to get sexually aroused. If you aren’t sure, choose the best answer.
Yes
No

41c. Did you answer "Yes" to either Question 41a or 41b?
Yes → Continue to Question 42
No → Go to Question 51, page 7

The following statements are about things that might have happened to you when you had this experience. In these statements, ‘they’ means the person or people who did this to you.
Please indicate which of the following happened. (Mark Yes or No for each.)

42. They continued even when you told them or showed them that you were unwilling.
43. They used physical force to make you comply. For example, they grabbed your arm or used their body weight to hold you down.
44. They physically injured you.
45. They threatened to physically hurt you (or someone else).
46. They threatened you (or someone else) in some other way. For example, by using their position of authority, by spreading lies about you, or by getting you in trouble with authorities.
47. They did it when you were passed out, asleep, or unconscious.
48. They did it when you were so drunk, high, or drugged that you could not understand what was happening or could not show them that you were unwilling.
49. They tricked you into thinking that they were someone else or that they were allowed to do it for a professional purpose (like a person pretending to be a doctor).

50. Did you answer "Yes" to any question from 42 to 49?
Yes → Go to End of Survey, page 10
No → Go to Question 51, page 7


51. Since 9/1/2013, did you have any unwanted experiences in which someone intentionally touched private areas of your body (either directly or through clothing)? Private areas include buttocks, inner thigh, breasts, groin, anus, vagina, penis, or testicles.
Yes
No → Go to Question 61, page 8

51a. Was this unwanted experience (or any experiences like this if you had more than one) abusive or humiliating, or intended to be abusive or humiliating? If you aren’t sure, choose the best answer.
Yes
No

51b. Do you believe the person did it for a sexual reason? For example, they did it because they were sexually aroused or to get sexually aroused. If you aren’t sure, choose the best answer.
Yes
No

51c. Did you answer "Yes" to either Question 51a or 51b?
Yes → Continue to Question 52
No → Go to Question 61, page 8

The following statements are about things that might have happened to you when you had this experience. In these statements, ‘they’ means the person or people who did this to you.
Please indicate which of the following happened. (Mark Yes or No for each.)

52. They continued even when you told them or showed them that you were unwilling.
53. They used physical force to make you comply. For example, they grabbed your arm or used their body weight to hold you down.
54. They physically injured you.
55. They threatened to physically hurt you (or someone else).
56. They threatened you (or someone else) in some other way. For example, by using their position of authority, by spreading lies about you, or by getting you in trouble with authorities.
57. They did it when you were passed out, asleep, or unconscious.
58. They did it when you were so drunk, high, or drugged that you could not understand what was happening or could not show them that you were unwilling.
59. They tricked you into thinking that they were someone else or that they were allowed to do it for a professional purpose (like a person pretending to be a doctor).

60. Did you answer "Yes" to any question from 52 to 59?
Yes → Go to End of Survey, page 10
No → Go to Question 61, page 8


61. Since 9/1/2013, did you have any unwanted experiences in which someone made you touch private areas of their body or someone else’s body (either directly or through clothing)? This could involve the person putting their private areas on you. Private areas include buttocks, inner thigh, breasts, groin, anus, vagina, penis, or testicles.
Yes
No → Go to Question 71, page 9

61a. Was this unwanted experience (or any experiences like this if you had more than one) abusive or humiliating, or intended to be abusive or humiliating? If you aren’t sure, choose the best answer.
Yes
No

61b. Do you believe the person did it for a sexual reason? For example, they did it because they were sexually aroused or to get sexually aroused. If you aren’t sure, choose the best answer.
Yes
No

61c. Did you answer "Yes" to either Question 61a or 61b?
Yes → Continue to Question 62
No → Go to Question 71, page 9

The following statements are about things that might have happened to you when you had this experience. In these statements, ‘they’ means the person or people who did this to you.
Please indicate which of the following happened. (Mark Yes or No for each.)

62. They continued even when you told them or showed them that you were unwilling.
63. They used physical force to make you comply. For example, they grabbed your arm or used their body weight to hold you down.
64. They physically injured you.
65. They threatened to physically hurt you (or someone else).
66. They threatened you (or someone else) in some other way. For example, by using their position of authority, by spreading lies about you, or by getting you in trouble with authorities.
67. They did it when you were passed out, asleep, or unconscious.
68. They did it when you were so drunk, high, or drugged that you could not understand what was happening or could not show them that you were unwilling.
69. They tricked you into thinking that they were someone else or that they were allowed to do it for a professional purpose (like a person pretending to be a doctor).

70. Did you answer "Yes" to any question from 62 to 69?
Yes → Go to End of Survey, page 10
No → Go to Question 71, page 9


71. Since 9/1/2013, did you have any unwanted experiences in which someone attempted to put a penis, an object, or any body part into your anus or mouth, but no penetration actually occurred?
Yes
No → Go to End of Survey, page 10

71a. Was this unwanted experience (or any experiences like this if you had more than one) abusive or humiliating, or intended to be abusive or humiliating? If you aren’t sure, choose the best answer.
Yes
No

71b. Do you believe the person did it for a sexual reason? For example, they did it because they were sexually aroused or to get sexually aroused. If you aren’t sure, choose the best answer.
Yes
No

71c. Did you answer "Yes" to either Question 71a or 71b?
Yes → Continue to Question 72
No → Go to End of Survey, page 10

The following statements are about things that might have happened to you when you had this experience. In these statements, ‘they’ means the person or people who did this to you.
Please indicate which of the following happened. (Mark Yes or No for each.)

72. They continued even when you told them or showed them that you were unwilling.
73. They used physical force to make you comply. For example, they grabbed your arm or used their body weight to hold you down.
74. They physically injured you.
75. They threatened to physically hurt you (or someone else).
76. They threatened you (or someone else) in some other way. For example, by using their position of authority, by spreading lies about you, or by getting you in trouble with authorities.
77. They did it when you were passed out, asleep, or unconscious.
78. They did it when you were so drunk, high, or drugged that you could not understand what was happening or could not show them that you were unwilling.
79. They tricked you into thinking that they were someone else or that they were allowed to do it for a professional purpose (like a person pretending to be a doctor).


End of Survey
This information will help improve the climate and safety of the U.S. military. You may
have found that the questions did not completely cover your experiences. Nonetheless,
the answers you provided are very important to this study.
Sometimes answering questions like the ones on this survey can be upsetting. If you feel
you need support or would like to talk to someone, you can call:
• DoD Safe Helpline number (877-995-5247)
• Military Crisis Line (1-800-273-8255)
• RAINN (1-800-656-HOPE)

A Safe Helpline counselor can also explain how to report a sexual assault and how to
find out the current status of a sexual assault report.

Thank you for completing the survey.
Please return your survey using the enclosed postage-paid
envelope. No postage is needed.
If your return envelope has been misplaced, please mail your survey to:
2014 RAND Military Workplace Study
Westat 6236.02.14
1600 Research Blvd, RW 2634
Rockville, Maryland 20850-9973
Westat Survey Helpdesk toll free number: 1-855-365-5914
(OCONUS please call collect: 240-453-2620)


2014 RAND Military Workplace Study

PRIVACY ADVISORY
The Defense Manpower Data Center has provided certain information about you to allow
RAND to conduct this survey. Your name and contact information have been used to send
you notifications and information about this survey. The Defense Manpower Data Center has
provided certain demographic information to reduce the number of questions in the survey
and minimize the burden on your time. Your response and demographic data are linked by
RAND to allow for a thorough analysis of the responses by the demographics. RAND has not
been authorized by DoD to identify or link survey response and demographic information
with your name and contact information. The resulting reports will not include analysis of
groupings of less than 15.

RCS#: DD-P&R(QD)1947
Expires: 7/25/2015


Before you begin this survey, please read the informed consent statement that follows.
INFORMED CONSENT STATEMENT
Introduction: The RAND Corporation and Westat are conducting a survey that asks about whether or not you have experienced
harassment, discrimination, or inappropriate sexual behavior. We need your responses whether or not you have had these
experiences. RAND is a private, nonprofit organization that conducts research and analysis to help improve public policy and
decision making. RAND’s research partner is Westat, an internationally known research and statistical survey organization.
Purpose: The Department of Defense (DoD) and Congress are working to understand the full extent of harassment and assault in the
military and whether current efforts to reduce them are helping. The DoD has funded RAND to conduct an independent assessment
of the military work environment during the past year. You and other Service members, including all women and approximately 25%
of men, are being urged to participate in order to ensure that DoD and Congress have a full understanding of Service member
experiences. The survey results will have a direct impact on training, military justice, and services that affect you and other Service
members.
Survey Length: This survey will take about 5 minutes to complete.
Incentive: To thank you in advance for participating in our mail survey, we enclosed $4.00 in cash with the first survey packet we
mailed you. This is a small token of our appreciation for your time and support.
Voluntary Participation: Your participation is completely voluntary, and you may stop at any time. You can skip any question you
don't want to answer.
Privacy: RAND will not give the DoD information about who participated in the study, nor will RAND link your individual responses
on this survey with your name or identity. DoD has agreed to this condition to protect your privacy. RAND also received a federal
“Certificate of Confidentiality” that provides RAND with additional protection against any attempt to subpoena confidential survey
records. This Certificate helps ensure the confidentiality of your information by protecting the researchers from being forced to
release information that might identify you, even under a court order or subpoena. However, the protections of a Certificate of
Confidentiality are not absolute. If you tell us that a child or elderly person is being abused, or that you intend to harm yourself or
someone else, the researchers may report it to the authorities. The Certificate of Confidentiality is not an endorsement of the
project by the U.S. Department of Health and Human Services.
Added Protection Procedures: Only members of the RAND-Westat study team will have access to your individual responses, and we
will take great care to protect your privacy and data. For example, RAND will collapse some categories or ranges of potentially
identifying variables to prevent identification by inference. Study staff members have been trained to deidentify data to protect your
identity and are subject to civil penalties for violating your confidentiality. Our research team has a number of safeguarding
procedures in place to ensure that survey data are protected from accidental disclosure.
Risks of Participation: For most respondents, the survey involves no risks of participation. However, if you have ever experienced
sexual harassment or assault, some questions may cause discomfort or distress. Some questions may be explicit; therefore, you may
prefer to take the survey in a private setting.
Reporting Harassment or Assault: It is important to note that this survey is not a means of making a formal complaint or report that
you wish to have DoD act upon. The survey will not collect the identity of any perpetrators of assault or harassment. Instead, we
provide information below and at the end of the survey about how you can make a formal report of harassment or assault.
Resources Available to You: If you need resources or assistance, the DoD Safe Helpline (https://www.safehelpline.org/) provides
worldwide live, confidential support, 24/7. You can initiate a report or search for your nearest Sexual Assault Response Coordinator
(SARC). You can find links to Service-specific reporting resources and access information about the prevention of and response to
sexual assault on their website or by calling the hotline at 1-877-995-5247.
Some questions in the survey may ask about upsetting experiences. If you feel distressed, for confidential support and consultation
you can contact the Military Crisis Line (http://veteranscrisisline.net/ActiveDuty.aspx) or call them at 1-800-273-8255 (then press 1).
Who do you contact if you have questions or concerns about the survey?
• RAND for questions about the overall study: Contact the RAND team by email at [email protected] or go to the RAND website www.WGRS2014.rand.org
• Westat Survey Helpdesk for computer, technical, or survey questions: By telephone toll free at 1-855-365-5914 (OCONUS call collect 240-453-2620) or by email at [email protected].
• Questions about your rights as a participant in this study: Contact the RAND Human Subjects Protection Committee at 310-393-0411, ext. 6369, in Santa Monica, California.
• Questions about the licensing of the survey: Information about DoD surveys can be found at http://www.dtic.mil/whs/directives/corres/intinfocollections/iic_search.html. This survey’s RCS is DD-P&R(QD)1947, Expiration July 25, 2015.
If you would like to participate in this survey, fill out the survey booklet and mail it to us in the enclosed pre-addressed privacy
envelope. No postage is needed if you use the return envelope provided.


2014 RAND Military Workplace Study

Survey Introduction: Some of the questions in this survey will be personal. Please answer each
question thoughtfully and truthfully. This will allow us to provide an accurate picture of the
different experiences of today’s military members. If you prefer not to answer a specific question
for any reason, just leave it blank. For your privacy, you may want to take this survey where other
people won’t see your answers. Thank you for agreeing to participate in this important study.
Marking Instructions:
• Please use a black or blue pen to complete this form.
• Mark the box to indicate your answer. If you want to change your answer, darken the box
  and mark the correct answer.

Start Here

Please try to think of any important events in your life that occurred near 9/1/2013 such as
birthdays, weddings, or family activities. These events can help you remember which things
happened before 9/1/2013 and which happened after as you answer the rest of the survey
questions.

The following questions will help you think about your life one year ago.

1. Are you in the same military occupation today as you were on 9/1/2013?
    Yes / No / Do not remember
2. Were you on vacation or leave on 9/1/2013?
    Yes / No / Do not remember
3. Are you the same rank today that you were on 9/1/2013?
    Yes / No / Do not remember
4. Do you currently live in the same house or building that you did on 9/1/2013?
    Yes / No / Do not remember
5. Were you married or dating someone on 9/1/2013?
    Yes / No / Do not remember


In this section, you will be asked about several things that someone from work might have done to you that were
upsetting or offensive, and that happened AFTER 9/1/2013.
When the questions say "someone from work," please include any person you have contact with as part of your
military duties. "Someone from work" could be a supervisor, someone above or below you in rank, or a civilian
employee/contractor. They could be in your unit or in other units.
These things may have occurred on-duty or off-duty, on-base or off-base. Please include them as long as the
person who did them to you was someone from work.
Remember, all the information you share will be kept confidential.
6. Since 9/1/2013, did someone from work repeatedly tell sexual "jokes" that made you
   uncomfortable, angry, or upset?
    Yes / No
7. Since 9/1/2013, did someone from work embarrass, anger, or upset you by repeatedly
   suggesting that you do not act like a woman is supposed to? For example, by calling you
   "a dyke" or "butch."
    Yes / No
8. Since 9/1/2013, did someone from work repeatedly make sexual gestures or sexual body
   movements (for example, thrusting their pelvis or grabbing their crotch) that made you
   uncomfortable, angry, or upset?
    Yes / No
9. Since 9/1/2013, did someone from work display, show, or send sexually explicit materials
   like pictures or videos that made you uncomfortable, angry, or upset?
    Yes / No
10. Since 9/1/2013, did someone from work repeatedly tell you about their sexual activities
    in a way that made you uncomfortable, angry, or upset?
    Yes / No
11. Since 9/1/2013, did someone from work repeatedly ask you questions about your sex life
    or sexual interests that made you uncomfortable, angry, or upset?
    Yes / No
12. Since 9/1/2013, did someone from work make repeated sexual comments about your
    appearance or body that made you uncomfortable, angry, or upset?
    Yes / No
13. Since 9/1/2013, did someone from work either take or share sexually suggestive pictures
    or videos of you when you did not want them to?
    Yes
    No → Go to Question 14
    13a. Did this make you uncomfortable, angry, or upset?
        Yes / No
14. Since 9/1/2013, did someone from work make repeated attempts to establish an unwanted
    romantic or sexual relationship with you? These could range from repeatedly asking you
    out for coffee to asking you for sex or a 'hook-up.'
    Yes
    No → Go to Question 15, page 3
    14a. Did these attempts make you uncomfortable, angry, or upset?
        Yes / No


"Someone from work" includes any person you have contact with as part of your military duties. "Someone from
work" could be a supervisor, someone above or below you in rank, or a civilian employee/contractor. They could
be in your unit or in other units.
These things may have occurred off-duty or off-base. Please include them as long as the person who did them to
you was someone from work.
Remember, all the information you share will be kept confidential.

15. Since 9/1/2013, did someone from work intentionally touch you in a sexual way when you
    did not want them to? This could include touching your genitals, breasts, buttocks, or
    touching you with their genitals anywhere on your body.
    Yes
    No → Go to Question 17
16. Since 9/1/2013, did someone from work repeatedly touch you in any other way that made
    you uncomfortable, angry, or upset? This could include almost any unnecessary physical
    contact including hugs, shoulder rubs, or touching your hair, but would not usually
    include handshakes or routine uniform adjustments.
    Yes / No
17. Since 9/1/2013, has someone from work made you feel as if you would get some workplace
    benefit in exchange for doing something sexual? For example, they might hint that they
    would give you a good evaluation/fitness report, a better assignment, or better
    treatment at work in exchange for doing something sexual. Something sexual could include
    talking about sex, undressing, sharing sexual pictures, or having some type of sexual
    contact.
    Yes / No
18. Since 9/1/2013, has someone from work made you feel like you would get punished or
    treated unfairly in the workplace if you did not do something sexual? For example, they
    hinted that they would give you a bad evaluation/fitness report, a bad assignment, or
    bad treatment at work if you were not willing to do something sexual. This could include
    being unwilling to talk about sex, undress, share sexual pictures, or have some type of
    sexual contact.
    Yes / No
19. Since 9/1/2013, did you hear someone from work say that women are not as good as men at
    your particular job, or that women should be prevented from having your job?
    Yes / No
20. Since 9/1/2013, do you think someone from work mistreated, ignored, excluded, or
    insulted you because you are a woman?
    Yes / No


Please read the following special instructions before continuing the survey.
Questions in this next section ask about unwanted experiences of an abusive, humiliating, or sexual nature.
These types of unwanted experiences vary in severity. Some of them could be viewed as an assault. Others could
be viewed as hazing or some other type of unwanted experience.
They can happen to both women and men.
Some of the language may seem graphic, but using the names of specific body parts is the best way to
determine whether or not people have had these types of experiences.
When answering these questions, please include experiences no matter who did it to you or where it happened.
It could be done to you by a male or female, Service member or civilian, someone you knew or a stranger.
Please include experiences even if you or others had been drinking alcohol, using drugs, or were intoxicated.
The following questions will ask you about events that happened AFTER 9/1/2013.
Remember, all the information you share will be kept confidential. RAND will not give your identifiable answers
to the DoD.
21. Since 9/1/2013, did you have any unwanted experiences in which someone put his penis
    into your vagina, anus, or mouth?
    Yes
    No → Go to Question 31, page 5

The following statements are about things that might have happened to you when you had this
experience. In these statements, ‘they’ means the person or people who did this to you.

Please indicate which of the following happened. (Yes / No for each)

22. They continued even when you told them or showed them that you were unwilling.
23. They used physical force to make you comply. For example, they grabbed your arm or used
    their body weight to hold you down.
24. They physically injured you.
25. They threatened to physically hurt you (or someone else).
26. They threatened you (or someone else) in some other way. For example, by using their
    position of authority, by spreading lies about you, or by getting you in trouble with
    authorities.
27. They did it when you were passed out, asleep, or unconscious.
28. They did it when you were so drunk, high, or drugged that you could not understand what
    was happening or could not show them that you were unwilling.
29. They tricked you into thinking that they were someone else or that they were allowed to
    do it for a professional purpose (like a person pretending to be a doctor).

30. Did you answer "Yes" to any question from 22 to 29?
    Yes → Go to End of Survey, page 10
    No → Go to Question 31, page 5


31. Since 9/1/2013, did you have any unwanted experiences in which someone put any object or
    any body part other than a penis into your vagina, anus, or mouth? The body part could
    include a finger, tongue, or testicles.
    Yes
    No → Go to Question 41, page 6

31a. Was this unwanted experience (or any experiences like this if you had more than one)
     abusive or humiliating, or intended to be abusive or humiliating? If you aren’t sure,
     choose the best answer.
    Yes
    No

31b. Do you believe the person did it for a sexual reason? For example, they did it because
     they were sexually aroused or to get sexually aroused. If you aren’t sure, choose the
     best answer.
    Yes
    No

31c. Did you answer "Yes" to either Question 31a or 31b?
    Yes → Continue to next column
    No → Go to Question 41, page 6

The following statements are about things that might have happened to you when you had this
experience. In these statements, ‘they’ means the person or people who did this to you.

Please indicate which of the following happened. (Yes / No for each)

32. They continued even when you told them or showed them that you were unwilling.
33. They used physical force to make you comply. For example, they grabbed your arm or used
    their body weight to hold you down.
34. They physically injured you.
35. They threatened to physically hurt you (or someone else).
36. They threatened you (or someone else) in some other way. For example, by using their
    position of authority, by spreading lies about you, or by getting you in trouble with
    authorities.
37. They did it when you were passed out, asleep, or unconscious.
38. They did it when you were so drunk, high, or drugged that you could not understand what
    was happening or could not show them that you were unwilling.
39. They tricked you into thinking that they were someone else or that they were allowed to
    do it for a professional purpose (like a person pretending to be a doctor).

40. Did you answer "Yes" to any question from 32 to 39?
    Yes → Go to End of Survey, page 10
    No → Go to Question 41, page 6


41. Since 9/1/2013, did anyone make you put any part of your body or any object into
    someone's mouth, vagina, or anus when you did not want to? A part of the body could
    include your tongue or fingers.
    Yes
    No → Go to Question 51, page 7

41a. Was this unwanted experience (or any experiences like this if you had more than one)
     abusive or humiliating, or intended to be abusive or humiliating? If you aren’t sure,
     choose the best answer.
    Yes
    No

41b. Do you believe the person did it for a sexual reason? For example, they did it because
     they were sexually aroused or to get sexually aroused. If you aren’t sure, choose the
     best answer.
    Yes
    No

41c. Did you answer "Yes" to either Question 41a or 41b?
    Yes → Continue to next column
    No → Go to Question 51, page 7

The following statements are about things that might have happened to you when you had this
experience. In these statements, ‘they’ means the person or people who did this to you.

Please indicate which of the following happened. (Yes / No for each)

42. They continued even when you told them or showed them that you were unwilling.
43. They used physical force to make you comply. For example, they grabbed your arm or used
    their body weight to hold you down.
44. They physically injured you.
45. They threatened to physically hurt you (or someone else).
46. They threatened you (or someone else) in some other way. For example, by using their
    position of authority, by spreading lies about you, or by getting you in trouble with
    authorities.
47. They did it when you were passed out, asleep, or unconscious.
48. They did it when you were so drunk, high, or drugged that you could not understand what
    was happening or could not show them that you were unwilling.
49. They tricked you into thinking that they were someone else or that they were allowed to
    do it for a professional purpose (like a person pretending to be a doctor).

50. Did you answer "Yes" to any question from 42 to 49?
    Yes → Go to End of Survey, page 10
    No → Go to Question 51, page 7


51. Since 9/1/2013, did you have any unwanted experiences in which someone intentionally
    touched private areas of your body (either directly or through clothing)? Private areas
    include buttocks, inner thigh, breasts, groin, anus, vagina, penis, or testicles.
    Yes
    No → Go to Question 61, page 8

51a. Was this unwanted experience (or any experiences like this if you had more than one)
     abusive or humiliating, or intended to be abusive or humiliating? If you aren’t sure,
     choose the best answer.
    Yes
    No

51b. Do you believe the person did it for a sexual reason? For example, they did it because
     they were sexually aroused or to get sexually aroused. If you aren’t sure, choose the
     best answer.
    Yes
    No

51c. Did you answer "Yes" to either Question 51a or 51b?
    Yes → Continue to next column
    No → Go to Question 61, page 8

The following statements are about things that might have happened to you when you had this
experience. In these statements, ‘they’ means the person or people who did this to you.

Please indicate which of the following happened. (Yes / No for each)

52. They continued even when you told them or showed them that you were unwilling.
53. They used physical force to make you comply. For example, they grabbed your arm or used
    their body weight to hold you down.
54. They physically injured you.
55. They threatened to physically hurt you (or someone else).
56. They threatened you (or someone else) in some other way. For example, by using their
    position of authority, by spreading lies about you, or by getting you in trouble with
    authorities.
57. They did it when you were passed out, asleep, or unconscious.
58. They did it when you were so drunk, high, or drugged that you could not understand what
    was happening or could not show them that you were unwilling.
59. They tricked you into thinking that they were someone else or that they were allowed to
    do it for a professional purpose (like a person pretending to be a doctor).

60. Did you answer "Yes" to any question from 52 to 59?
    Yes → Go to End of Survey, page 10
    No → Go to Question 61, page 8


61. Since 9/1/2013, did you have any unwanted experiences in which someone made you touch
    private areas of their body or someone else’s body (either directly or through
    clothing)? This could involve the person putting their private areas on you. Private
    areas include buttocks, inner thigh, breasts, groin, anus, vagina, penis, or testicles.
    Yes
    No → Go to Question 71, page 9

61a. Was this unwanted experience (or any experiences like this if you had more than one)
     abusive or humiliating, or intended to be abusive or humiliating? If you aren’t sure,
     choose the best answer.
    Yes
    No

61b. Do you believe the person did it for a sexual reason? For example, they did it because
     they were sexually aroused or to get sexually aroused. If you aren’t sure, choose the
     best answer.
    Yes
    No

61c. Did you answer "Yes" to either Question 61a or 61b?
    Yes → Continue to next column
    No → Go to Question 71, page 9

The following statements are about things that might have happened to you when you had this
experience. In these statements, ‘they’ means the person or people who did this to you.

Please indicate which of the following happened. (Yes / No for each)

62. They continued even when you told them or showed them that you were unwilling.
63. They used physical force to make you comply. For example, they grabbed your arm or used
    their body weight to hold you down.
64. They physically injured you.
65. They threatened to physically hurt you (or someone else).
66. They threatened you (or someone else) in some other way. For example, by using their
    position of authority, by spreading lies about you, or by getting you in trouble with
    authorities.
67. They did it when you were passed out, asleep, or unconscious.
68. They did it when you were so drunk, high, or drugged that you could not understand what
    was happening or could not show them that you were unwilling.
69. They tricked you into thinking that they were someone else or that they were allowed to
    do it for a professional purpose (like a person pretending to be a doctor).

70. Did you answer "Yes" to any question from 62 to 69?
    Yes → Go to End of Survey, page 10
    No → Go to Question 71, page 9


71. Since 9/1/2013, did you have any unwanted experiences in which someone attempted to put
    a penis, an object, or any body part into your vagina, anus, or mouth, but no
    penetration actually occurred?
    Yes
    No → Go to End of Survey, page 10

71a. Was this unwanted experience (or any experiences like this if you had more than one)
     abusive or humiliating, or intended to be abusive or humiliating? If you aren’t sure,
     choose the best answer.
    Yes
    No

71b. Do you believe the person did it for a sexual reason? For example, they did it because
     they were sexually aroused or to get sexually aroused. If you aren’t sure, choose the
     best answer.
    Yes
    No

71c. Did you answer "Yes" to either Question 71a or 71b?
    Yes → Continue to next column
    No → Go to End of Survey, page 10

The following statements are about things that might have happened to you when you had this
experience. In these statements, ‘they’ means the person or people who did this to you.

Please indicate which of the following happened. (Yes / No for each)

72. They continued even when you told them or showed them that you were unwilling.
73. They used physical force to make you comply. For example, they grabbed your arm or used
    their body weight to hold you down.
74. They physically injured you.
75. They threatened to physically hurt you (or someone else).
76. They threatened you (or someone else) in some other way. For example, by using their
    position of authority, by spreading lies about you, or by getting you in trouble with
    authorities.
77. They did it when you were passed out, asleep, or unconscious.
78. They did it when you were so drunk, high, or drugged that you could not understand what
    was happening or could not show them that you were unwilling.
79. They tricked you into thinking that they were someone else or that they were allowed to
    do it for a professional purpose (like a person pretending to be a doctor).


End of Survey
This information will help improve the climate and safety of the U.S. military. You may
have found that the questions did not completely cover your experiences. Nonetheless,
the answers you provided are very important to this study.
Sometimes answering questions like the ones on this survey can be upsetting. If you feel
you need support or would like to talk to someone, you can call:
• DoD Safe Helpline number (877-995-5247)
• Military Crisis Line (1-800-273-8255)
• RAINN (1-800-656-HOPE)

A Safe Helpline counselor can also explain how to report a sexual assault and how to
find out the current status of a sexual assault report.

Thank you for completing the survey.
Please return your survey using the enclosed postage-paid
envelope. No postage is needed.
If your return envelope has been misplaced, please mail your survey to:
2014 RAND Military Workplace Study
Westat 6236.02.14
1600 Research Blvd, RW 2634
Rockville, Maryland 20850-9973
Westat Survey Helpdesk toll free number: 1-855-365-5914
(OCONUS please call collect: 240-453-2620)
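
The mail instrument reproduced above encodes its skip pattern directly in the item text: each unwanted-experience block opens with a screening question, gates the follow-up battery on items a, b, and c where those are present (the penetration block, Questions 21-30, has no such gate), and ends with a summary item (for example, Question 70) that checks whether any of the eight tactic items was endorsed. The following is a minimal, illustrative sketch of that routing, not the study's fielded survey code; the parameter and item names are paraphrased from the questions above.

# Sketch of one follow-up block's routing (illustrative only).
def route_followup_block(screen_yes, abusive_or_humiliating, sexual_reason, tactics):
    """Return the next step for one unwanted-experience block.

    screen_yes: bool, answer to the block's screening question (e.g., Q61).
    abusive_or_humiliating, sexual_reason: bools for items a and b; blocks
        without an a/b/c gate (e.g., Q21-30) can pass True for both.
    tactics: dict of bools for the eight tactic items (e.g., Q62-Q69).
    """
    if not screen_yes:
        return "skip to next block"      # e.g., "No -> Go to Question 71"
    if not (abusive_or_humiliating or sexual_reason):
        return "skip to next block"      # item c gates the tactic battery
    if any(tactics.values()):
        return "end of survey"           # e.g., Q70 "Yes -> Go to End of Survey"
    return "skip to next block"

# Example: the experience was reported, judged to be for a sexual reason,
# and the physical-force item was endorsed, so routing goes to End of Survey.
example = route_followup_block(
    screen_yes=True,
    abusive_or_humiliating=False,
    sexual_reason=True,
    tactics={"continued": False, "force": True, "injured": False,
             "threat_harm": False, "threat_other": False,
             "unconscious": False, "intoxicated": False, "deception": False},
)
print(example)  # -> "end of survey"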


APPENDIX C

Supplementary Tables for Chapter Three

This appendix contains supplementary tables for Chapter Three.


Table C.1
Adjusted and Unadjusted Associations of Respondent Characteristics with Response for Active-Duty DoD Women

Participant Characteristics | Unit Change for Continuous Variables | Population and Full-Sample Mean | Respondent Mean | Unadjusted RR^a (P-Value) | Adjusted RR^b (P-Value)

Demographics
Age in years as of 08/01/14^c | 8-year increase | 28.6 | 30.8 | 1.32 (<0.0001) | 1.16 (<0.0001)
Race-ethnicity (joint test: <0.0001 unadjusted, <0.0001 adjusted)
  Non-Hispanic white (ref) | | 49.0 | 53.5 | – | –
  Non-Hispanic black | | 27.1 | 25.2 | 0.85 (<0.0001) | 0.91 (<0.0001)
  Hispanic | | 12.8 | 10.9 | 0.78 (<0.0001) | 0.95 (<0.0001)
  Asian | | 4.5 | 4.8 | 0.97 (0.0554) | 1.01 (0.3998)
  Other | | 6.6 | 5.6 | 0.77 (<0.0001) | 0.95 (<0.0001)
Marital status (joint test: <0.0001 unadjusted, <0.0001 adjusted)
  Married (ref) | | 45.9 | 51.8 | – | –
  Never married | | 44.6 | 37.3 | 0.74 (<0.0001) | 0.93 (<0.0001)
  Divorced/separated/other | | 9.4 | 10.9 | 1.02 (0.0191) | 0.92 (<0.0001)
Number of dependents^c | 1 additional dependent | 0.9 | 1.0 | 1.09 (<0.0001) | 1.01 (<0.0001)
Education level (joint test: <0.0001 unadjusted, <0.0001 adjusted)
  High school or less (ref) | | 58.0 | 44.9 | – | –
  Some college | | 16.3 | 19.2 | 1.52 (<0.0001) | 1.11 (<0.0001)
  Bachelor’s degree | | 15.6 | 20.2 | 1.67 (<0.0001) | 1.25 (<0.0001)
  Graduate degree | | 10.1 | 15.6 | 1.99 (<0.0001) | 1.27 (<0.0001)

Military Career
Service branch (joint test: <0.0001 unadjusted, <0.0001 adjusted)
  Air Force (ref) | | 30.0 | 40.1 | – | –
  Army | | 35.2 | 34.2 | 0.73 (<0.0001) | 0.76 (<0.0001)
  Navy | | 27.8 | 20.3 | 0.55 (<0.0001) | 0.61 (<0.0001)
  Marine Corps | | 7.0 | 5.4 | 0.58 (<0.0001) | 0.68 (<0.0001)
Pay grade (joint test: <0.0001 unadjusted, <0.0001 adjusted)
  E1–E3 (ref) | | 23.6 | 14.9 | – | –
  E4 | | 20.6 | 16.1 | 1.24 (<0.0001) | 1.19 (<0.0001)
  E5–E6 | | 28.3 | 29.7 | 1.67 (<0.0001) | 1.55 (<0.0001)
  E7–E9 | | 8.0 | 11.8 | 2.34 (<0.0001) | 2.14 (<0.0001)
  W1–W5 | | 0.9 | 1.3 | 2.41 (<0.0001) | 2.44 (<0.0001)
  O1–O3 | | 12.5 | 16.4 | 2.08 (<0.0001) | 1.91 (<0.0001)
  O4–O6 | | 6.1 | 9.7 | 2.53 (<0.0001) | 2.29 (<0.0001)
AFQT percentile (enlisted only)^c | 18-percent increase | 60.1 | 62.0 | 1.12 (<0.0001) | 1.12 (<0.0001)
Years of active military service^c | 7 additional years | 7.1 | 8.7 | 1.29 (<0.0001) | 1.08 (<0.0001)
Deployment status (joint test: <0.0001 unadjusted, <0.0001 adjusted)
  Never deployed (ref) | | 49.5 | 44.3 | – | –
Months deployed since 9/11/2001^c | 11 additional months | 11.6 | 12.5 | 1.11 (<0.0001) | 1.05 (<0.0001)
Months deployed since 7/1/2013^c | 3 additional months | 2.8 | 3.0 | 1.08 (<0.0001) | 1.10 (<0.0001)
Separated/retired | | 4.5 | 0.6 | 0.13 (<0.0001) | 0.13 (<0.0001)
DoD occupational area (joint test: <0.0001 unadjusted, <0.0001 adjusted)
  Infantry, gun crews, and seamanship specialists | | 4.3 | 3.4 | 1.14 (<0.0001) | 1.00 (0.9038)
  Electronic equipment repairers | | 5.7 | 5.0 | 1.28 (<0.0001) | 1.20 (<0.0001)
  Communications and intelligence specialists | | 8.5 | 8.0 | 1.39 (<0.0001) | 1.15 (<0.0001)
  Health care specialists | | 12.4 | 14.0 | 1.65 (<0.0001) | 1.34 (<0.0001)
  Other technical and allied specialists | | 2.3 | 2.5 | 1.56 (<0.0001) | 1.25 (<0.0001)
  Functional support and administration | | 20.6 | 21.9 | 1.56 (<0.0001) | 1.26 (<0.0001)
  Electrical/mechanical equipment repairers (ref) | | 10.6 | 7.2 | – | –
  Craftsworkers | | 2.0 | 1.4 | 0.98 (0.4576) | 0.96 (0.2282)
  Service and supply handlers | | 11.1 | 7.8 | 1.03 (0.0906) | 0.92 (<0.0001)
  Nonoccupational | | 2.9 | 1.5 | 0.77 (<0.0001) | 0.83 (<0.0001)
  Tactical operations officers | | 2.4 | 3.2 | 1.94 (<0.0001) | 2.32 (<0.0001)
  Intelligence officers | | 1.4 | 2.0 | 2.03 (<0.0001) | 2.36 (<0.0001)
  Engineering and maintenance officers | | 1.7 | 2.6 | 2.15 (<0.0001) | 2.55 (<0.0001)
  Scientists and professionals | | 1.2 | 2.0 | 2.42 (<0.0001) | 2.77 (<0.0001)
  Health care officers | | 7.7 | 11.0 | 2.09 (<0.0001) | 2.48 (<0.0001)
  Administrators | | 2.2 | 3.3 | 2.24 (<0.0001) | 2.64 (<0.0001)
  Supply, procurement, and allied officers | | 1.9 | 2.7 | 2.05 (<0.0001) | 2.45 (<0.0001)
  Other officers (20, 21, 29) | | 0.8 | 0.7 | 1.30 (<0.0001) | 1.73 (<0.0001)
Unit location
  Continental United States (ref) | | 82.7 | 81.3 | – | –
  Outside the continental United States | | 17.3 | 18.7 | 1.10 (<0.0001) | 1.07 (<0.0001)

Military Environment
Percentage male in occupation^c | 15 additional percentage points | 74.8 | 72.8 | 0.88 (<0.0001) | 0.96 (<0.0001)
Size^d of occupation group^c | 32,000 additional people | 31,083.4 | 28,031.0 | 0.90 (<0.0001) | 0.98 (<0.0001)
Percentage male in unit^c | 11.2 additional percentage points | 77.3 | 76.5 | 0.93 (<0.0001) | 1.00 (0.1151)
Size^d of unit^c | 500 additional people | 404.9 | 281.8 | 0.83 (<0.0001) | 0.91 (<0.0001)
Percentage male in installation (zip code)^c | 6.4 additional percentage points | 82.2 | 81.9 | 0.95 (<0.0001) | 0.99 (<0.0001)
Size^d of installation^c | 10,000 additional people | 9,864.3 | 9,051.6 | 0.92 (<0.0001) | 0.95 (<0.0001)

Fieldwork Indicators
Change in assigned unit zip since 08/01/2013 | | 28.1 | 24.6 | 0.83 (<0.0001) | 1.02 (0.0101)
Change in assigned unit zip since 04/01/2014 | | 19.9 | 14.6 | 0.69 (<0.0001) | 0.72 (<0.0001)
Change of mailing address since 04/01/2014 | | 32.6 | 27.1 | 0.77 (<0.0001) | 0.84 (<0.0001)
No valid mailing address | | 2.0 | 1.4 | 0.70 (<0.0001) | 0.84 (<0.0001)
No valid email address | | 4.1 | 0.8 | 0.19 (<0.0001) | 0.23 (<0.0001)
Mailing 1 is postal nondeliverable | | 15.0 | 8.9 | 0.55 (<0.0001) | 0.71 (<0.0001)
Marine Corps sent email | | 0.5 | 0.2 | 0.39 (<0.0001) | 0.57 (<0.0001)
Percentage of emails bounced^c | 10.4 percentage points | 8.6 | 1.0 | 0.78 (<0.0001) | 0.81 (<0.0001)

NOTE: P-values from individual tests of significance are shown in parentheses after each risk ratio; p-values for the joint tests come from chi-square score tests and are shown on the category rows. Variables marked "ref" are the reference categories.
^a The binary response variable is coded so that 1 = responded and 0 = did not respond; a risk ratio greater than 1 indicates a higher likelihood of responding for that category than for the reference category.
^b The adjusted risk ratio comes from a model that includes race/ethnicity (indicated levels), service branch, and pay grade.
^c Variable entered as continuous; its risk ratio corresponds to a one-standard-deviation change (standard deviations are listed in the "Unit Change for Continuous Variables" column).
^d Size measured by number of people.
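
Note b above identifies the adjustment set but not the fitting procedure. As a hedged illustration, one standard way to obtain adjusted risk ratios for a binary response indicator is a log-link (Poisson) regression with robust standard errors; the sketch below uses hypothetical column names and synthetic data and is not the authors' code.

# Hedged sketch: adjusted risk ratios via log-link Poisson regression with
# robust (HC1) standard errors. Per note b, the adjustment set is
# race/ethnicity, service branch, and pay grade; per note c, continuous
# covariates would be standardized so exp(coef) is the RR per one SD.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "responded": rng.integers(0, 2, n),                        # 1 = responded
    "race_eth": rng.choice(["white", "black", "hispanic"], n),
    "service": rng.choice(["AF", "Army", "Navy", "USMC"], n),
    "pay_grade": rng.choice(["E1-E3", "E4", "E5-E6", "O1-O3"], n),
})

fit = smf.glm(
    "responded ~ C(race_eth) + C(service) + C(pay_grade)",
    data=df,
    family=sm.families.Poisson(),   # log link, so exp(coef) is a risk ratio
).fit(cov_type="HC1")               # robust SEs for the binary outcome

print(np.exp(fit.params))           # adjusted RRs relative to reference levels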


Table C.2
Adjusted and Unadjusted Associations of Respondent Characteristics with Response for Active-Duty DoD Men

Participant Characteristics | Unit Change for Continuous Variables | Population and Full-Sample Mean | Respondent Mean | Unadjusted RR^a (P-Value) | Adjusted RR^b (P-Value)

Demographics
Age in years as of 08/01/2014^c | 8-year increase | 29.1 | 32.9 | 1.56 (<0.0001) | 1.26 (<0.0001)
Race-ethnicity (joint test: <0.0001 unadjusted, <0.0001 adjusted)
  Non-Hispanic white (ref) | | 65.5 | 68.9 | – | –
  Non-Hispanic black | | 14.6 | 13.4 | 0.87 (<0.0001) | 0.97 (0.0031)
  Hispanic | | 11.7 | 10.1 | 0.82 (<0.0001) | 1.03 (0.0119)
  Asian | | 3.7 | 4.2 | 1.09 (<0.0001) | 1.18 (<0.0001)
  Other | | 4.5 | 3.5 | 0.73 (<0.0001) | 0.96 (0.0131)
Marital status (joint test: <0.0001 unadjusted, <0.0001 adjusted)
  Married (ref) | | 57.9 | 72.6 | – | –
  Never married | | 38.6 | 23.3 | 0.48 (<0.0001) | 0.79 (<0.0001)
  Divorced/separated/other | | 3.6 | 4.1 | 0.92 (<0.0001) | 0.83 (<0.0001)
Number of dependents^c | 1 additional dependent | 1.5 | 2.0 | 1.19 (<0.0001) | 1.05 (<0.0001)
Education level (joint test: <0.0001 unadjusted, <0.0001 adjusted)
  High school or less (ref) | | 67.7 | 48.7 | – | –
  Some college | | 12.1 | 17.3 | 1.98 (<0.0001) | 1.20 (<0.0001)
  Bachelor’s degree | | 12.3 | 18.3 | 2.08 (<0.0001) | 1.29 (<0.0001)
  Graduate degree | | 7.9 | 15.7 | 2.77 (<0.0001) | 1.34 (<0.0001)

Military Career
Service branch (joint test: <0.0001 unadjusted, <0.0001 adjusted)
  Air Force (ref) | | 22.8 | 34.1 | – | –
  Army | | 38.7 | 37.4 | 0.65 (<0.0001) | 0.67 (<0.0001)
  Navy | | 23.1 | 18.1 | 0.53 (<0.0001) | 0.55 (<0.0001)
  Marine Corps | | 15.4 | 10.4 | 0.45 (<0.0001) | 0.57 (<0.0001)
Pay grade (joint test: <0.0001 unadjusted, <0.0001 adjusted)
  E1–E3 (ref) | | 23.0 | 9.5 | – | –
  E4 | | 19.4 | 11.6 | 1.45 (<0.0001) | 1.39 (<0.0001)
  E5–E6 | | 29.9 | 31.6 | 2.57 (<0.0001) | 2.43 (<0.0001)
  E7–E9 | | 10.2 | 17.6 | 4.19 (<0.0001) | 3.96 (<0.0001)
  W1–W5 | | 1.6 | 2.7 | 4.26 (<0.0001) | 4.43 (<0.0001)
  O1–O3 | | 9.2 | 13.6 | 3.61 (<0.0001) | 3.37 (<0.0001)
  O4–O6 | | 6.7 | 13.4 | 4.91 (<0.0001) | 4.47 (<0.0001)
AFQT percentile (enlisted only)^c | 18-percent increase | 63.9 | 65.5 | 1.09 (<0.0001) | 1.09 (<0.0001)
Years of active military service^c | 7 additional years | 7.8 | 11.0 | 1.50 (<0.0001) | 1.16 (<0.0001)
Deployment status (joint test: <0.0001 unadjusted, <0.0001 adjusted)
  Never deployed (ref) | | 40.0 | 28.2 | – | –
  Deployed before 08/01/2013 | | 48.3 | 61.5 | 1.81 (<0.0001) | 1.04 (<0.0001)
  Deployed after 08/01/2013 | | 11.6 | 10.3 | 1.26 (<0.0001) | 0.87 (<0.0001)
Months deployed since 9/11/2001^c | 11 additional months | 14.6 | 16.3 | 1.14 (<0.0001) | 1.06 (<0.0001)
Months deployed since 7/1/2013^c | 3 additional months | 3.0 | 3.1 | 1.04 (0.0013) | 1.10 (<0.0001)
Separated/retired | | 4.4 | 0.7 | 0.16 (<0.0001) | 0.17 (<0.0001)
DoD occupational area (joint test: <0.0001 unadjusted, <0.0001 adjusted)
  Infantry, gun crews, and seamanship specialists | | 15.7 | 8.6 | 0.63 (<0.0001) | 0.71 (<0.0001)
  Electronic equipment repairers | | 8.1 | 7.5 | 1.06 (<0.0001) | 1.10 (<0.0001)
  Communications and intelligence specialists | | 8.7 | 7.6 | 1.00 (0.8353) | 0.99 (0.4665)
  Health care specialists | | 5.2 | 5.5 | 1.22 (<0.0001) | 1.33 (<0.0001)
  Other technical and allied specialists | | 2.6 | 3.0 | 1.32 (<0.0001) | 1.16 (<0.0001)
  Functional support and administration | | 8.9 | 11.0 | 1.41 (<0.0001) | 1.27 (<0.0001)
  Electrical/mechanical equipment repairers (ref) | | 17.9 | 15.6 | – | –
  Craftsworkers | | 3.1 | 2.7 | 1.00 (0.9541) | 1.01 (0.4596)
  Service and supply handlers | | 9.3 | 7.4 | 0.90 (<0.0001) | 0.92 (<0.0001)
  Nonoccupational | | 3.1 | 1.3 | 0.47 (<0.0001) | 0.89 (0.0001)
  Tactical operations officers | | 7.1 | 11.6 | 1.86 (<0.0001) | 4.08 (<0.0001)
  Intelligence officers | | 1.1 | 1.9 | 1.91 (<0.0001) | 4.16 (<0.0001)
  Engineering and maintenance officers | | 2.5 | 4.7 | 2.14 (<0.0001) | 4.71 (<0.0001)
  Scientists and professionals | | 1.1 | 2.2 | 2.32 (<0.0001) | 4.75 (<0.0001)
  Health care officers | | 2.0 | 3.6 | 2.03 (<0.0001) | 4.30 (<0.0001)
  Administrators | | 1.0 | 2.0 | 2.27 (<0.0001) | 4.93 (<0.0001)
  Supply, procurement, and allied officers | | 1.4 | 2.5 | 2.03 (<0.0001) | 4.46 (<0.0001)
  Other officers (20, 21, 29) | | 1.1 | 1.3 | 1.31 (<0.0001) | 3.19 (<0.0001)
Unit location
  Continental United States (ref) | | 82.4 | 80.5 | – | –
  Outside the continental United States | | 17.6 | 19.5 | 1.13 (<0.0001) | 1.10 (<0.0001)

Military Environment
Percentage of occupation group that is male^c | 15 additional percentage points | 86.1 | 84.8 | 0.85 (<0.0001) | 0.88 (<0.0001)
Size^d of occupation group^c | 32,000 additional people | 39,198.9 | 31,088.0 | 0.82 (<0.0001) | 0.93 (<0.0001)
Percentage male in unit^c | 11.2 additional percentage points | 86.4 | 84.6 | 0.82 (<0.0001) | 0.90 (<0.0001)
Size^d of unit^c | 500 additional people | 393.7 | 281.8 | 0.81 (<0.0001) | 0.94 (<0.0001)
Percentage male in installation (zip code)^c | 6.4 additional percentage points | 85.2 | 84.1 | 0.84 (<0.0001) | 0.94 (<0.0001)
Size^d of installation^c | 10,000 additional people | 11,847.4 | 9,852.0 | 0.86 (<0.0001) | 0.92 (<0.0001)

Fieldwork Indicators
Change in assigned unit zip since 08/01/2013 | | 26.3 | 21.8 | 0.78 (<0.0001) | 1.05 (<0.0001)
Change in assigned unit zip since 04/01/2014 | | 19.6 | 15.6 | 0.76 (<0.0001) | 0.79 (<0.0001)
Change of mailing address since 04/01/2014 | | 29.9 | 24.6 | 0.76 (<0.0001) | 0.84 (<0.0001)
No valid mailing address | | 2.2 | 1.3 | 0.58 (<0.0001) | 0.89 (<0.0001)
No valid email address | | 6.2 | 1.0 | 0.14 (<0.0001) | 0.24 (<0.0001)
Mailing 1 is postal nondeliverable | | 16.5 | 7.6 | 0.41 (<0.0001) | 0.62 (<0.0001)
Marine Corps sent email | | 1.8 | 0.4 | 0.20 (<0.001) | 0.44 (<0.0001)
Percentage of emails that bounced^c | 10.4 percentage points | 10.8 | 1.2 | 0.78 (<0.0001) | 0.83 (<0.0001)

NOTE: P-values from individual tests of significance are shown in parentheses after each risk ratio; p-values for the joint tests come from chi-square score tests and are shown on the category rows. Variables marked "ref" are the reference categories.
^a The binary response variable is coded so that 1 = responded and 0 = did not respond; a risk ratio greater than 1 indicates a higher likelihood of responding for that category than for the reference category.
^b The adjusted risk ratio comes from a model that includes race/ethnicity (indicated levels), service branch, and pay grade.
^c Variable entered as continuous; its risk ratio corresponds to a one-standard-deviation change (standard deviations are listed in the "Unit Change for Continuous Variables" column).
^d Size measured by number of people.


Table C.3
Design Effect of RMWS Weights for Key Reporting Categories

Category | Sample Size | RMWS Weight Mean | RMWS Weight Standard Deviation | Design Effect

Gender
  Male | 62,161 | 1.63 | 2.10 | 2.66
  Female | 53,598 | 0.33 | 0.22 | 1.43
Service
  Army | 41,678 | 1.09 | 1.76 | 3.57
  Navy | 22,159 | 1.28 | 2.14 | 3.79
  Air Force | 42,572 | 0.67 | 0.59 | 1.78
  Marine Corps | 9,350 | 1.81 | 2.71 | 3.25
Pay grade
  E1–E5 | 29,687 | 1.72 | 3.00 | 4.04
  E6–E9 | 55,271 | 0.89 | 0.77 | 1.75
  O1–O3 | 17,259 | 0.67 | 0.53 | 1.63
  O4–O6 | 13,542 | 0.58 | 0.37 | 1.40

Males
Service
  Army | 23,286 | 1.69 | 2.17 | 2.64
  Navy | 11,290 | 2.07 | 2.75 | 2.76
  Air Force | 21,103 | 1.10 | 0.59 | 1.28
  Marine Corps | 6,482 | 2.42 | 3.06 | 2.61
Pay grade
  E1–E5 | 13,105 | 3.29 | 3.98 | 2.46
  E6–E9 | 32,214 | 1.32 | 0.75 | 1.33
  O1–O3 | 8,515 | 1.09 | 0.46 | 1.18
  O4–O6 | 8,327 | 0.81 | 0.28 | 1.12

Females
Service
  Army | 18,392 | 0.34 | 0.18 | 1.28
  Navy | 10,869 | 0.46 | 0.36 | 1.60
  Air Force | 21,469 | 0.25 | 0.05 | 1.04
  Marine Corps | 2,868 | 0.44 | 0.25 | 1.32
Pay grade
  E1–E5 | 16,582 | 0.48 | 0.33 | 1.48
  E6–E9 | 23,057 | 0.29 | 0.09 | 1.09
  O1–O3 | 8,744 | 0.26 | 0.06 | 1.05
  O4–O6 | 5,215 | 0.21 | 0.03 | 1.02
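
The design effects in Tables C.3 and C.4 are consistent with the Kish approximation for unequal weighting, deff = 1 + CV^2, where CV is the coefficient of variation of the weights: for the male row above, 1 + (2.10 / 1.63)^2 is approximately 2.66. A minimal sketch of that calculation (illustrative, not the study's code):

# Kish design effect of unequal weights: deff = 1 + CV^2
#                                             = n * sum(w^2) / (sum(w))^2.
import numpy as np

def kish_design_effect(weights):
    """Design effect implied by variation in the weights alone."""
    w = np.asarray(weights, dtype=float)
    return len(w) * np.sum(w ** 2) / np.sum(w) ** 2

def deff_from_summary(mean, sd):
    """Same quantity computed from a table's weight mean and (population) SD."""
    return 1.0 + (sd / mean) ** 2

# Reproduces the "Male" row of Table C.3 from its Mean and SD columns:
print(round(deff_from_summary(1.63, 2.10), 2))  # -> 2.66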


Table C.4
Design Effect of WGRA Weights for Key Reporting Categories

Category | Sample Size | WGRA Weight Mean | WGRA Weight Standard Deviation | Design Effect

Gender
  Male | 62,161 | 1.63 | 1.53 | 1.87
  Female | 53,598 | 0.33 | 0.17 | 1.27
Service
  Army | 41,678 | 1.09 | 1.47 | 2.79
  Navy | 22,159 | 1.28 | 1.49 | 2.35
  Air Force | 42,572 | 0.67 | 0.46 | 1.47
  Marine Corps | 9,350 | 1.81 | 1.94 | 2.14
Pay grade
  E1–E5 | 29,687 | 1.72 | 2.24 | 2.69
  E6–E9 | 55,271 | 0.89 | 0.63 | 1.50
  O1–O3 | 17,259 | 0.67 | 0.45 | 1.46
  O4–O6 | 13,542 | 0.58 | 0.30 | 1.27

Males
Service
  Army | 23,286 | 1.69 | 1.74 | 2.06
  Navy | 11,290 | 2.07 | 1.73 | 1.70
  Air Force | 21,103 | 1.10 | 0.25 | 1.05
  Marine Corps | 6,482 | 2.42 | 2.05 | 1.72
Pay grade
  E1–E5 | 13,105 | 3.29 | 2.62 | 1.63
  E6–E9 | 32,214 | 1.32 | 0.48 | 1.13
  O1–O3 | 8,515 | 1.09 | 0.24 | 1.05
  O4–O6 | 8,327 | 0.81 | 0.10 | 1.01

Females
Service
  Army | 18,392 | 0.34 | 0.16 | 1.22
  Navy | 10,869 | 0.46 | 0.25 | 1.30
  Air Force | 21,469 | 0.25 | 0.03 | 1.02
  Marine Corps | 2,868 | 0.44 | 0.18 | 1.16
Pay grade
  E1–E5 | 16,582 | 0.48 | 0.24 | 1.26
  E6–E9 | 23,057 | 0.29 | 0.07 | 1.06
  O1–O3 | 8,744 | 0.26 | 0.03 | 1.01
  O4–O6 | 5,215 | 0.21 | 0.02 | 1.01


Table C.5
Balance of Weighted Respondents Relative to the DoD Active-Duty Population Mean of Proxy Variables

Proxy Variable For | Variable Name | Full-Sample Mean with Design Weights (%) | Respondent Mean with RMWS Weights (%) | Respondent Mean with WGRA Weights (%)
Discrimination | p_any_disc | 3.15 | 3.14 | 3.11
Quid pro quo | p_any_quid | 0.39 | 0.39 | 0.37
Hostile work environment | p_any_host | 8.21 | 8.30 | 7.83
Sexual assault, penetrative | p_any_sa_pen | 0.46 | 0.46 | 0.43
Sexual assault, non-penetrative | p_any_sa_con | 0.77 | 0.77 | 0.69
Sexual assault, attempted | p_any_sa_att | 0.02 | 0.02 | 0.02
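
Table C.5's balance check compares design-weighted full-sample means against nonresponse-adjusted respondent means for each proxy variable. A minimal sketch of that comparison follows; the p_any_* names come from the table itself, while the file name and the weight columns (design_wt, rmws_wt, wgra_wt) are hypothetical.

# Sketch of the weighted balance comparison (illustrative only; assumes the
# proxy variables are stored as 0/1 indicators or proportions).
import numpy as np
import pandas as pd

def weighted_mean(x, w):
    x, w = np.asarray(x, dtype=float), np.asarray(w, dtype=float)
    return np.sum(w * x) / np.sum(w)

frame = pd.read_csv("sample_frame.csv")      # hypothetical person-level frame
resp = frame[frame["responded"] == 1]

for var in ["p_any_disc", "p_any_quid", "p_any_host",
            "p_any_sa_pen", "p_any_sa_con", "p_any_sa_att"]:
    full = weighted_mean(frame[var], frame["design_wt"])   # design-weighted
    rmws = weighted_mean(resp[var], resp["rmws_wt"])       # nonresponse-adjusted
    wgra = weighted_mean(resp[var], resp["wgra_wt"])
    print(f"{var}: full={full:.2%}  RMWS={rmws:.2%}  WGRA={wgra:.2%}")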

APPENDIX D

Supplementary Tables for Chapter Seven

Table D.1
Changes in Top-Line MEO Violation Estimates as a Result of Programming Error, for Men by Service

MEO Violation | With Coding Error | Error Corrected | Change
Sexually Hostile Work Environment
  Army | 7.65% (6.81–8.56) | 7.65% (6.81–8.56) | +0.0059%
  Navy | 8.34% (7.02–9.81) | 8.34% (7.02–9.81) | 0.0000%
  Air Force | 3.26% (2.80–3.77) | 3.26% (2.80–3.77) | 0.0000%
  Marine Corps | 6.11% (4.76–7.70) | 6.11% (4.76–7.70) | 0.0000%
  Coast Guard | 3.74% (2.94–4.68) | 3.74% (2.93–4.68) | 0.0000%
Sexual Harassment
  Army | 7.67% (6.83–8.58) | 7.67% (6.83–8.59) | +0.0059%
  Navy | 8.37% (7.05–9.84) | 8.37% (7.05–9.84) | 0.0000%
  Air Force | 3.29% (2.82–3.80) | 3.29% (2.82–3.80) | 0.0000%
  Marine Corps | 6.11% (4.76–7.70) | 6.11% (4.76–7.70) | 0.0000%
  Coast Guard | 3.75% (2.94–4.69) | 3.7472% (2.94–4.69) | 0.0000%
Any MEO Violation
  Army | 8.53% (7.67–9.45) | 8.5290% (7.67–9.45) | 0.0000%
  Navy | 9.61% (8.25–11.11) | 9.6063% (8.25–11.11) | 0.0000%
  Air Force | 3.84% (3.36–4.37) | 3.8398% (3.36–4.37) | 0.0000%
  Marine Corps | 6.65% (5.28–8.25) | 6.6538% (5.28–8.25) | 0.0000%
  Coast Guard | 4.51% (3.60–5.57) | 4.5122% (3.60–5.57) | 0.0000%


Table D.2
Changes in Top-Line MEO Violation Estimates as a Result of Programming Error, for Women by Service

MEO Violation | With Coding Error | Error Corrected | Change
Sexually Hostile Work Environment
  Army | 22.87% (21.92–23.84) | 22.92% (21.97–23.89) | +0.0472%
  Navy | 27.72% (26.21–29.26) | 27.73% (26.23–29.27) | +0.0163%
  Air Force | 12.32% (11.72–12.95) | 12.35% (11.74–12.98) | +0.0269%
  Marine Corps | 27.19% (24.68–29.80) | 27.19% (24.68–29.80) | 0.0000%
  Coast Guard | 19.15% (17.05–21.39) | 19.27% (17.16–21.52) | +0.1227%
Sexual Harassment
  Army | 23.07% (22.12–24.05) | 23.12% (22.16–24.09) | +0.0473%
  Navy | 27.82% (26.31–29.36) | 27.84% (26.33–29.38) | +0.0163%
  Air Force | 12.43% (11.82–13.07) | 12.46% (11.84–13.09) | +0.0271%
  Marine Corps | 27.30% (24.79–29.92) | 27.30% (24.79–29.92) | 0.0000%
  Coast Guard | 19.19% (17.09–21.43) | 19.32% (17.21–21.56) | +0.1229%
Any MEO Violation
  Army | 28.62% (27.61–29.64) | 28.65% (27.64–29.68) | +0.0325%
  Navy | 32.16% (30.62–33.72) | 32.17% (30.63–33.74) | +0.0163%
  Air Force | 15.66% (14.99–16.35) | 15.69% (15.02–16.38) | +0.0269%
  Marine Corps | 31.43% (28.85–34.11) | 31.43% (28.84–34.11) | 0.0000%
  Coast Guard | 23.32% (21.10–25.66) | 23.44% (21.22–25.79) | +0.1228%


Table D.3
Changes in Top-Line MEO Violation Estimates as a Result of Programming Error, for Men by Pay Grade

MEO Violation | With Coding Error | Error Corrected | Change
Sexually Hostile Work Environment
  E1–E4 | 9.66% (8.54–10.87) | 9.66% (8.54–10.87) | 0.0000%
  E5–E9 | 4.65% (4.25–5.08) | 4.65% (4.25–5.08) | +0.0055%
  O1–O3 | 4.48% (3.82–5.22) | 4.48% (3.82–5.22) | 0.0000%
  O4–O6 | 2.06% (1.65–2.52) | 2.06% (1.65–2.52) | 0.0000%
Sexual Harassment
  E1–E4 | 9.68% (8.56–10.90) | 9.68% (8.56–10.90) | 0.0000%
  E5–E9 | 4.67% (4.27–5.10) | 4.68% (4.28–5.11) | +0.0055%
  O1–O3 | 4.52% (3.85–5.27) | 4.52% (3.85–5.27) | 0.0000%
  O4–O6 | 2.06% (1.66–2.53) | 2.06% (1.66–2.53) | 0.0000%
Any MEO Violation
  E1–E4 | 10.37% (9.24–11.59) | 10.37% (9.24–11.59) | 0.0000%
  E5–E9 | 5.62% (5.18–6.08) | 5.62% (5.18–6.08) | 0.0000%
  O1–O3 | 5.26% (4.56–6.04) | 5.26% (4.56–6.04) | 0.0000%
  O4–O6 | 3.15% (2.65–3.71) | 3.15% (2.65–3.71) | 0.0000%

NOTE: Too few warrant officers were included in the sample to break them out as a separate pay grade. For the purposes of this table, warrant officers have been included with the E5–E9 category.


Table D.4
Changes in Top-Line MEO Violation Estimates as a Result of Programming Error, for Women by Pay Grade

MEO Violation | With Coding Error | Error Corrected | Change
Sexually Hostile Work Environment
  E1–E4 | 26.35% (25.20–27.53) | 26.39% (25.24–27.57) | +0.0361%
  E5–E9 | 18.08% (17.35–18.82) | 18.10% (17.38–18.85) | +0.0295%
  O1–O3 | 19.70% (18.52–20.93) | 19.72% (18.54–20.95) | +0.0185%
  O4–O6 | 9.43% (8.38–10.56) | 9.43% (8.38–10.56) | 0.0000%
Sexual Harassment
  E1–E4 | 26.53% (25.38–27.71) | 26.57% (25.41–27.75) | +0.0363%
  E5–E9 | 18.21% (17.48–18.96) | 18.24% (17.51–18.99) | +0.0296%
  O1–O3 | 19.85% (18.66–21.08) | 19.87% (18.67–21.10) | +0.0185%
  O4–O6 | 9.48% (8.43–10.62) | 9.48% (8.43–10.62) | 0.0000%
Any MEO Violation
  E1–E4 | 29.62% (28.43–30.82) | 29.65% (28.47–30.85) | +0.0361%
  E5–E9 | 23.20% (22.42–24.01) | 23.22% (22.43–24.02) | +0.0155%
  O1–O3 | 25.29% (24.01–26.60) | 25.31% (24.02–26.62) | +0.0185%
  O4–O6 | 17.78% (16.41–19.23) | 17.78% (16.41–19.23) | 0.0000%

NOTE: Too few warrant officers were included in the sample to break them out as a separate pay grade. For the purposes of this table, warrant officers have been included with the E5–E9 category.


Table D.5
Changes in Top-Line MEO Violation Estimates as a Result of Programming Error, for Reserve-Component Service Members by Gender

MEO Violation | With Coding Error | Error Corrected | Change
Sexually Hostile Work Environment
  Men | 5.97% (4.62–7.58) | 5.97% (4.62–7.58) | 0.0000%
  Women | 13.53% (11.95–15.23) | 13.55% (11.98–15.26) | +0.0267%
Sexual Harassment
  Men | 5.98% (4.62–7.58) | 5.98% (4.62–7.58) | 0.0000%
  Women | 13.62% (12.04–15.32) | 13.64% (12.06–15.35) | +0.0268%
Any MEO Violation
  Men | 6.68% (5.30–8.29) | 6.68% (5.30–8.29) | 0.0000%
  Women | 18.12% (16.35–19.99) | 18.14% (16.38–20.02) | +0.0268%

Abbreviations

AFQT     Armed Forces Qualifying Test
CI       confidence interval
DMDC     Defense Manpower Data Center
DoD      Department of Defense
GBM      Generalized Boosted Model
MEO      military equal opportunity
MSE      mean squared error
NA       not applicable
NR       not reportable
OSD      Office of the Secretary of Defense
PTSD     posttraumatic stress disorder
RMWS     RAND Military Workplace Study
RR       risk ratio
SAPR     Sexual Assault Prevention and Response
SAPRO    Sexual Assault Prevention and Response Office
UCMJ     Uniform Code of Military Justice
WGRA     Workplace and Gender Relations Survey of Active Duty Members
WGRR     Workplace and Gender Relations Survey of Reserve Component Members


References

Acree, M., M. Ekstrand, T. J. Coates, and R. Stall, “Mode Effects in Surveys of Gay Men: A Within-Individual Comparison of Responses by Mail and by Telephone,” Journal of Sex Research, Vol. 36,
No. 1, 1999, pp. 67–75.
American Association for Public Opinion Research, Standard Definitions: Final Dispositions of Case
Codes and Outcome Rates for Surveys, 7th ed., 2011. As of November 25, 2014:
http://aapor.org/Content/NavigationMenu/AboutAAPOR/StandardsampEthics/
StandardDefinitions/StandardDefinitions2011.pdf
American Psychiatric Association, Diagnostic and Statistical Manual of Mental Disorders: Fifth
Edition, Washington, D.C., 2013.
Andersen, R., J. Kasper, and M. Frankel, Total Survey Error: Applications to Improve Health Surveys,
San Francisco, Calif.: Jossey-Bass, 1979.
Aquilino, W. S., “Interview Mode Effects in Surveys of Drug and Alcohol Abuse: A Field
Experiment,” Public Opinion Quarterly, Vol. 58, No. 2, 1994, pp. 210–240.
Bradburn, N., “Response Effects,” in P. Rossi, J. Wright, and A. Anderson, eds., Handbook of Survey
Research, San Diego, Calif.: Academic Press, 1983, pp. 289–328.
Bureau of Justice Statistics, “Rates of Rape/Sexual Assaults by Sex, 2003–2013,” generated using the
NCVS Victimization Analysis Tool, undated. As of March 7, 2015:
http://www.bjs.gov/index.cfm?ty=nvat
Cantor, D., “Substantive Implications of Longitudinal Design Features: The National Crime Survey
as a Case Study,” in D. Kasprzyk, G. Duncan, G. Kalton, and M. P. Singh, eds., Panel Surveys, New
York: John Wiley, 1989, pp. 25–51.
Cook, Sarah L., Christine A. Gidycz, Mary P. Koss, and Megan Murphy, “Emerging Issues in the
Measurement of Rape Victimization,” Violence Against Women, Vol. 17, No. 2, 2011, pp. 201–218.
Defense Manpower Data Center, 2012 Workplace and Gender Relations Survey of Active Duty
Members: Nonresponse Bias Analysis Report, Washington, D.C.: Department of Defense, Report
No. 2013-059, 2014.
———, “Summary Paper: Calculation of ‘Active Refusal’ Disposition Code for the 2012 Workplace
and Gender Relations Survey of Active Duty Members,” unpublished report, 2015.
DMDC—See Defense Manpower Data Center.
Elliott, Marc N., and Amelia Haviland, “Use of a Web-Based Convenience Sample to Supplement
and Improve the Accuracy of a Probability Sample,” Survey Methodology, Vol. 33, 2007, pp. 211–215.
Falk, Eric T., “Improving Response Rates in Military Surveys,” presentation at Joint Statistical
Meetings (JSM), San Diego, Calif., July 28–August 2, 2012.


Finkelhor, David, Jennifer Vanderminden, Heather Turner, Sherry Hamby, and Anne Shattuck,
“Upset Among Youth in Response to Questions About Exposure to Violence, Sexual Assault and
Family Maltreatment,” Child Abuse & Neglect, Vol. 38, No. 2, 2014, pp. 217–223. As of February 16,
2015:
http://www.unh.edu/ccrc/pdf/CV296Revised-Published.pdf
Fitzgerald, L. F., S. Swan, and K. Fischer, “Why Didn’t She Just Report Him? The Psychological and
Legal Implications of Women’s Responses to Sexual Harassment,” Journal of Social Issues, Vol. 51,
No. 1, 1995, pp. 117–138.
Fitzgerald, L. F., S. Swan, and V. J. Magley, “But Was It Really Sexual Harassment? Legal,
Behavioral, and Psychological Definitions of the Workplace Victimization of Women,” in William
O’Donohue, ed., Sexual Harassment: Theory, Research, and Treatment, Needham Heights, Mass.:
Allyn & Bacon, 1997.
Friedman, J. H., “Greedy Function Approximation: A Gradient Boosting Machine,” Annals of
Statistics, Vol. 29, No. 5, 2001, pp. 1189–1232.
Galea, Sandro, Arijit Nandi, Jennifer Stuber, Joel Gold, Ron Acierno, Connie L. Best, Mike
Bucuvalas, Sasha Rudenstine, Joseph A. Boscarino, and Heidi Resnick, “Participant Reactions to
Survey Research in the General Population After Terrorist Attacks,” Journal of Traumatic Stress,
Vol. 18, No. 5, 2005, pp. 461–465.
Goodwin, D. W., E. Othmer, J. A. Halikas, et al., “Loss of Short-Term Memory as a Predictor of the
Alcoholic ‘Blackout,’” Nature, Vol. 227, 1970, pp. 201–202.
Groves, Robert M., “Nonresponse Rates and Nonresponse Bias in Household Surveys,” Public
Opinion Quarterly, Vol. 70, No. 5, 2006, pp. 646–675.
Heeringa, Steven G., Brady T. West, and Patricia A. Berglund, Applied Survey Data Analysis, Boca
Raton, Fla.: CRC Press, 2010.
Hussain, Nasir, Sheila Sprague, Kim Madden, Farrah Naz Hussain, Bharadwaj Pindiprolu, and
Mohit Bhandari, “A Comparison of the Types of Screening Tool Administration Methods Used
for the Detection of Intimate Partner Violence: A Systematic Review and Meta-Analysis,” Trauma,
Violence, & Abuse, Vol. 16, No. 1, 2013, pp. 60–69.
Kilpatrick, Dean G., Heidi S. Resnick, Kenneth J. Ruggiero, Lauren M. Conoscenti, and Jenna
McCauley, Drug-Facilitated, Incapacitated, and Forcible Rape: A National Study, Charleston, S.C.:
Medical University of South Carolina, National Crime Victims Research & Treatment Center, 2007.
Kish, Leslie, Survey Sampling, Oxford, UK: Wiley, 1965.
Kohut, A., S. Keeter, C. Doherty, M. Dimock, and L. Christian, Assessing the Representativeness of
Public Opinion Surveys, Washington, D.C.: Pew Research Center, 2012.
Koss, M. P., “Detecting the Scope of Rape: A Review of Prevalence Research Methods,” Journal of
Interpersonal Violence, Vol. 8, No. 2, 1993, pp. 198–222.
Kreuter, Frauke, Stanley Presser, and Roger Tourangeau, “Social Desirability Bias in CATI, IVR,
and Web Surveys: The Effects of Mode and Question Sensitivity,” Public Opinion Quarterly, Vol. 72,
No. 5, 2008, pp. 847–865.
Lehnen, R., and W. Skogan, The National Crime Survey: Working Papers, Volume 2: Methodological
Studies, Washington, D.C.: U.S. Department of Justice, Bureau of Justice Statistics, 1984.
Little, Roderick J., and Donald B. Rubin, Statistical Analysis with Missing Data, 2nd ed., New York:
Wiley-Interscience, 2002.
Little, Roderick J., and Sonya Vartivarian, “Does Weighting for Nonresponse Increase the Variance
of Survey Means?” Survey Methodology, Vol. 31, No. 2, 2005, pp. 161–168.


McCabe, S. E., M. P. Couper, J. A. Cranford, and C. J. Boyd, “Comparison of Web and Mail
Surveys for Studying Secondary Consequences Associated with Substance Use: Evidence for
Minimal Mode Effects,” Addictive Behaviors, Vol. 31, 2006, pp. 162–168.
McCaffrey, D. F., G. Ridgeway, and A. R. Morral, “Propensity Score Estimation with Boosted
Regression for Evaluating Causal Effects in Observational Studies,” Psychological Methods, Vol. 9,
2004, pp. 403–425.
Morral, Andrew R., Kristie L. Gore, and Terry L. Schell, eds., Sexual Assault and Sexual Harassment
in the U.S. Military: Volume 1. Design of the 2014 RAND Military Workplace Study, Santa Monica,
Calif.: RAND Corporation, RR-870/1-OSD, 2014. As of March 2, 2015:
http://www.rand.org/pubs/research_reports/RR870z1.html
———, Sexual Assault and Sexual Harassment in the U.S. Military: Volume 2. Estimates for
Department of Defense Service Members from the 2014 RAND Military Workplace Study, Santa
Monica, Calif.: RAND Corporation, RR-870/2-OSD, 2015a. As of March 2, 2015:
http://www.rand.org/pubs/research_reports/RR870z2.html
———, Sexual Assault and Sexual Harassment in the U.S. Military: Annex to Volume 2. Tabular
Results from the 2014 RAND Military Workplace Study for Department of Defense Service Members,
Santa Monica, Calif.: RAND Corporation, RR-870/3-OSD, 2015b. As of March 2, 2015:
http://www.rand.org/pubs/research_reports/RR870z3.html
———, Sexual Assault and Sexual Harassment in the U.S. Military: Volume 3. Estimates for Coast
Guard Service Members from the 2014 RAND Military Workplace Study, Santa Monica, Calif.:
RAND Corporation, RR-870/4-OSD, 2015c. As of March 2, 2015:
http://www.rand.org/pubs/research_reports/RR870z4.html
———, Sexual Assault and Sexual Harassment in the U.S. Military: Annex to Volume 3. Tabular
Results from the 2014 RAND Military Workplace Study for Coast Guard Service Members, Santa
Monica, Calif.: RAND Corporation, RR-870/5-OSD, 2015d. As of March 2, 2015:
http://www.rand.org/pubs/research_reports/RR870z5.html
National Defense Research Institute, Sexual Assault and Sexual Harassment in the U.S. Military: Top-Line Estimates for Active-Duty Service Members from the 2014 RAND Military Workplace Study, Santa
Monica, Calif.: RAND Corporation, RR-870-OSD, 2014a. As of January 6, 2016:
http://www.rand.org/pubs/research_reports/RR870.html
———, Sexual Assault and Sexual Harassment in the U.S. Military: Top-Line Estimates for Active-Duty
Coast Guard Members from the 2014 RAND Military Workplace Study, Santa Monica, Calif.: RAND
Corporation, RR-944-USCG, 2014b. As of January 6, 2016:
http://www.rand.org/pubs/research_reports/RR944.html
National Research Council, Estimating the Incidence of Rape and Sexual Assault, C. Kruttschnitt,
W. D. Kalsbeek, and C. C. House, eds., Panel on Measuring Rape and Sexual Assault in Bureau of
Justice Statistics Household Surveys, Washington, D.C.: The National Academies Press, 2014.
Office of the Deputy Assistant Secretary of Defense (Military Community and Family Policy), 2013
Demographics: Profile of the Military Community, Washington, D.C.: Department of Defense, 2014.
As of January 6, 2016:
http://www.militaryonesource.mil/12038/MOS/Reports/2013-Demographics-Report.pdf
Office of Management and Budget, Standards and Guidelines for Statistical Surveys, Washington,
D.C., September 2006. As of October 3, 2014:
http://www.whitehouse.gov/sites/default/files/omb/inforeg/statpolicy/
standards_stat_surveys.pdf


Parks, K. A., A. M. Pardi, and C. M. Bradizza, “Collecting Data on Alcohol Use and Alcohol-Related Victimization: A Comparison of Telephone and Web-Based Survey Methods,” Journal of
Studies on Alcohol, Vol. 67, No. 2, 2006, pp. 318–323.
Ridgeway, Greg, “The State of Boosting,” in K. Berk and M. Pourahmadi, eds., Computing Science
and Statistics, Fairfax, Va.: Interface Foundation of North America, 1999, pp. 172–181.
———, Generalized Boosted Models: A Guide to the GBM Package, May 23, 2012. As of January 19,
2016:
http://gradientboostedmodels.googlecode.com/git/gbm/inst/doc/gbm.pdf
Ridgeway, Greg, and Daniel F. McCaffrey, “Comment: Demystifying Double Robustness: A
Comparison of Alternative Strategies for Estimating a Population Mean from Incomplete Data,”
Statistical Science, Vol. 22, No. 4, 2007, pp. 540–543.
Schafer, Joseph L., and John W. Graham, “Missing Data: Our View of the State of the Art,”
Psychological Methods, Vol. 7, No. 2, 2002, pp. 147–177.
Schenck, Lisa M., “Informing the Debate About Sexual Assault in the Military Services: Is the
Department of Defense Its Own Worst Enemy?” Ohio State Journal of Criminal Law, Vol. 11, 2014.
Stark, S., O. S. Chernyshenko, A. R. Lancaster, F. Drasgow, and L. F. Fitzgerald, “Toward
Standardized Measurement of Sexual Harassment: Shortening the SEQ-DoD Using Item Response
Theory,” Military Psychology, Vol. 14, No. 1, 2002.
Tourangeau, Roger, and Ting Yan, “Sensitive Questions in Surveys,” Psychological Bulletin, Vol. 133,
No. 5, 2007, pp. 859–883.
Turner, C. F., L. Ku, S. M. Rogers, L. D. Lindberg, J. H. Pleck, and F. L. Sonenstein, “Adolescent
Sexual Behavior, Drug Use, and Violence: Increased Reporting with Computer Survey Technology,”
Science, Vol. 280, No. 5365, 1998, pp. 867–873.
Under Secretary of Defense for Personnel and Readiness, Department of Defense Military Equal
Opportunity (MEO) Program, Department of Defense Directive 1350.2, August 18, 1995 (Certified
Current as of November 2003).
Weathers, F., B. Litz, D. Herman, J. Huska, and T. Keane, “The PTSD Checklist (PCL): Reliability,
Validity, and Diagnostic Utility,” paper presented at the Annual Convention of the International
Society for Traumatic Stress Studies, San Antonio, Tex., October 1993.
White, A. M., P. E. Simson, and P. J. Best, “Comparison Between the Effects of Ethanol
and Diazepam on Spatial Working Memory in the Rat,” Psychopharmacology, Vol. 133, 1997,
pp. 256–261.
White House Task Force to Protect Students from Sexual Assault, Climate Surveys: Useful Tools
to Help Colleges and Universities in Their Efforts to Reduce and Prevent Sexual Assault, Not Alone:
Together Against Sexual Assault, 2014. As of May 5, 2015:
https://www.notalone.gov/assets/ovw-climate-survey.pdf
Zajac, Kristyn, Kenneth J. Ruggiero, Daniel W. Smith, Benjamin E. Saunders, and Dean G.
Kilpatrick, “Adolescent Distress in Traumatic Stress Research: Data from the National Survey of
Adolescents-Replication,” Journal of Traumatic Stress, Vol. 24, No. 2, 2011, pp. 226–229. As of
May 5, 2015:
http://onlinelibrary.wiley.com/doi/10.1002/jts.20621/pdf
Zou, G., “A Modified Poisson Regression Approach to Prospective Studies with Binary Data,”
American Journal of Epidemiology, Vol. 159, No. 7, 2004, pp. 702–706.

In early 2014, the Department of Defense Sexual Assault Prevention and
Response Office asked the RAND National Defense Research Institute to conduct
an independent assessment of the rates of sexual assault, sexual harassment, and
gender discrimination in the military—an assessment last conducted in 2012 by
the Department of Defense using the Workplace and Gender Relations Survey
of Active Duty Members. The resulting RAND Military Workplace Study invited
close to 560,000 U.S. service members to participate in a survey fielded in August
and September of 2014. This volume presents the results of methodological
investigations into sources of potential bias in the survey estimates for active- and reserve-component service members. It includes evaluations of follow-up
studies of survey nonrespondents and the efficacy of sampling weights to correct
nonresponse bias, an assessment of total survey error using an administrative
records benchmark, estimates of potential under- and overcounting of service
members exposed to sexual assault, comparisons of events identified by
prior survey forms and the RAND forms, analysis of survey nonconsent and
breakoff, and evaluation of service member tolerance of the RAND forms. In
the final chapter, the report draws conclusions and offers recommendations for future
administrations of sexual assault and sexual harassment surveys in the military.

NATIONAL DEFENSE RESEARCH INSTITUTE
$49.50

www.rand.org

ISBN-10 0-8330-9279-0
ISBN-13 978-0-8330-9279-3

RR-870/6-OSD


