National Survey of
Youth in Custody
Report on the Development and Testing of the
Study Design
Authors:
David Cantor
Timothy Smith
Andrea Sedlak
John Hartge
Prepared for:
Bureau of Justice Statistics
810 Seventh Street, NW
Washington, DC 20531

Prepared by:
WESTAT
1650 Research Boulevard
Rockville, Maryland 20850
Acknowledgements
This project could not have been completed without the 7 states and 16 facilities that participated during
the pilot tests. Their flexibility and collaboration were essential to the success of this project. Their
willingness to participate in the study is greatly appreciated.
We would also like to thank the young people and their guardians who participated in this research. Their
insightful comments shaped much of the development and testing described in this report.
TABLE OF CONTENTS

Chapter                                                                     Page

1.   INTRODUCTION .......................................................      1

2.   DEVELOPMENT OF THE SURVEY METHODOLOGY ..............................      2
     2.1   Sample Design ................................................      2
           2.1.1   Defining the sample frame ............................      3
           2.1.2   Selecting Facilities .................................      5
           2.1.3   Number of Youth Sampled Per Facility and Estimated
                   Precision ............................................      6
           2.1.4   Minimum Sample Sizes to Publish Estimates ............      7
     2.2   Addressing Human Subject Issues ..............................      8
     2.3   Development of the Questionnaire .............................     14
           2.3.1   Rationale for the Questionnaire Design ...............     14
           2.3.2   Cognitive testing ....................................     15

3.   PILOT TESTING ......................................................     20
     3.1   Features of the two-phase implementation .....................     22
     3.2   Survey operations ............................................     22
           3.2.1   Consent ..............................................     22
           3.2.2   Assent and Response Rates ............................     25
           3.2.3   Mandatory reporting ..................................     28
           3.2.4   Counseling ...........................................     29
           3.2.5   Facility effects and burden ..........................     30
     3.3   Questionnaire administration .................................     31
           3.3.1   Confidentiality ......................................     31
           3.3.2   Question tailoring ...................................     32
           3.3.3   Administration time ..................................     33
           3.3.4   Youth reactions ......................................     34
     3.4   Data Analysis ................................................     35
           3.4.1   Prevalence rates .....................................     36
           3.4.2   Data quality analysis ................................     37

4.   CONCLUSIONS ........................................................     40
TABLE OF TABLES

Table                                                                       Page

1.   Proposed Sample Frame for the National Survey of Youth in Custody ..      4
2.   Whether or Not Estimate Will Be Statistically Different From 0
     By Size of Estimate and Sample Size ................................      7
3.   Sample Frame and Proposed Sample Sizes by Facility Size ............      7
4.   Questionnaire Content by Treatment Group ...........................     12
5.   Characteristics of Facilities and Youth Participating in Cognitive
     Interview Activities ...............................................     15
6.   Final Status of Parent/Guardian Permission .........................     16
7.   Characteristics of Facilities and Youth Participating in Pilot Test      20
8.   Parent/Guardian Consent Rates (Phase 2) ............................     23
9.   Response Rates (Phase 2) ...........................................     26
10.  Percentage Distribution of Sampled and Interviewed Youth, by Youth
     Characteristic (Phase 2) ...........................................     27
11.  Number of Youth Providing Inconsistent or Extreme Reports, by
     Type of Report .....................................................     39
NATIONAL SURVEY OF YOUTH IN CUSTODY
REPORT ON THE DEVELOPMENT AND TESTING OF THE STUDY DESIGN
1.  Introduction
On September 4, 2003, President George W. Bush signed into law the Prison Rape
Elimination Act of 2003 (PREA) (Public Law 108-79). The law was passed, in part, to overcome a
shortage of available research on the incidence and prevalence of sexual violence within correctional
facilities.
Due to the sensitive nature of violent victimization and potential reluctance to report sexual
assault, the Bureau of Justice Statistics (BJS) will collect multiple measures of the incidence and
prevalence of sexual assault. To implement the Act, BJS developed the National Prison Rape Statistics
Program, which includes five separate data collection efforts: the National Survey of Youth in Custody
(NSYC), the Survey on Sexual Violence, the National Inmate Survey, the Former Prisoner Survey, and a
medical surveillance project to track medical and behavioral indicators of sexual violence.
Through a cooperative agreement with BJS, Westat designed the survey methodology that
will be used to collect data for the NSYC. The NSYC will collect self-reports of sexual assault by youth
in juvenile facilities. The surveys will be administered using Audio Computer-Assisted Self-Interview
(ACASI) methodology. This involves youth responding to a computer questionnaire using a touchscreen, following audio instructions delivered via headphones.
To develop the methodology for the study, BJS and Westat completed a pilot project that
assessed the feasibility of conducting a study of this complexity and sensitivity. This involved two
activities. The first was to develop the survey design and procedures. This activity had to meet a number
of challenges, including: 1) designing a sample to provide facility-level estimates, 2) obtaining parental
consent for underage youth, 3) protecting the emotional well-being and physical safety of youth, 4) fully
informing youth about the conditions and consequences of participation, and 5) collecting quality
information at the level of detail required under PREA. The second set of activities was working with
administrators of juvenile facilities to develop procedures to address concerns with collecting data on
such a sensitive topic. These two activities were interrelated. Many of the challenges listed above (e.g.,
protecting the youth from harm, obtaining consent, collecting quality data) were naturally concerns for
administrators as well. Consequently, many of the survey design decisions were influenced by feedback
received from the administrators during the development process.
This report summarizes the activities associated with the development process. Section 2
focuses on the development of the basic survey methodology. Section 3 summarizes the results of the
two pilot tests that were used to test and refine the methodology. Conclusions are discussed in Section 4.
An appendix containing the PowerPoint presentation delivered at the PREA Workshop for Juveniles (Washington,
DC, August 28, 2007) is attached.
2.  Development of the Survey Methodology
The survey methodology includes all aspects of the procedures used to collect the data,
including the sample design, the approach to human subject concerns, and the development of the
questionnaire. This section summarizes the design process for each of these components.
2.1  Sample Design
An initial decision on the sample design was to restrict the population to youth who were
adjudicated. This excluded youth who were in a facility because they were awaiting a court hearing or
determination of placement. This restriction was made for practical reasons related to obtaining parental
consent for youth under 18 years old. Most of the youth in custody are under 18 years old. It was
expected that for the large majority of the youth, there would be a requirement to obtain consent from
parents or a guardian. Youth in residence awaiting adjudication (i.e., in detention) would likely
be out of the facility before consent could be obtained. Different methods to obtain consent
were considered before making this decision. The possibility of obtaining consent when the youth
entered the facility was considered. However, experience from other surveys (e.g., Survey of Youth in
Residential Placement) found that this was not an effective methodology. Parents are not typically with
the youth at the time they are admitted. Other arrangements were considered, including the use of an
advocate who could act in lieu of a parent or guardian. However, this was not seen as practical given the
large number of facilities that had to be sampled, as well as other human subject requirements that could
not be immediately addressed.
As will be discussed below, the study faced a number of human subject challenges, many of
which had never been addressed in a national survey of incarcerated youth on such a sensitive topic.
Given this, it was decided to restrict the study to adjudicated youth. If the study succeeded for this group,
and there was a desire to conduct a second survey, procedures to include non-adjudicated youth could be
considered by adding to the methods developed as part of the current effort.
2.1.1  Defining the sample frame
The PREA required that the sample meet two basic requirements:1
1. Generate reliable facility-level estimates. This is related to the objective of ranking
facilities according to their reported rates of sexual assault. Those facilities with the
highest and lowest rates would be identified from the ranking.
2. Draw at least one facility from each of the 50 states and the District of Columbia.
With these two goals in mind, two additional requirements were added to the decision
process:
3. Oversample females. This was considered because of the relatively small number of
girls in juvenile facilities and the desire to collect information on their exposure and
risks. Oversampling would allow generation of national estimates for this group.
4. Generate national estimates. While the primary goal of PREA was to generate facility-level estimates, the survey would be collecting data in all parts of the country. It was
important to take advantage of this by designing the sample so an estimate for the entire
nation would be produced from the survey.
The requirement to generate facility-level estimates poses two problems. Both relate to the
relatively small sizes of most juvenile facilities. Compared to adult correctional institutions, youth
facilities are relatively small. Many house fewer than 100 youth, and a large proportion house fewer than 50.
One issue related to size was setting a minimum sample size to produce estimates that are statistically
reliable. With the expectation that rates of reporting would be less than 10%, perhaps even less than 5%,
small samples would produce highly unreliable estimates unless the number of interviews was sufficiently
large. With response rates expected to be around 50%, primarily because of the requirement to obtain
1 The PREA had a number of other requirements (e.g., sample at least 10% of facilities). In this section we discuss those that were the most important when developing the final sample design.
parental consent, sample sizes were an issue. It was therefore important to maximize the number of large
facilities to meet the first objective of PREA.
A second issue was that small sample sizes could jeopardize the confidentiality of the
information. For example, with a rate of 5% in a facility and a sample size of 60 respondents, only
3 respondents would have reported a positive incident.
To maximize the number of facilities for which it would be possible to generate facility-level
estimates, the final sample plan included large facilities with certainty and smaller facilities were sampled
with probability proportional to size.
The universe was further restricted to state facilities. To collect data on other types of
institutions, however, the frame also included large non-state facilities (e.g., county,
municipal, and other) that housed adjudicated youth. The restriction to state facilities was made to
minimize the number of entities that would have to be part of the negotiation process. It was anticipated
that each facility would require extensive discussions and negotiations on the type of parental consent that
would be obtained, as well as how the interviewing would be supported (e.g., provision of counselors;
procedures to make mandatory reports). State facilities have a centralized point of contact to make many
of these arrangements. This limited the amount of effort to carry out this part of the survey, although
discussions would still have to be conducted with at least the 51 governmental units in the survey.
Including all state-run facilities also allowed the study to generate national estimates for a logical
universe. A large proportion of large facilities with adjudicated youth are state-run. As a practical matter,
therefore, a national sample of state facilities was not overly restrictive for a population of adjudicated
youth who reside in large facilities. The addition of the other large facilities to the frame provided a way
to cover other facilities that could be reported and meet the PREA mandate of generating facility-level
estimates.
The sample frame for the survey would be the 2005 Census of Juveniles in Residential
Placement (CJRP) conducted by the Office of Juvenile Justice and Delinquency Prevention. This is the
most complete frame that has the requisite information on individual youth within correctional facilities.
Table 1 provides the distribution of facilities on the sample frame by size and type of facility.
Table 1. Proposed Sample Frame for the National Survey of Youth in Custody

                          Facilities                       Adjudicated Youth
Size            State  County  Other  Total      State   County   Other    Total
less than 20      111       0      0    111      1,626        0       0    1,626
21 - 50           183       0      0    183      5,671        0       0    5,671
51 - 89            69       0      0     69      4,773        0       0    4,773
90 - 120           29       0      0     29      2,943        0       0    2,943
121 - 150          15      10      8     33      1,987    1,349   1,103    4,439
151 - 250          35      10     11     56      6,869    1,823   2,157   10,849
251 - 350          27       2      1     30      8,083      644     263    8,990
350+               13       0      6     19      5,975        0   3,433    9,408
Total             482      22     26    530     38,272    3,816   6,956   48,699

2.1.2  Selecting Facilities
The final sample design included all State facilities with 90 or more adjudicated youth with
certainty. This ensured that the largest facilities would be included and that a significant proportion of
the youth residing in these correctional facilities would be covered.
For the states without a certainty facility, one facility would be selected. These would be
selected with probability proportional to the number of adjudicated youth in the facility.
The remaining facilities would consist of those in states with relatively few adjudicated youth
and facilities in other states that were not eligible for selection in prior steps. These remaining facilities
would be grouped into strata without regard to state. The variables used to form the strata would be
percent female, percent Hispanic, type of facility (i.e., state or non-state), region, and facility size. Strata
may vary in size. For strata with relatively smaller numbers of adjudicated youth, one facility would be
selected with probability proportional to the number of adjudicated youth. For larger strata, more than one
facility would be selected, also with probability proportional to number of adjudicated youth.
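The two-part selection just described (certainty inclusion of large facilities, probability-proportional-to-size draws elsewhere) can be sketched as follows. This is a minimal illustration, not the production sampling code: the record layout, the 90-youth cutoff applied only to state facilities, and a single sequential PPS draw are simplifications of the stratified design described above.

```python
import random

def pps_draw(pool, k, rng):
    """Sequentially draw k distinct facilities with probability
    proportional to their count of adjudicated youth."""
    pool = list(pool)
    chosen = []
    for _ in range(k):
        total = sum(f["youth"] for f in pool)
        r = rng.uniform(0, total)
        cum = 0.0
        for f in pool:
            cum += f["youth"]
            if r <= cum:
                chosen.append(f)
                pool.remove(f)
                break
    return chosen

def select_facilities(frame, k_noncertainty, cutoff=90, seed=0):
    """Take large state facilities with certainty; sample the rest PPS."""
    rng = random.Random(seed)
    certainty = [f for f in frame
                 if f["state"] and f["youth"] >= cutoff]
    rest = [f for f in frame
            if not (f["state"] and f["youth"] >= cutoff)]
    return certainty + pps_draw(rest, k_noncertainty, rng)
```

Larger facilities are proportionally more likely to be drawn in the non-certainty step, mirroring the design's emphasis on maximizing the number of publishable facility-level estimates.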
2.1.3  Number of Youth Sampled Per Facility and Estimated Precision
Sampling youth within facilities would be with equal probability, except that most if not all
females would be included in the sample. Based on the calculations described below, the design specifies
that all youth would be selected who are in facilities with populations under 165. In larger facilities, 165
youth would be selected within each facility. The number of youth sampled in each facility is designed to
maximize the number of facilities for which estimates can be published. In deciding how many youth
would be sampled within a particular facility, it is assumed that the standard error for a facility estimate is
approximately:
SEE = sqrt( Deff * p * (1 - p) / n )

Where: SEE is the standard error of the estimate for the facility
p is the proportion of youth reporting nonconsensual contact
n is the number of completed interviews used in the estimate
Deff is the design effect
The sample of youth would be a simple random sample and would generally not result in a
large design effect. However, it is anticipated that nonresponse adjustment factors would not generally
be the same for all youth in a facility. This would increase the design effect.
In deciding on the sampling rate, consideration was given to drawing a large enough number
of youth to maximize the chances that the survey estimate would have a 95% confidence interval that
does not cover 0. Table 2 illustrates one set of calculations used in this design decision. The table shows
the combinations of sample size and rate of nonconsensual sexual contact for which the confidence
interval would not cover zero. These calculations assume a design effect of 1.5, a response rate of 60
percent, and 90 percent of the interviews would be used to generate estimates of nonconsensual contacts.2
Using these assumptions, samples of 111 would be adequately large if the estimated rate of
nonconsensual sexual contact is at least 10 percent. For a sample size of 56, the rate is 20 percent or
greater. For the largest sample size of 165, a rate of 7 percent or more would have a confidence interval
that does not cover 0.
2 For reasons discussed below, 10% of the sample was not administered the questions on sexual assault.
Table 2. Whether or Not Estimate Will Be Statistically Different From 0
         By Size of Estimate and Sample Size+

Sample Size   Rate = 5%   Rate = 7%   Rate = 10%   Rate = 15%   Rate = 20%
56            No          No          No           No           Yes
111           No          No          Yes          Yes          Yes
165           No          Yes         Yes          Yes          Yes

+ Assumes: 1) 95% confidence interval, 2) design effect of 1.5, 3) 60% response rate, and 4) 90% of the interviews will ask
about sexual assault
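Under the stated assumptions, the Yes/No entries in Table 2 can be reproduced with a short calculation (a sketch; the function name and the z-value of 1.96 for a 95% interval are illustrative):

```python
import math

def ci_excludes_zero(sample_size, rate, deff=1.5,
                     response_rate=0.60, assault_share=0.90, z=1.96):
    """True if the estimate minus z standard errors stays above 0.

    Expected number of interviews asked about sexual assault:
    sample_size * response_rate * assault_share.
    """
    n = sample_size * response_rate * assault_share
    se = math.sqrt(deff * rate * (1 - rate) / n)
    return rate - z * se > 0
```

For example, `ci_excludes_zero(165, 0.07)` is True while `ci_excludes_zero(111, 0.07)` is False, matching the pattern in Table 2.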
Using the design described above, Table 3 provides estimates of designated sample sizes by
facility size. The additional sample of female youth, resulting from selecting all females, is not included in
the table.
Table 3. Sample Frame and Proposed Sample Sizes by Facility Size

                      Sample Frame             Proposed Sample
Facility Size     Facilities     Youth      Facilities      Youth
10 - 20                  111     1,626              13        190
21 - 50                  183     5,671              40      1,141
51 - 89                   69     4,773              35      2,546
90 - 120                  29     2,943              29      2,943
121 - 150                 33     4,439              33      4,439
151 - 164                 11     1,712              11      1,712
165 - 250                 45     9,137              45      7,425
251 - 350                 30     8,990              30      4,950
350 or more               19     9,408              19      3,135
Total                    530    48,699             256     28,481

2.1.4  Minimum Sample Sizes to Publish Estimates
NSYC would make public estimates for individual facilities, except where confidentiality
may be breached. To protect confidentiality and to obscure the goal of collecting data on nonconsensual
contacts, a random 10 percent of the sample would be assigned a questionnaire on drug and alcohol use
rather than on sexual contact. Thus, no one other than the youth answering the questions would know
whether a given youth was asked to complete the questionnaire on sexual contact.
Although it would be impossible to determine individual survey responses, publication of
rates based on a small number of reports of nonconsensual sexual contact could significantly increase the
risk of identifying a respondent. For this reason, the sample is designed to be able to obtain estimates for
larger facilities, where there would be more youth interviewed and a greater chance of meeting the criteria
for publishing a facility-level estimate. For planning purposes, we have used a rule that at least
3 respondents must report nonconsensual contact before an estimate is published. A rule on the absolute
minimum number of interviews required to publish an estimate would be developed at a later time.
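The two safeguards in this subsection, the random 10 percent NSYC-A assignment and the minimum-positive-reports rule, can be sketched as follows (the function names and fixed seed are illustrative, and the interview-count minimum is left open, as in the text):

```python
import random

def assign_instruments(youth_ids, alt_share=0.10, seed=1):
    """Randomly assign about 10% of sampled youth the drug/alcohol
    (NSYC-A) instrument to mask who is asked about sexual contact."""
    rng = random.Random(seed)
    n_alt = round(alt_share * len(youth_ids))
    alt = set(rng.sample(youth_ids, n_alt))
    return {y: ("NSYC-A" if y in alt else "NSYC") for y in youth_ids}

def can_publish(positive_reports, min_positive=3):
    """Planning rule: publish a facility estimate only if at least
    3 respondents reported nonconsensual contact."""
    return positive_reports >= min_positive
```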
2.2  Addressing Human Subject Issues
Conducting a survey on sexual assault with incarcerated youth poses numerous issues related
to human subject concerns. One issue is providing informed consent. For youth under 18, this involves
getting consent from the youth’s guardian and getting the youth’s assent at the time of the interview. To
obtain consent, the Westat IRB approved three different methods:
1. The state or facility consents in loco parentis;
2. A parent or guardian consents in writing or verbally; or
3. A youth is considered an adult and self-consents.
The first method involves the state and facility acting as the youth’s guardian and providing
consent on an individual basis. This procedure does not require any additional effort by the administrator
or the project team. For the second method parents or guardians consent in writing or verbally. The
process of getting the consent could be carried out by either (or both) the facility and the survey team at
Westat. This method begins by mailing the consent form to the guardians and following up by
telephone with those who do not return the form. The Westat IRB approved a procedure that waived the
requirement to get consent of both parents because it was impractical, given that many youth do not live
with both parents. The Westat IRB also waived the requirement to get written documentation of consent.
This waiver was provided because the study was considered minimal risk. As a precaution, if consent
was verbally provided over the telephone, the conversation would be recorded whenever possible. These
recordings were reviewed to ensure that the consent was properly administered.
The third option was used for youth who were 18 years and older, as these individuals could
self-consent.
Given the time required to get consent from parents or guardians, it would be necessary to
draw the sample several weeks prior to arriving at the facility. Enough time would have to be allocated to
carry out the activities associated with getting consent (i.e., mailing to guardians and following up by
telephone) to ensure an adequate number of youth could be approached to participate in the survey. An
important question addressed in the pilot was whether it would be possible to contact enough guardians
and get enough consents to maintain a minimum response rate for the facility.
A second human subject challenge was protecting the youth from emotional harm due to
participation in the survey. The interview included a number of questions about victimization which
might traumatize the youth. The PREA legislation has a very explicit definition of sexual assault. This
led to designing questionnaire items that used specific, graphic language. In the preliminary pre-testing
of the questionnaire, youth indicated that it was better to use explicit language when referencing different
sexual activities. General language (e.g., “having sex”; “anal sex”) was not commonly understood by
young people. Feedback from youth indicated they were not overly sensitive to the explicit language. In
fact, youth thought the survey would not be taken seriously if the language was not direct and to the point.
Using euphemisms and/or vague language would send a message to youth that they were not being treated
with respect. This sentiment was echoed by several counselors and sexual abuse investigators who
reviewed the questionnaire.
In contrast to the youth, counselors, and sexual abuse investigators, administrators were
very concerned about exposing youth to the language. Their primary concern was that youth would be
traumatized. This feedback was obtained through several different channels. One was informal
conversations during outreach activities with selected states (e.g., when asking for participation in the
pilot study). Formal meetings were also held with groups of administrators. One was a meeting of
administrators held in December 2005 that asked for feedback on the initial design of the study. A
series of follow-up meetings with selected administrators underscored this point. One of these meetings
was at the American Correctional Association meetings in May 2006. Two other meetings were held in
New Jersey with representatives of the Council of Juvenile Correctional Administrators.
Based on this feedback, the questionnaire was redesigned to minimize exposure to the most
explicit cues, especially for the youngest children participating in the survey. This was done in several
different ways:
1. The initial screening items were consolidated. The initial draft of the questionnaire
exposed youth to two sets of screening items -- one for victimization by youth and one
for victimization by staff. The revision consolidated these items so youth would only be
asked these initial screening items once, covering both youth and staff.
2. A less explicit version of the questionnaire was developed for youth less than 15 years
old. This version did not refer to specific body parts in the screening items or explicit
acts involving sexual activity. If a youth answered affirmatively to a screening item,
they were followed up with more explicit questions if the incident was considered a
sexual assault.3 Since it was expected that the vast majority of youth would not report a
sexual assault, very few of the youngest respondents would be exposed to the explicit
language.
3. The interview was structured to minimize the number of times youth were asked about
different types of sexual activities. Once a youth reported a particular type of activity,
follow-up questions (e.g., if it was done as a result of force) asked about all activities in
a single question, rather than asking follow-ups for each of the activities. This greatly
reduced the number of follow-up questions while maintaining the desired level of detail.
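The consolidation in item 3 can be illustrated with a small sketch (the activity and detail names are hypothetical; the point is that follow-ups scale with the number of details, not with details times activities):

```python
def followup_questions(reported_activities,
                       details=("force", "location", "time")):
    """Ask one combined follow-up per detail, covering all reported
    activities at once, instead of one follow-up per activity."""
    if not reported_activities:
        return []
    combined = ", ".join(sorted(reported_activities))
    return [f"About {combined}: what was the {d}?" for d in details]
```

Two reported activities thus yield three follow-ups rather than six, while retaining the same level of detail.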
A third human subject challenge concerned adhering to state laws requiring survey
interviewers to report child abuse and neglect. This requirement directly conflicted with the goal of
keeping the interview anonymous. Without a promise of anonymity, it was generally believed that youth
would not be candid in their reporting of victimization. A separate concern was that reporting
child abuse and neglect was in direct conflict with federal statutes under BJS authorizing legislation, which
prohibited revealing the identity of survey respondents.
These two concerns were addressed separately. First, a procedure was developed that had
interviewers reporting any abuse or neglect that youth verbally expressed to them. While youth were not
directly asked by interviewers about sexual abuse or neglect, youth might self-disclose during the assent
process or at any other time the youth had an opportunity to speak with the interviewer. If this occurred,
interviewers were instructed to report the incident to the State authorities. On the other hand, once a
report of sexual abuse or neglect was entered into the computer, it was anonymous and could not be
linked to the respondent’s name. The survey was set up so interviewers were not privy to the individual
3 If the incident was with a youth, it was considered an assault if it was reported as occurring by force. All incidents reported as occurring with staff were considered an assault, regardless of whether force was reported.
responses. Interviewers were trained to avoid being in a position to see what the youth was reporting.
The youth was informed at the time of the assent that anything they report directly to the interviewer
would be reported. However, anything that was entered into the computer would remain anonymous.
With respect to the conflict with the BJS authority, the PREA legislation was amended to
include a clause that allowed interviewers to report child abuse and neglect, as required by the individual
state statutes.
One other precaution was taken to preserve the confidentiality of the interview. Youth were
randomly assigned to one of two questionnaire treatment groups: NSYC or NSYC-A. The NSYC
contains questions on sexual assault, while the NSYC-A substituted questions on alcohol and drug use.
The random assignment to a questionnaire treatment group serves to "mask" which questions an individual
is asked, thereby helping to protect the confidentiality of the interview. All youth are asked questions on
their background and perceptions of life in the facility. Youth assigned to the NSYC questionnaire group
are asked questions on sexual activity within the facility. Those assigned to the NSYC-A questionnaire
group are not asked about sexual activity; they are presented questions on drug and alcohol use prior to
admission, treatment received before and since admission, and family and peer background (Table 4).
Table 4. Questionnaire Content by Treatment Group

Survey content                                                     NSYC        NSYC-A
Background (e.g., reason for admission, education,
  race/ethnicity, orientation)                                     YES         YES
Facility perceptions (e.g., perceptions of staff,
  environment safety)                                              YES         YES
Sexual activity within facility (e.g., with other youth,
  with staff, coercion)                                            YES         NO
Detail on sexual assault by other youth (e.g., type of
  coercion, location and time of assault, reports of incidents)    YES         NO
Detail on sexual assault by staff (e.g., type of coercion,
  location and time of assault, reports of incidents)              YES         NO
Drug use prior to admission (e.g., lifetime and 30 day use
  by type of drug, abuse, dependency)                              If needed   YES
Alcohol use prior to admission (e.g., lifetime and 30 day
  use, abuse, dependency)                                          If needed   YES
Treatment (e.g., before and during admission)                      If needed   YES
Family and peer background (e.g., household composition,
  familial drug/alcohol abuse)                                     If needed   YES
In conjunction with the NSYC-A, an additional precaution was instituted for youth who
would otherwise complete the interview very quickly (by responding "No" to all of the sexual activity screener
questions). This precaution used the NSYC-A questions to standardize the time each respondent spends answering
the survey. No matter how a youth assigned to the NSYC group answers the screening questions, the
computer application was programmed to continue to present questions until a 30-minute threshold was
reached. This was achieved by “shifting” from the NSYC questionnaire to portions of the NSYC-A
questionnaire. Therefore, the scope of the questions posed to youth assigned to the NSYC group was
time-dependent and unknown to anyone but the respondents.
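The 30-minute "shifting" logic can be sketched as a small decision function (the section names, minute counter, and function name are illustrative, not the actual ACASI application logic):

```python
NSYC_A_SECTIONS = ["drug_use", "alcohol_use", "treatment", "family_peer"]

def next_module(group, screened_positive, elapsed_minutes,
                filler_index=0, threshold=30):
    """After the screeners, decide what the application presents next."""
    if group == "NSYC" and screened_positive:
        return "assault_detail"  # full NSYC follow-up sections
    if elapsed_minutes < threshold and filler_index < len(NSYC_A_SECTIONS):
        # Shift to NSYC-A content until the 30-minute threshold is reached.
        return NSYC_A_SECTIONS[filler_index]
    return "end"
```

A youth who screens out of the NSYC assault items keeps receiving NSYC-A sections until the threshold passes, so interview length reveals nothing about the answers given.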
A fourth challenge confronting the study was providing support to youth who became upset
from the interview. This was done by arranging for youth to have access to either a counselor within the
facility or to someone who worked outside the facility. Youth were informed of the availability of
counselors at the beginning and during the interview (through computer screens that reminded them of
this opportunity). Youth were offered the opportunity to speak with counselors inside the facility if they
were more comfortable talking with someone with whom they had developed a relationship. If, however,
they felt uncomfortable talking to someone inside the facility, the youth was offered the option of talking
to someone who did not work in the facility. Interviewers were trained to look for any signs that youth
may need someone to talk to and to offer these options whenever they felt the youth needed assistance.
As described above, a number of steps were taken to protect the youth. These
measures created their own challenge: fully informing the youth at the time of obtaining assent to conduct
the interview. Critical elements of the assent included:
Name of the project sponsor and project goals;
Voluntary nature of participation, and the options to skip any questions or stop the
interview at any time without repercussions;
Random assignment to one of two survey instruments;
Explanation of the ACASI survey administration methodology;
Confidentiality of survey responses and mandatory reporting of any verbal indications of
abuse/neglect; and
Availability of counseling services.
The number and complexity of these conditions (e.g., when reports would be filed with the state; what questions would be administered) made it difficult to effectively inform youth about the conditions for participating in the survey.
To develop an assent that communicated these conditions, several rounds of pretesting were completed with the assent form to simplify the language. To make sure youth understood the conditions, the interviewer used a scripted version of the assent form that included questions to assess the youth’s comprehension of the information. If a youth answered any of the questions incorrectly, the interviewer read the relevant text again and paraphrased the information to help the youth understand. A youth who failed to understand a question a second time was not asked to complete the survey.
2.3 Development of the Questionnaire
In the first section, the rationale for the design of the questionnaire is briefly described. The
second section reviews the testing activities that were used to refine the instrument prior to the pilot tests.
2.3.1 Rationale for the Questionnaire Design
The content and structure of the questionnaire reflect several different requirements. One requirement was to use direct and explicit language when asking about sexual assault. The sexual assault portion of the questionnaire begins with a series of “screener” items that describe particular activities. If the respondent reports engaging in any of these activities, follow-up questions are asked about relevant details, including whom it occurred with and whether force was involved. This strategy was based on the desire to provide respondents with a concrete definition of the target activities. It was partly modeled on recent victimization surveys, including the National Crime Victimization Survey4 and the National Violence Against Women Survey5, which have found this strategy to produce better measures.

A related requirement was to collect information that identified acts defined as an assault under PREA. The act was quite explicit in terms of what qualified as a sexual assault (see section 10 of PREA). This requirement determined the types of acts that were ultimately included on the questionnaire.
A competing requirement was the need to avoid language that might be too harsh or graphic for young children. As noted in the discussion above (see section 2.2), administrators contacted during the development process were concerned that the use of graphic language would trigger emotional reactions by the youth. The actions taken to respond to these requirements are described in Section 2.2 above. They included the development of a separate instrument for youth under 15 years old and combining questions to reduce the number of questions to which respondents would be exposed.

4 Kindermann, C., Lynch, J.P., and Cantor, D. (1997). The effects of the redesign on victimization estimates. NCJ 164381. Washington, DC: U.S. Department of Justice, Bureau of Justice Statistics.

5 Tjaden, P., and Thoennes, N. (1998). Stalking in America: Findings from the National Violence Against Women Survey. Research in Brief, NCJ 169592. Washington, DC: U.S. Department of Justice, National Institute of Justice, and U.S. Department of Health and Human Services, Centers for Disease Control and Prevention.
A final requirement of the design was to collect details about sexual assaults that would provide information on the circumstances surrounding their occurrence. To meet this requirement, separate sections on the details of the events were included on the questionnaire. In addition, a series of questions about the facility climate were placed at the beginning of the questionnaire.
2.3.2 Cognitive testing
Cognitive testing was conducted to refine the survey assent process and survey
questionnaire. Testing of the assent process assessed the youths’ ability to comprehend the study goals
and procedures and their rights as research participants. Testing of the questionnaire gauged the reaction
of the youth to the language of the questionnaire and identified survey terminology that might cause
difficulty for respondents. Testing occurred in the fall of 2006. Youth in four juvenile correctional
facilities participated in the interviews: 14 males from two all-male facilities and eight females from two
all-female facilities. Table 5 provides information on the facilities and youth who participated in the
testing.
Table 5. Characteristics of Facilities and Youth Participating in Cognitive Interview Activities

Facility   Size     Gender   Permission   Number of        Number of youth    Number of completed
                             type         eligible youth   with permission    interviews
1          Medium   Male     ILP                 8                8                   8
2          Small    Female   P/G                 6                2                   2
3          Small    Male     P/G                 9                8                   6
4          Small    Female   ILP                 6                6                   6
Total                                           29               24                  22

Permission type: ILP=in loco parentis; P/G=parent/guardian consent
Staff from each facility identified eligible youth prior to the visit by Westat. Facility 1 and Facility 4 used in loco parentis authority to provide permission for the selected youth to participate; staff from the other two facilities contacted parents/guardians to obtain permission for their children to be interviewed.6 These two facilities initially mailed permission forms to the parents/guardians and asked that they be returned to the facility. This strategy had limited success and required substantial follow-up efforts by facility staff to get written consent. Table 6 shows the final permission status at each facility.
Table 6. Final Status of Parent/Guardian Permission

                                        Permission status
Facility   Number of identified   Granted   Refused   Nonresponse
           minors
1                  8                  8        0          n/a
2                  6                  2        1           3
3                  9                  8        1           0
4                  6                  6        0           0
Total                                24        2           3

Note: Facility 1 provided consent in loco parentis; therefore there could be no nonresponse.
Cooperation among youth selected for cognitive interviews was very high; 22 of the 24
youth asked to participate agreed.
Assent Form
The assent form that was initially tested described the purpose of the study, the burden and
activities required during participation, the topics that would be addressed, and the voluntary nature of
participation. The text also described how the data would be used and that their responses would remain
confidential. Due to state mandatory reporting requirements, the text also indicated that verbal statements
to the study staff about abuse or neglect would have to be reported.
Key findings from the testing of the assent procedure indicated the following:

-  All the participants understood that they would have the right to refuse to take part in the survey.
-  Most youth understood that they would record their answers on a laptop computer rather than report them to the interviewer.
-  Many participants did not understand the description of the facility-level reports (that would be produced in the National Survey).
-  Many participants did not understand that statements they made to the interviewer suggesting abuse or neglect would be reported to state or local authorities, while answers that they recorded on the laptop would be kept confidential.
-  Most youth did not understand that respondents would be randomly assigned to answer questions either about sexual experiences or alcohol/drug use.
-  Youth did not pay attention as the assent text was read. About halfway through the delivery, many participants appeared to lose interest (e.g., staring out of the windows or at the floor).

6 One participant was 18 years old and therefore able to self-consent.
These findings led to numerous changes to the form. Part of the reason youth were not paying attention was that they were required to read along on the assent form; interviewers noted that youth had a hard time following the text. As a result, the format of the text was switched from a series of relatively dense paragraphs to a series of bulleted statements that the interviewer read.
In terms of the actual text, the changes included the following.

-  Facility-level reports: This text was significantly shortened in order to simplify the discussion. In the original version, nine sentences were used to describe how “all of the answers from this place will be combined into one report.” The text went on to explain that staff and residents would be able to read the report and might suspect they knew how individuals answered the survey questions.

   In the revised version, this text was replaced by three sentences. The text still stated that a single report would be made about the facility, but there is no longer any discussion about potential readers of the reports or about the possibility that they might think they could identify respondents.

-  Mandatory reporting: The original assent text indicated that information recorded on the computer would remain anonymous, but any verbal statement made to the study staff suggesting abuse or neglect would be reported. To highlight this difference in the final version, bulleted text formatting was used to draw attention to the different ways the information would be treated. In addition, a summary statement was added, again to highlight the distinction between information recorded on the computer and that conveyed to the study staff.

-  Questionnaire topics: The original version explained that the computer would decide which questions would be asked, including whether respondents (“some people”) would be asked questions about sexual contact or drug and alcohol use. No other survey topics were mentioned.

   In the final version, the text refers to “you” (instead of “some people”) and mentions other survey topics, including questions about the staff, other youth, and health services.
The result of these changes was a shortened assent form that presented the conditions in a series of bulleted statements. In addition, comprehension questions were inserted after each section of the assent. In the final design, the interviewer administered these items to the respondent. If the respondent answered incorrectly, the interviewer would re-explain that portion of the assent. If, after the second explanation, the respondent still did not understand that part of the assent, the interviewer would terminate the interview.
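The comprehension-check flow described above can be sketched in code (an illustrative sketch with invented function names, not the actual field script, which was administered verbally):

```python
def run_assent(sections, read_text, check_understanding, paraphrase):
    """Administer the assent: read each bulleted section, ask its
    comprehension question, re-explain once on a wrong answer, and
    terminate on a second wrong answer."""
    for section in sections:
        read_text(section)
        if check_understanding(section):
            continue
        paraphrase(section)  # second, reworded explanation
        if not check_understanding(section):
            return False     # respondent not interviewed
    return True              # assent obtained; interview proceeds
```

Passing the explanation and question steps in as callables mirrors the interviewer-driven design: the logic only encodes the two-strike rule.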
Questionnaire
For the testing of the survey questions, youth were instructed not to answer the questions, but only to comment on their meaning and to answer follow-up questions posed by the interviewer. This avoided situations in which the interviewer would have to report an event to state authorities under the mandatory reporting laws. Given the personal nature of the questions, it was also felt that respondents might provide more instructive feedback if they were not concerned with revealing such information directly to the interviewer.

One finding from these interviews was that youth were very receptive to the study and the questionnaire. They did not think that youth would be upset by the language. They also thought that the use of more direct, “clinical” language was the best approach. Findings about specific questions or question terminology are described below.
-  Definition of “resident.” Generally, youth interpreted “residents” as all youth who live at the facility. However, one youth was not familiar with the term, and several also included staff who lived at the facility.

-  Definition of “staff.”
   o  Most youth understood that core members of the staff would be included: counselors, guards, etc. They seemed to consider those in positions of direct authority over youth when deciding whether or not to define individuals as “staff.”
   o  Youth applied differing strategies when thinking of other types of staff (i.e., those not in positions of direct authority over youth), such as adults who provided services off-site (e.g., teachers at public schools and medical providers at area clinics) and facility support staff (e.g., cafeteria staff, maintenance workers, and drivers).

-  Definition of “private parts” and “sexual things/contact.” Definitions of “sexual things” (younger respondent version), “sexual contact” (older respondent version), and “private parts” were presented in each version of the questionnaire.
   o  The definition of “private parts” (i.e., “areas covered by a bathing suit”) was useful. Several female participants said they would not have considered breasts as private parts without that definition.
   o  Participants generally shared a common understanding of the term “sexual things/contact.” They said that the terms would include having vaginal intercourse with or touching other young men or young women at the facility, having oral sex, making sexual comments, or looking at someone in a “sexual way.”

-  Definition/Description of sexual activities. All versions of the questionnaire used similar terminology to define or describe specific sexual activities (e.g., “oral sex”). The version of the questionnaire for older respondents also used slang terms for some types of sexual contact (e.g., blowjob; hand job). Generally, participants thought the slang detracted from the seriousness of the questionnaire and noted that youth use these words in different ways. Some of the males said that using terms such as “oral sex” and describing the contact (e.g., “rubbing another person’s penis with their hand”) would better convey the seriousness of the study.

-  Questions on perceived sexual activities. Participants were presented with three questions about whether they thought other youths had engaged in sexual activities with youth and with staff at the facility. Participants expressed doubt that youth would be able to answer this type of question.

-  Wording of questions on coercion. Participants were shown three questions asking about various types of coercion: physical force or threat of physical force; in return for money, favors, protection, or other special treatment; and pressure “in another way.” Participants had a clear understanding of what should be considered physical force or threat of physical force, but they had difficulty distinguishing between “pressure” and the other types of coercion.
In response to these findings, changes were made to terms used throughout the questionnaire; examples were added to help define terms used less frequently in the questionnaire. Changes were also made to simplify the question wording. For example:

-  The word “residents” was replaced with the word “youth.”
-  The reference period was revised from “Since you arrived at this place” to “Since you got here.”
-  Several phrases were merged to simplify the coercion wording. The question phrases “because of physical force or threat of physical force” and “because you were forced in some other way” were combined into “because you were forced to do it.”
-  Younger version, sexual intercourse: The wording of the question on sexual intercourse was revised from vague phrasing (“…put any part of your body inside anyone else’s body”) to a phrase that more explicitly involved private parts (“have you put any part of your body inside someone else’s private parts”).
One change to the content resulting from the cognitive testing was the elimination of questions about youth perceptions of sexual activity among other residents at the facility. The original questionnaire included questions addressing youth perceptions of youth-with-youth and youth-with-staff contacts. The cognitive interview findings indicated that respondents could not provide useful information on these topics; the questions were therefore deleted.
3. PILOT TESTING
Pilot testing of the NSYC survey methodology occurred in two phases between fall 2006
and spring 2007. Twelve juvenile correctional facilities, located in five states, participated in the pilot
tests. Overall, 741 youth completed interviews. Characteristics of the facilities and youth participating in
the pilot test are presented in Table 7.
Table 7. Characteristics of Facilities and Youth Participating in Pilot Test

Phase and                         %       Permission   Number of   Number of   Number of    Number of
facility    Size     Gender     Female    type         eligible    sampled     youth with   completed
                                                       youth       youth       permission   interviews
Phase 1
  1         Medium   Female      100%     ILP             123          65          65           39
  2         Medium   Male          0%     PGC             127         127          48           37
  3         Small    Male          0%     PGC              57          57          31           20
Phase 2
  1         Medium   Male          0%     PGC             145         145         101           83
  2         Small    Female      100%     PGC              54          53          21           21
  3         Small    Male          0%     PGC              78          78          41           39
  4         Large    Male          0%     PGC             223         223          94           79
  5         Medium   Male          0%     PGC             140         140          82           73
  6         Small    Male          0%     PGC              91          91          52           48
  7         Small    Female      100%     PGC              49          49          29           28
  8         Large    Male          0%     ILP             488         165         165          158
  9         Large    Co-Gender     17%    ILP             369         126         126          116
Total                                                   1,944       1,319         855          741

Size: Small=50-99 youth; Medium=100-199 youth; Large=200 or more youth
Permission type: ILP=in loco parentis; PGC=parent/guardian consent
The pilot test focused on three broad areas of methodology and addressed several
research questions within each area.
Survey operations:

o  Can an acceptable level of consent to ask youth to participate in the study be achieved?
o  Can the conditions of the study be explained to youth so they can give informed consent/assent?
o  Can the study simultaneously comply with state and local mandatory reporting requirements and protect youth confidentiality?
o  Can youth be provided adequate counseling services in the event they become upset as a result of participating in the survey?
o  How will the study affect facility staff and their operations?

Questionnaire administration:

o  Can the ACASI methodology protect the confidentiality of youth as they complete the survey?
o  Can the ACASI methodology be used to tailor the questionnaire based on age and experience of the respondents?
o  Can the ACASI methodology successfully govern the time youth spend answering the survey questions?
o  How will youth react to the survey experience?

Prevalence rates and data quality:

o  What rates of nonconsensual sexual contact will youth report?
o  How do the rates vary by facility?
o  What is the quality of the survey responses?
The remainder of this section focuses on the implementation of the pilot tests and findings
related to each of the questions listed above.
3.1 Features of the two-phase implementation
The NSYC pilot testing was conducted in two phases. Phase 1 was designed as a small-scale evaluation of the survey procedures. The primary goal was to investigate youths’ reactions to the survey, especially their experiences answering the sensitive questions and using the ACASI methodology. Phase 1 also provided an opportunity to consider alternative logistical procedures to obtain parent/guardian consent and conduct the interviews. Senior project staff from Westat’s home office traveled to the sites to meet with facility staff and interview about 100 youth.
Phase 2 was designed as a more rigorous test of sampling and field data collection
procedures. Youth rosters were provided and a random sample was selected from each facility. Nine
professional interviewers participated in a 4-day training session. Data were collected from about 650
youth during this phase of the pilot test.
3.2 Survey operations
Project staff obtained approval from Westat’s Institutional Review Board (IRB) and from state-appointed liaisons for specific procedures to obtain consent to interview youth and assent from the youth themselves. Procedures were also developed to ensure the confidentiality of the survey data and to comply with state and local mandates for reporting abuse and neglect. Arrangements were also made to provide youth with counseling services in the event they requested such support.
3.2.1 Consent
In accordance with federal regulations, permission to interview a sampled minor was required from the parent/guardian, the state, or the facility. Project staff and staff from nine facilities worked together to obtain parent/guardian consent to interview youth. The state gave permission to interview youth (i.e., in loco parentis) at three sites participating in the pilot test.
Obtaining consent from the state or facility in loco parentis was approved by Westat’s IRB based on the safeguards in place to protect the youth as research participants. These safeguards included measures such as the random assignment of youth to different questionnaires, using ACASI rather than asking youth to report experiences to an interviewer, and ensuring that youth had access to counseling if they requested or demonstrated the need for services (see section 2.2).
When parent/guardian consent was required, a packet of study materials was mailed to the household. The packet contained a cover letter highlighting the study goals and procedures, a study brochure describing this information in greater detail, a consent form, and a return envelope. At some facilities, telephone and face-to-face contacts were attempted when parents/guardians did not return a completed form.7

Westat conducted the consent operations if the state/facility provided contact information (i.e., the name, address, and telephone number of the parent/guardian, and the name of the youth). Otherwise, the facility was responsible for obtaining completed forms. Additional logistical arrangements, such as where completed forms would be mailed (i.e., to the facility or to Westat), were negotiated with the states/facilities.

Full-scale efforts to obtain consent for each sampled youth were used only in Phase 2 of the pilot test. Table 8 shows the outcome of the efforts.
Table 8. Parent/Guardian Consent Rates (Phase 2)

                                                        Percent of      Percent of
Outcome                                                 sampled youth   eligible youth
Youth discharged during contact period                       16.5            NA
State ward (no contact attempted)                             1.0             1.2
Parent/Guardian consented                                    31.9            38.3
Parent/Guardian refused                                       7.0             8.4
Forthcoming consent indicated (not received)                  1.6             1.9
Other consent nonresponse                                    42.0            50.3
  Late roster entry/No time to contact                       12.6            15.1
  Undeliverable mail                                          3.9             4.6
  No telephone number provided                                4.2             5.0
  Nonworking telephone number                                 6.7             8.1
  Wrong telephone number                                      2.9             3.4
  Answering machine                                           1.9             2.2
  Repeated callbacks without reaching parent/guardian         3.4             4.1
  Facility efforts, no details provided                       4.2             5.0
  Other                                                       2.3             2.7
Total                                                       100.0           100.0

NA = not applicable; efforts were not made or were discontinued once a youth was discharged from a facility.
7 The consent form included spaces for parents/guardians to indicate whether they consented or refused. Therefore, a completed form was expected from every household.
Key findings from these efforts include the following.

-  A significant percentage of youth (16%) who were initially selected to participate had been discharged by the time of the field visit. (These youth became ineligible.)
-  It was difficult to obtain consent from parents/guardians. Overall, 38 percent of the parents/guardians of eligible youth agreed to allow contact with their children; 8 percent refused.
-  Attempted contact with 50 percent of the parents/guardians of eligible youth resulted in consent nonresponse.
   o  Fifteen percent of the nonresponse was due to the late arrival of the youth at the facility. With less than two weeks between arrival and the field visit, there was insufficient time to obtain completed forms for these youth.
   o  Twenty-one percent of the nonresponse was due to the lack of a current address or telephone number for the parent/guardian.
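The two percentage bases in Table 8 are related by simple rescaling: discharged youth (16.5 percent of the sample) drop out of the eligible base, so each percent-of-eligible figure is the percent-of-sampled figure divided by the remaining share. A quick verification sketch, using values from Table 8 (small differences reflect rounding in the published column):

```python
# Percent-of-sampled values from Table 8
pct_sampled = {"consented": 31.9, "refused": 7.0, "late_roster": 12.6}
discharged = 16.5  # percent of sampled youth who became ineligible
eligible_share = (100.0 - discharged) / 100.0  # 0.835 of the sample stayed eligible

# Rescale to the percent-of-eligible base
pct_eligible = {k: v / eligible_share for k, v in pct_sampled.items()}
# Table 8 publishes: consented 38.3, refused 8.4, late_roster 15.1
```

The close agreement confirms that the eligible base excludes only the discharged youth (state wards remain in the eligible base).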
Both phases of the pilot tests yielded a number of lessons related to gaining consent. First, it was clear that some of the problems in gaining consent stemmed from difficulties working with the facility or state liaison. In some cases, the facility liaison was a relatively high-level administrator who did not have the time to conduct these activities. As a result, the consent process was not as successful as it could have been. For example, in several facilities the facility liaison did not regularly update the roster information during the contacting period. As a result, a large number of youth were added to the sample just prior to the facility visit. This partly explains the relatively large percentage of youth who did not have consent because they were a “late roster entry” (15%; see Table 8 above). Many of these youth had entered the facility a number of weeks prior to the facility visit, but the roster had not been updated until it was too late to get consent. This lesson emphasized the importance of encouraging the state to appoint a facility liaison who had sufficient time to work with the study to draw the sample and support the consent operations.
A related lesson was to make sure that project staff asked about competing activities when scheduling visits with the facility. In one instance, a visit was scheduled during a time when the facility was undergoing its annual audit. In another instance, a facility visit was scheduled when many of the youth were to be temporarily placed in another facility. Given this experience, the procedure for scheduling visits was revised to have project staff specifically probe the facility representative about possible activities that might interfere with the visit.
Finally, this experience led to a redefinition of the population universe. The universe for the pilot study was defined as youth who were residents at the facility at the time of the visit. This gives everyone at the facility at the time of the interviews a probability of selection, and it can be managed by finalizing the sample at the time the interviewer visits the facility to conduct the interview. However, the results of the pilot study indicated that the ability to obtain consent for individuals who were recent entrants to the institution (e.g., within the last 4 weeks) was quite low; it took too long to obtain consent for a significant portion of these individuals. As a practical matter, therefore, the samples in facilities that required parent/guardian consent severely underrepresented youth who were new entrants to the facility.
Given this result, it was decided to define the population universe as all individuals who were in the facility as of four weeks prior to the time of the interview. This change had two advantages. First, it allowed time for the study to obtain an updated roster prior to the interviewers’ visit, which simplified the task the interviewers had to carry out once they arrived: any changes at the time of the visit would consist only of persons leaving the facility, and there would be no need to add anyone to the roster. Second, it reduced the burden on the facility by eliminating the need to obtain consent for the late entrants.
The disadvantage of this approach was that it missed a small portion of the population that
was at risk of being sexually assaulted (i.e., those residents who entered within four weeks of the visit).
However, the results of the pilot test indicated that this was a relatively small proportion of the
population. The survey instrument included questions on when the first victimization occurred relative to
entrance into the facility. Tabulation of these items would provide a method to investigate whether this
was true.
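The revised universe rule can be expressed as a simple membership test. This is an illustrative sketch under an assumed reading of the rule (present in the facility on the cutoff date); field procedures, not code, implemented the actual definition.

```python
from datetime import date, timedelta

def in_universe(entry_date, exit_date, visit_date, window_weeks=4):
    """A youth is in scope if present in the facility as of the cutoff
    (four weeks before the visit): entered on or before the cutoff and
    not yet discharged by then. exit_date is None while still resident."""
    cutoff = visit_date - timedelta(weeks=window_weeks)
    entered_by_cutoff = entry_date <= cutoff
    present_at_cutoff = exit_date is None or exit_date > cutoff
    return entered_by_cutoff and present_at_cutoff
```

Under this rule, a youth who entered within four weeks of the visit is out of scope, which is exactly the small at-risk group the text notes is missed.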
3.2.2 Assent and Response Rates
Building on the lessons learned from the cognitive testing of the assent procedures, an
interactive process was used to inform the youth about the study goals and procedures. Key elements
needed to make an informed decision were conveyed in a script that field staff read to the youth: survey
topics, use of ACASI, voluntary participation, confidentiality (with the exception of verbal reports of
abuse or harm), and the availability of counseling services.
The script included five questions to assess youth comprehension of the key elements. For
example, after the field staff read the text about the voluntary nature of participation, the youth were
asked if they thought they “had to do the survey” or if it was their choice. If the youth erred in response
to this (or any of the five questions), the field staff would review the text with the youth and paraphrase
the text to convey its meaning. During both phases of the pilot test, one youth failed to comprehend a key
element at the end of this procedure; he was not interviewed.
Because Phase 1 did not focus on sampling and recruitment procedures, special attention
was paid to these aspects of the methodology during Phase 2. Table 9 shows the percentage of sampled
youth that agreed to participate and the percentage of youth that refused or did not participate for some
other reason.
Table 9. Response Rates (Phase 2)

                                                         Percent
Final outcome                           Number   All youth   Minors    18+
Completed Interviews                       645       60.3      51.7    88.1
Nonresponse
  No parent/guardian consent               359       33.6      43.9     0
  Facility refused access                    4        0.4       0.5     0
  Youth refused to come to survey            6        0.6       0.4     1.2
  Youth refused before assent                9        0.9       0.5     2.4
  Youth refused after assent                29        2.7       1.7     6.0
  Youth failed assent screener               1        0.1       0       0.4
  Youth unavailable                          4        0.4       0.6     0.4
  Youth discharged from facility             3        0.3       0.1     0.8
  Youth did not complete survey              6        0.6       0.6     0.4
  Other                                      3        0.3       0.2     0.4
Total Non-Respondents                      425       39.7      48.3    11.9
Total Sample Size                        1,070                  818     252
Key findings related to youth response rates include the following.

-  Of the 1,070 youth eligible to participate in Phase 2, 645 completed interviews (60%) and 44 refused (4%).
-  The response rate was higher among youth aged 18 or older than among minors (88% and 52%, respectively); this was due primarily to the lack of parent/guardian consent.
-  Most of the youth who refused did so after the assent procedure had been completed. The refusal rate was higher among youth aged 18 or older than among minors (10% and 3%, respectively).
-  In facilities where in loco parentis consent procedures were used, response rates for both age groups were between 94 and 95 percent. In facilities where parent/guardian consent procedures were in place, the response rate for the older cohort was 86 percent, compared with 35 percent for the younger cohort.
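The headline rates above follow directly from the Table 9 counts; a quick arithmetic check using only figures published in the table:

```python
# Counts from Table 9 (Phase 2)
sample_size = 1070
completed = 645
no_consent = 359
refusals = 6 + 9 + 29  # refused to come, before assent, after assent

overall_rate = 100 * completed / sample_size       # about 60.3 percent
no_consent_share = 100 * no_consent / sample_size  # about 33.6 percent
```

The 44 refusals amount to about 4 percent of the sample, matching the text.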
There were some differences between the sampled and interviewed youth. Compared to the sample, the interviewed youth under-represented African Americans and those aged 15 to 17; Hispanics and those aged 18 or older were over-represented. Characteristics of the sampled and interviewed youth are presented in Table 10.
Table 10. Percentage Distribution of Sampled and Interviewed Youth, by Youth Characteristic (Phase 2)

Youth characteristic        Sampled   Interviewed
Race
  White                       34.5       36.0
  African American            48.6       40.9
  Hispanic                    13.9       20.6
  Other                        3.1        2.6
Age
  11-14                        8.6        9.0
  15-17                       67.9       56.3
  18+                         23.5       34.6
Gender
  Male                        88.6       89.8
  Female                      11.4       10.2
Type of offense
  Crime against person        55.0       57.5
  Crime against property      21.9       19.7
  Other                       23.1       22.8
The pilots indicated that the assent process worked as intended. Respondents were generally attentive during the process and understood the essential components of the study. While only one youth failed to pass the assent, a number of respondents were given follow-up explanations by the interviewers to make sure they understood the conditions for participating. Interviewers reported that this process went smoothly.
The main source of nonresponse was the failure to get consent from the parents/guardians of youth. As noted in Table 9, approximately 85 percent of the nonresponse was due to not getting consent from parents/guardians (33.6%/39.7% = 85%). Youth refusing to do the interview was relatively rare. Approximately half of the consent nonresponse occurred because the facility or project staff was not able to get in touch with parents/guardians (Table 8). This emphasized the importance of working with the facility liaison to get timely updates on the roster information. In particular, the project will need to set up a process so that contact information found to be inaccurate can be updated in a timely way, allowing the project to use the new information when trying to get consent.
3.2.3 Mandatory reporting
An important requirement of the study was to protect youth who participated in the NSYC
pilot test. One study procedure related to this requirement was adherence to state and local requirements
for reporting cases of abuse and neglect. Westat staff discussed these requirements with state liaisons as
plans were made to conduct the survey in each state, and worked with state and facility administrators to
set up a procedure for reporting incidents in accordance with state law. Because the requirements for
reporting incidents vary across states, especially with respect to incidents in correctional facilities, the
procedure had to be tailored to each state in the pilot test.
A second requirement of the study was to keep the information youth reported on the survey
confidential. This requirement conflicted, to some degree, with the mandatory reporting requirement.
Procedures were implemented that met both requirements in a way that youth understood and found
meaningful (see Section 2.2).
In Section 3.3.4, youth reactions to the survey experience are described – including their
thoughts about the confidentiality of their survey responses.
These data suggest that most youth
understood and trusted the confidential nature of their survey data. Overall, 90 percent of the Phase 2
participants agreed with the statement “No one here at this place will ever know my answers to the
survey.” Nearly 80 percent agreed with the statement “No one else, outside of this place, will ever know
my answers to this survey.”
At the same time, youth seemed willing to tell field staff about perceived experiences of
abuse and neglect.
Statements triggering mandatory reports were made by youth at five of the nine
facilities in Phase 2; the number of reports at these facilities ranged from one to 10.
Twenty-five Phase 2 respondents made statements to the field staff that fell within the
mandatory reporting requirements. Nineteen respondents alleged that they were
personally harmed or threatened. Three of the 25 respondents alleged that others were
harmed or threatened. Details on the other three reports were not available.
Among the 19 allegations of personal harm or threat, eight involved facility staff
behavior, 10 involved youth behavior, and one involved both staff and youth behavior.
Allegations involving staff behavior focused on excessive force during or after an
altercation; none involved sexual contact. Allegations involving youth behavior
included three cases of sexual harm or threat and seven cases of physical assault or
threat.
By all indications, the process for making mandatory reports while preserving
confidentiality worked. A valuable lesson from the pilot was a greater appreciation of the differences in
the laws that govern mandatory reporting. As a result of this experience, the national study was planned
to include a staff member whose job would be to review the state laws governing sampled facilities and
to work out reporting procedures with the facilities during discussions of the proper survey procedures.
This should make the study less reliant on the facility liaison’s interpretation of the reporting
requirements.
3.2.4 Counseling
States and facilities were asked to identify counselors who could be made available in the
event a youth became upset as a result of participating in the survey. Youth were told during the assent
process that they could talk with a counselor from the facility or from outside the facility if they wished.
All the facilities that participated in the pilot test made these arrangements prior to the start of data
collection at the site. In most cases, the counselors from outside the facility were drawn from the state
central office (i.e., Department of Juvenile Justice) or from a nearby facility.
During the visits to the facilities, 22 youth asked to speak to a counselor. Seventeen of these
youth asked to speak to a counselor from outside the facility. Three of the youth asked to speak to a
facility counselor and one youth asked to speak to both types of counselors. In order to protect the
privacy of the youth, field staff did not ask them to describe the reason for the request; nor did the staff
ask the facility for this information.
Since some youth might have become upset after the field staff left the site, the facilities
were asked to keep the counseling staff available for a “reasonable time” after the visit. About two weeks
after the field staff left each facility, project staff from Westat telephoned the facility to talk with the
survey coordinator and other facility staff. One topic discussed in these calls was youth reactions after the
visit. None of the facilities reported any additional requests for counseling services during that time.
3.2.5 Facility effects and burden
During telephone followup with facility coordinators and other facility staff, Westat asked
about the burden that study procedures placed on them and about how the survey procedures affected
facility operations. The facilities offered several useful suggestions for the National Study. For example,
some recommended providing facilities with additional written materials related to the responsibilities
that their staff would have for study activities (e.g., escorting youth to the interview rooms).
Other comments that the staff made during the followup calls included the following.
It was easy to work with the Westat project team; the team’s flexibility was
particularly important.
The study procedures were clearly communicated by Westat staff and were easily
integrated into the overall facility operations.
Some special efforts were required, for example, to update parent/guardian contact
information.
The primary recommendation that came out of these debriefings was to develop auxiliary
material that could be provided to the facility liaison. These materials should provide a detailed picture of
the type of effort the project would entail. This comment pertained to all of the processes discussed to
this point, including sampling, updating the rosters, the consent process and the final interview.
3.3 Questionnaire administration
The NSYC study design calls for the survey to be administered on a laptop computer, using
ACASI methodology. This data collection technique was chosen because of the sensitive nature of many
of the survey questions.
ACASI provided a higher level of privacy to respondents and helped to overcome the
literacy problems commonly associated with self-administered paper questionnaires.
Using ACASI, respondents were not asked to report answers to another person; rather,
the survey questions were presented aurally through headphones, and respondents
indicated answers using a touch-screen feature on the laptop.
ACASI also allowed for extensive programming of the questionnaire. The programming
helped to control which youth were presented some of the most sensitive questions,
based on their age and on their responses to other questions.
Further detail on the features of the ACASI questionnaire, the time required by youth to
complete the survey, and their reactions to the questions is provided in the remainder of this section.
3.3.1 Confidentiality
Three methods were used to help protect youth confidentiality during the survey
administration. First, ACASI allowed the youth to answer the questions without the direct involvement of
an interviewer. Because the youth interacted directly with the laptop computer, no one other than the
youth knew what questions were presented or how the youth answered.
The second method used to protect confidentiality during survey administration was the use
of two study questionnaires (see also section 2.2).
NSYC questionnaire – the primary study questionnaire that included questions on sexual
contact and assaults
NSYC-A questionnaire – an alternate questionnaire that focused on drug and alcohol use
prior to admission and treatment since admission; no questions on assault were included.
Youth were randomly assigned to complete one of these two questionnaires. The
assignment was made without the knowledge of the field staff member or anyone from the facility.
Because no one knew the questionnaire assignment, no one could know which youth might have reported
an assault.
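The report does not describe the assignment mechanism itself, so the following is only a minimal sketch of how a concealed random assignment could be generated before fieldwork; the function name and youth IDs are invented for illustration.

```python
import secrets

def assign_questionnaires(youth_ids):
    """Randomly map each sampled youth to one of the two instruments.

    A cryptographically strong RNG (secrets) means field staff cannot
    predict or reconstruct the assignment, preserving the blinding
    described above. All identifiers here are hypothetical.
    """
    return {yid: secrets.choice(["NSYC", "NSYC-A"]) for yid in youth_ids}

assignments = assign_questionnaires(["Y001", "Y002", "Y003"])
```

Because the mapping is generated centrally and never shown to interviewers or facility staff, knowing that a youth completed an interview reveals nothing about whether the assault questions were asked.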
The third method used to protect confidentiality was to standardize the time each respondent
spent answering questions. No matter how a respondent answered individual questions (e.g., reporting an
assault or reporting drug use), the computer application continued to present questions until a 30-minute
threshold was reached. The NSYC-A questionnaire included a sufficient number of questions to prevent
a typical respondent from completing it in under 30 minutes.
The NSYC questionnaire could be
completed in less time if the respondent reported no sexual contacts. In such cases, at the conclusion of
the NSYC questionnaire, the ACASI system automatically shifted the respondents to the NSYC-A
questionnaire and presented questions until the 30-minute threshold was reached.
3.3.2 Question tailoring
ACASI methodology was used to tailor question content and wording. For example:
Content varied by gender of the respondent.
Wording varied by time since the youth was admitted to the facility.
Content varied by the types of sexual activities the respondent reported.
In the NSYC-A, content varied by type of drugs used and by type of treatment received.
The ability to tailor content and wording was an especially important feature when asking
questions about sexual contacts and drug and alcohol use. The use of two versions of the NSYC
questionnaire was one of the most significant examples of tailoring. The first two sections of each
version were identical; these sections asked questions on the youths’ background (e.g., education) and
perceptions of the facility (e.g., interactions between staff and youth). The third section of each version
introduced the topics of sexual contact and sexual assault. This section was followed by the last two
sections – one asking about sexual assault by other youth and one asking about sexual assault by staff.
These last two sections were only asked of youth who reported one or more assaults when completing the
third section.
Tailoring in the third section was based on the age and experience of the respondent. In both
versions of this section, youth were asked a short set of questions to determine if they had been assaulted.
Examples of these “screener” questions for older respondents include:
Since you got here, have you rubbed another person’s penis with your hand or has
someone rubbed your penis with their hand?
Since you got here, have you put your penis, finger, or something else inside someone
else’s rear end or has someone put their penis, finger, or something else inside your rear
end?
Respondents who were under 15 years old were asked screening questions that used less
explicit terminology. Examples include the following two questions.
Since you got here, have you touched anyone’s private parts with your hand or has
anyone touched your private parts with their hand?
Since you got here, have you put any part of your body inside anyone else’s private
parts?
Respondents who answered any of these screening questions affirmatively were then asked
follow-up questions that asked about more specific sexual activities.
Youth in both age groups who reported assault in either version of the NSYC questionnaire
were then routed to the follow-up questions in the last two sections of the questionnaire. ACASI was
used to tailor the content of the questionnaire based on the experiences of the respondents.
3.3.3 Administration time
As mentioned above, the ACASI system was programmed to control the time youth spent
completing the questionnaire. The goal was to limit administration time to approximately 30 minutes.
The ACASI system monitored the time from the start of the survey through the completion of each
section of the questionnaire. When a section was finished, if the “clock” registered under 25 minutes
(total administration time), the system would initiate another section. This process would continue until
the 30-minute threshold was reached.
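The section-advancing rule described above can be sketched as a simple loop. The actual ACASI software is not public, so the function, parameter names, and structure below are assumptions; only the 25-minute gate comes from the report.

```python
import time

SECTION_GATE_SECONDS = 25 * 60  # a new section starts only if elapsed time is under 25 minutes

def administer(sections, clock=time.monotonic, run_section=lambda s: None):
    """Present sections until the elapsed-time gate is reached.

    At each section boundary the elapsed time is checked; if it is at or
    past the gate, no further sections are initiated. Section contents
    are placeholders here.
    """
    start = clock()
    completed = []
    for section in sections:
        if clock() - start >= SECTION_GATE_SECONDS:
            break                 # past the 25-minute gate: stop adding sections
        run_section(section)      # present every question in this section
        completed.append(section)
    return completed
```

With sections that each take roughly 10 minutes, the loop presents three sections before the gate stops it, which is consistent with administration times clustering around 30 minutes.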
Among all Phase 2 pilot test respondents, both the mean and median administration times
were 30 minutes. Completion times differed slightly between the two questionnaires: for the NSYC, the
mean and median time was 30 minutes, while the median administration time among youth assigned to
the NSYC-A questionnaire was 32 minutes.
3.3.4 Youth reactions
Overall, respondents found participation to be a positive experience. This assessment is
based on findings from a set of questions administered (via ACASI) at the conclusion of the survey
instrument. Youth were asked specific questions about:
the value of surveys like the NSYC and NSYC-A;
the likelihood that they would have participated once they learned what it would be like;
whether they found any of the survey questions upsetting or confusing; and
whether they felt their answers would remain confidential.
In addition, interviewers were asked to observe whether the youth showed any signs of
being upset, either verbally or through non-verbal communication.
Youth were asked separate questions about the value of conducting surveys 1) “about the
way things are in places like this;” 2) “about their sexual experiences in a survey like this;” and 3) “about
their experiences with drugs in a survey like this.” Ninety-six percent expressed support for asking about
“the way things are.” Large majorities also expressed support for the other two statements (80% and
82%, respectively).
Eighty-seven percent of the youth said they would have participated in the survey had they
known beforehand what the experience would be like. This opinion was shared by most youth regardless
of age or questionnaire assignment (i.e., NSYC or NSYC-A).
The survey did not overly upset youth. Seventy-five percent of the respondents disagreed
with the statement “some of the survey questions made me upset”. Of the 25% of youth who agreed with
this statement, 80% said that they would choose to participate in the study again. Similar observations
were made by the interviewers. Only 13 of the 645 Phase 2 respondents were observed by interviewers as
showing any type of emotional upset.
Other findings related to youth opinions about the survey include the following.
Fewer than ten percent of respondents found questions on any particular survey topic to
be “hard to understand.”
Twenty-two percent thought that the survey was “too long.”
Eighty-four percent said that they would prefer using the ACASI methodology to
complete the survey rather than to have been asked the questions by an interviewer.
Youth felt that the survey data would remain confidential, with 90 percent agreeing that
“no one here at this place will ever know my answers to this survey” and 79 percent
agreeing that “no one else, outside of this place, will ever know my answers to this
survey.”
Thirty percent of respondents thought that young people would not tell the truth when
answering the survey questions.
Overall, these data indicated that the procedures put in place for communicating the conditions of
the study, safeguarding the confidentiality of respondents, and minimizing emotional trauma to the
youth were working.
3.4 Data Analysis
The questions for the data analysis revolved around three issues:
What rates of nonconsensual sexual contact will youth report?
How do the rates vary by facility?
What is the quality of the survey responses?
The analysis of the pilot data addressed these questions by analyzing both Phase 1 and Phase
2 data. In the first section below, the first two questions are addressed (rates that are reported and
variation across facilities). The second section discusses the analysis of data quality.
Across both Phase 1 and Phase 2 facilities, 742 youth participated in the pilot test;
approximately 669 completed the NSYC questionnaire and portions of the NSYC-A questionnaire
(depending on total survey administration time) and approximately 73 completed only the NSYC-A
questionnaire. In the analyses below, the survey data were not weighted or adjusted for non-response. 8
Analyses that combine across facilities were not adjusted for size of facility or for clustering within
facilities.
As mentioned earlier in this report, the facilities that participated in the pilot tests were
chosen by the states that agreed to support the pilot test. As such, the facilities do not represent a sample
of any particular universe. Respondents in the pilot were much more serious offenders than the national
population. Fifty-five percent of the youth sampled for Phase 2 were adjudicated for crimes against
persons, compared with 39 percent of adjudicated youth in the national population. Perhaps most
relevant, approximately 26 percent of the sample were adjudicated for a sex offense, compared with
9 percent of the national population.
3.4.1 Prevalence rates
For the purposes of this analysis, nonconsensual sexual contact included:
Sexual contact with reported coercion. This is sexual contact with either another youth
or a staff member where at least one of the following types of coercion was reported:
o Physical force or threat of physical force;
o Force or pressure in some other way; or
o In return for money, favors, protection, or other special treatment.
Sexual contact without reported coercion. This is any type of sexual contact with a staff
member that did not involve one of the three types of coercion shown above.
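The definition above reduces to a short classification rule. The sketch below is illustrative only; the field names, coercion labels, and data layout are invented, not taken from the actual NSYC files.

```python
# Hypothetical labels for the three coercion types named in the definition.
COERCION_TYPES = {"physical_force", "other_pressure", "exchange"}

def is_nonconsensual(partner, coercion):
    """Classify a reported contact under the analysis definition.

    partner: 'youth' or 'staff' (who the contact was with)
    coercion: set of reported coercion types (possibly empty)
    """
    if coercion & COERCION_TYPES:
        return True              # coerced contact counts for either partner type
    return partner == "staff"    # any staff contact counts even without coercion

# Examples of the rule:
assert is_nonconsensual("youth", {"exchange"})   # coerced contact with youth
assert is_nonconsensual("staff", set())          # uncoerced staff contact
assert not is_nonconsensual("youth", set())      # uncoerced youth contact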
Across all 12 facilities, 19 percent of interviewed youth reported some type of
nonconsensual sexual contact.
8 Non-response adjusted prevalence rates were computed for each facility. However, the weights did not affect the relative rankings of facility
estimates.
Thirteen percent reported sexual contact with coercion. This included 7 percent
reporting contact with another youth at the facility and 8 percent reporting contact with a
staff member.
o 6 percent indicated that they were physically forced to take part in the activity.
o 9 percent indicated there was some other type of force or pressure.
o 9 percent indicated that they participated in exchange for favors or protection.
Eight percent alleged contact with a staff member without reported coercion.
Note that youth could have reported more than one type of contact and more than one type of
coercion.
With respect to facility-level rates, four of the 12 facilities had rates of nonconsensual
contact that were not statistically different from zero. At the other eight facilities, rates ranged from about
15 percent to 28 percent. These results indicated that rates of alleged nonconsensual sexual contact do
vary by facility. However, very few of the facility rates were statistically different from each other,
primarily because of the small sample sizes at many of the facilities.
The types of coerced contact differed markedly between incidents involving other youth and
those involving staff. For incidents involving youth, reports were distributed fairly evenly across the
major categories included on the questionnaire (touching, mouth, anal, vaginal, and other). For reports
involving staff, the majority involved vaginal penetration.
3.4.2 Data quality analysis
One of the concerns related to collecting self-reports of sexual assault from incarcerated
juveniles is the quality of the information. Given the sensitive and personal nature of the topic, one
concern was that youth would not report incidents. On the other hand, a number of administrators were
concerned that youth would over-report in order to put the facility in a bad light.
In the pilot, data quality was assessed by examining the extent to which youth reported in a
consistent and logical way. Two types of analyses were conducted with the Phase 2 data.
Examination of “outliers” – Were there cases of logically inconsistent reports (e.g.,
males reporting vaginal contact with another youth in an all-male facility)? Were there
cases of extreme reports (e.g., reporting a large number of forced sexual assaults since
admission)?
Examination of construct validity – Are the relationships between variables consistent
with expectations (e.g., receiving medical care for an injury and seeing a doctor because
of an injury)? Are youth consistent in reporting issues that should be correlated with
whether or not they report being sexually assaulted (e.g., reported assault by facility staff
and negative assessment of staff)?
Table 11 shows the results of the “outlier” analysis. Logical inconsistencies were detected in
the responses of 101 youth. The majority of these inconsistencies (68) were based on answers to
questions about receiving medical care. Two questions in the survey focused on care provided as a result
of an injury: “…which, if any, of the following conditions have you received medical care for...an illness,
an injury,” and “Did you see a doctor, nurse, or other health care person for any of these [reported]
injuries?” Sixty-eight youth gave contradictory answers to these questions, the largest number of
inconsistencies of any type examined.
The other 33 cases with logical inconsistencies contained answers to questions on sexual
contacts that were highly unlikely. Examples of such responses included 7 male respondents who
reported vaginal contact with another youth in an all-male facility, and 14 youth who reported not
worrying about assaults by staff even though they reported being forced to have sexual contact with staff.
Thirty-three youth provided answers to survey questions that were defined as extreme:
reporting forced sexual contact by both staff and youth; spending less than 20 minutes to complete the
interview even though an assault was reported;9 and reporting a large number of forced sexual assaults
since admission. 10
9 When a respondent reported an assault, additional questions on the characteristics of the assault were presented. It is unlikely that a youth
could consider and answer the entire battery of questions in less than 20 minutes.
10 For this analysis, a “large” number was defined as a function of time since admission and number of assaults:
If they reported 12 or more forced sexual assaults and were admitted within the last month
If they reported 27 or more forced sexual assaults and were admitted within the last 2 months
If they reported 56 or more forced sexual assaults and were admitted within the last 3 months
If they reported 69 or more forced sexual assaults and were admitted within the last 6 months
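The footnote's cutoff schedule amounts to a small step function. This sketch encodes it directly; the function name is invented, and the behavior for youth admitted more than six months earlier is an assumption, since the footnote defines no cutoff beyond that point.

```python
# (months since admission, minimum count to flag) — values from footnote 10
CUTOFFS = [(1, 12), (2, 27), (3, 56), (6, 69)]

def is_extreme_count(assaults, months_since_admission):
    """Flag a reported assault count as 'large' per the footnote's schedule."""
    for months, minimum in CUTOFFS:
        if months_since_admission <= months:
            return assaults >= minimum
    return False  # no cutoff is defined past six months (assumption)
```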
Table 11. Number of Youth Providing Inconsistent or Extreme Reports, by Type of Report

Type of report                                                       Number of youth
Inconsistent reporting
  Males reporting vaginal contact with another youth in an
    all male facility                                                       7
  Males reporting vaginal contact when perpetrator was
    identified as a male                                                    3
  Report positive assessment of environment and report forced
    sexual contact                                                          5
  Report not worrying about being assaulted by staff and report
    forced sexual contact with staff                                       14
  Report not worrying about being assaulted by youth and report
    forced sexual contact with youth                                        4
  Inconsistent reports about seeing a doctor                               68
Extreme reporting
  Report having forced sexual contact by both staff and youth              10
  Report an assault and spend less than 20 minutes completing
    the questionnaire                                                       6
  Report a large number of forced sexual assaults since admission           3
  Report more than one physical assault per day by staff or youth
    since admission                                                        14
Removing the “outlier” cases from the Phase 2 data set reduced the estimate of reported
coerced sexual contact. Whereas the estimate based on the full set of Phase 2 respondents was about 12
percent, once the “outliers” were removed, the estimate dropped to about 7 percent.
To examine the construct validity of the survey data, comparisons were made between the
responses to four sets of survey items provided by youth who reported coerced sexual contact at the
facility and those who did not report such contact. Statistically significant differences were found for each
comparison.
Youth who reported coerced contact since admission were more likely to report having been
sexually assaulted prior to admission (27% versus 8%, respectively)
The more favorable a youth’s opinion of the facility environment, the less likely he/she
was to report coerced sexual contact since admission. The percent reporting coerced
contacts ranged from 5 percent among those with the most favorable opinion to 29
percent among those with the least favorable opinion.11
Youth opinion of facility staff was negatively related to the likelihood of reporting coerced
sexual contact since admission, ranging from 7 percent among youth reporting the most
favorable opinion of the staff to 23 percent among those reporting the least favorable
opinion.12
There were positive relationships between reported fear of being assaulted in the facility
by staff or by youth, and reports of being coerced to engage in sexual contacts with staff
or with youth.13
o
Fifteen percent of youth reporting fear of assault by staff also reported coerced
contact with staff since admission to the facility. Four percent of youth reporting
no fear of assault reported coerced contact.
o
Eleven percent of youth reporting fear of assault by youth also reported coerced
contact with youth, compared with 2 percent of youth who did not report fear of
assault.
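The report does not state which statistical test produced these significance findings; a pooled two-proportion z-test is one standard choice for comparisons of this kind, and the counts below are purely illustrative, not the pilot's actual cell sizes.

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: the two underlying proportions are equal.

    x1/n1 and x2/n2 are the observed proportions; the standard error
    uses the pooled proportion under the null hypothesis.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# e.g. 27% of a hypothetical 100 youth vs 8% of a hypothetical 600 youth:
# |z| lands well above the 1.96 two-sided cutoff for p < .05.
z = two_proportion_z(27, 100, 48, 600)
```

With group sizes anywhere near the pilot's interview counts, a gap like 27 percent versus 8 percent comfortably exceeds conventional significance thresholds.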
4. Conclusions
The efforts to design a methodology that can be used to capture data on the prevalence and
characteristics of sexual assault in juvenile correctional facilities were successful. With the assistance of
many state and local corrections agencies and with support from parents/guardians and the young
respondents, questionnaires and data collection procedures were developed that protected the
confidentiality of participants, abided by state and local mandatory reporting requirements, and resulted in
meaningful data.
Facility staff helped identify important steps to take during the implementation of the NSYC.
For example, it is critical that procedural requirements related to rostering youth and contacting
parents/guardians be communicated to the facilities early in the process so that arrangements can be made
to ensure accuracy and timeliness of the information provided. But overall, facilities that participated in
the pilot test reported that NSYC procedures were flexible and minimized burden as much as possible.
11 Examples of items used to assess opinion of facility environment include statements such as “youth are punished when they don’t do anything
wrong,” and “facility staff use force when they don’t really need to.”
12 Examples of items used to assess opinion of facility staff include statements such as “staff are good role models,” and “staff are mean.”
13 Survey questions used to assess fear of assault asked if the youth worried “about being hit, punched, or assaulted by staff/other youth here.”
Obtaining consent, either from parents/guardians or from states acting in loco parentis, will be key
to the success of the study. Working closely with Institutional Review Boards and other state research
committees is an important step in determining the best approach to achieve an adequate consent rate.
Likewise, collaboration with the state liaisons and facility coordinators to contact parents/guardians and
document consent will be essential. Pilot test findings suggest that successful strategies can be identified
and implemented.
The clarity of the assent materials and the process used to communicate key elements of
assent to youth will require an interactive approach between the youth and the study representative.
Reliance on written materials will not be sufficient to convey to youth their rights as research participants.
However, the findings from the cognitive testing identified methods that were successful in the pilot tests.
Overall, the youth who participated in the tests were very positive about their experience and felt that the
survey was important.
Finally, the data generated by the survey will produce meaningful estimates of alleged
sexual assault. Although the prevalence rates were higher than expected, this might be partly explained
by the nature of the facilities and youth that participated in the pilot tests. In addition, some youth
appeared to report extreme or inconsistent responses; these “outliers” could be defined and removed
from the data set. Similar procedures could be used in the national study, and indeed, additional methods
to assess these conditions could be built into the questionnaire.
File Type | application/pdf |
File Title | Pretest report |
Author | Timothy Smith |
File Modified | 2008-02-21 |