
U.S. Department of Justice

Office of Justice Programs

National Institute of Justice
______________________________________________________________________________
Washington, DC 20531


MEMORANDUM



To: Joe Nye, Policy Analyst

Office of Information and Regulatory Affairs

Office of Management and Budget


Through: Melody Braswell, Department Clearance Officer, Justice Management Division


From: Benjamin Adams, Social Science Analyst, NIJ


Date: January 19, 2021


Re: NIJ Request for OMB Generic Clearance for Pilot Testing for the JRFC and CJRP under NIJ Generic Clearance Agreement (OMB #1121-0360)



The National Institute of Justice (NIJ), in coordination with the Office of Juvenile Justice and Delinquency Prevention (OJJDP), seeks generic clearance approval to pilot test the redesigned survey instrumentation and data collection protocols for future studies of the Juvenile Residential Facility Census (JRFC) and Census of Juveniles in Residential Placement (CJRP). This package supports a data collection mandated by Congress (Public Law No. 115-385). NIJ is submitting the developmental materials associated with both the JRFC and CJRP data collections to OMB for approval.


Background


Since 1971, the Department of Justice (the Department) has taken a strong interest in juveniles in custody, the operation of the facilities in which they are held, and the services available to them while in custody. In 1971, the Department began a census of juveniles in custody known as the Children in Custody (CIC) Census (more formally, the Census of Public and Private Juvenile Detention, Correctional, and Shelter Facilities). OJJDP took over the operation of this census in 1974, upon authorization of the Juvenile Justice and Delinquency Prevention Act. In 1993, OJJDP began a broad, long-term examination and revision of its data collection efforts covering juveniles in custody. This effort included extensive consultation with experts interested in the data produced, discussions with respondents, and extensive testing of questions and methodologies. In 1997, OJJDP conducted the first Census of Juveniles in Residential Placement (CJRP), replacing the population component of the former CIC data collection. Concurrently, development of the Juvenile Residential Facility Census (JRFC) commenced in 1996; the testing phase was completed in 1999, when the final report on the October 1998 field test was provided to OJJDP. CJRP collects individual-level data on youth held in residential placement resulting from contact (i.e., arrest, probation revocation, etc.) with the justice system. The CJRP has been conducted in every odd-numbered year from 1997 to 2019 (OMB Control No. 1121-0218). The JRFC routinely collects data on how facilities operate and the services they provide. It includes questions on facility ownership and operation, security, capacity and crowding, and injuries and deaths in custody. As the complement to the CJRP, the JRFC has been collected in even-numbered years from 2000 to 2020 (OMB Control No. 1121-0219).


OJJDP is authorized to conduct this data collection under the JJDP Act of 1974, as amended (see Appendix A). The JJDP Act was reauthorized in December 2018 through the Juvenile Justice Reform Act of 2018 (Public Law No. 115-385). In fiscal year 2019, the Department transferred OJJDP’s research, evaluation, and statistical functions, activities, and staff to NIJ, including the management of the JRFC and CJRP. As such, NIJ is working in collaboration with OJJDP to elevate and advance this work for the juvenile justice community. NIJ is authorized to conduct this data collection under the Omnibus Crime Control and Safe Streets Act of 1968. Copies of the relevant sections of the NIJ authorizing language are included in Appendix A of this OMB package.


Purpose of Proposed Research


The juvenile justice environment has shifted significantly in the 20 years since the original surveys were developed. Juvenile arrest rates, the number of youth in custody, and the number of facilities have all declined. Between 1997 and 2017, the number of youth in residential placement decreased 59% to 43,580, its lowest level since the data collection began in 1997 (Hockenberry 2020). Similarly, juvenile arrests have been declining for more than a decade; the number of arrests in 2018 was 73% below the 1996 peak (Puzzanchera 2020). As the understanding of juvenile justice evolved, larger, state-run correctional facilities declined in popularity and smaller local facilities emerged. The CJRP and JRFC no longer fully reflect the current understanding of juvenile justice research, policy, and practice; after 20 years, a redesign of these two collections is needed.


As part of this redesign effort, a pilot test will be conducted with both the JRFC and CJRP to test the feasibility of new questions, topics, and methods. Eight cognitive interviews were conducted to review the recommended additions and changes, and revisions were made based on the feedback received. The updated additions and changes will be evaluated in the pilot test, and the results will inform the full administration of the CJRP and JRFC instruments in future data collection cycles. This memo details the proposed plan for the 2021 Pilot Test of the JRFC and CJRP.


Pilot Test Overview


Respondent Universe and Sample Design

The 2021 CJRP and JRFC Pilot Tests will use the same respondent universe as the 2020 JRFC study. Facilities that have not yet completed the currently active 2020 JRFC will be excluded from the 2021 Pilot Test so that the 2021 JRFC or 2021 CJRP request does not overlap with ongoing requests for the 2020 JRFC effort. The universe will be stratified by facility size (small, medium, large) and region (U.S. Census regions: Northeast, South, Midwest, West). Facilities with 8 or fewer beds are classified as small, those with 9-22 beds as medium, and those with 23 or more beds as large. For facilities of unknown size, a size class will be imputed for stratification using hot-deck imputation within the facility's state.


A sample of 200 facilities will be selected to participate in the 2021 CJRP Pilot Test and another 200 to participate in the 2021 JRFC Pilot Test. The total sample size will be allocated proportionally to the number of facilities in each stratum. Within each stratum, facilities will be selected by simple random sample (SRS). Half of the facilities selected in each stratum will be assigned to the 2021 CJRP Pilot Test and the other half to the 2021 JRFC Pilot Test, as sketched in the example below.
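To make the design concrete, the following minimal sketch (illustrative only, not part of the OMB package; the sampling frame and its columns state, size_class, and region are assumed names) implements hot-deck imputation of unknown size within state, stratification by size and region, proportional allocation with SRS within strata, and the even split of each stratum between the two pilot tests:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2021)

    def hot_deck_size(frame: pd.DataFrame) -> pd.DataFrame:
        """Fill missing size class from a random donor facility in the same state."""
        out = frame.copy()
        for _, grp in out.groupby("state"):
            missing = grp.index[grp["size_class"].isna()]
            donors = grp["size_class"].dropna().to_numpy()
            if len(missing) and len(donors):
                out.loc[missing, "size_class"] = rng.choice(donors, size=len(missing))
        return out

    def select_pilot_samples(frame: pd.DataFrame, n_total: int = 400):
        """Proportionally allocated stratified SRS, split evenly between the pilots."""
        frame = hot_deck_size(frame)
        frame["stratum"] = frame["size_class"] + "/" + frame["region"]
        cjrp_parts, jrfc_parts = [], []
        for _, grp in frame.groupby("stratum"):
            n_s = round(n_total * len(grp) / len(frame))   # proportional allocation
            chosen = grp.sample(n=min(n_s, len(grp)), random_state=2021)
            chosen = chosen.sample(frac=1, random_state=2021)  # shuffle before halving
            cjrp_parts.append(chosen.iloc[: len(chosen) // 2])
            jrfc_parts.append(chosen.iloc[len(chosen) // 2:])
        return pd.concat(cjrp_parts), pd.concat(jrfc_parts)

In practice the rounded stratum allocations would be adjusted so the totals hit exactly 200 facilities per pilot test.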


Additionally, sampled facilities will be randomly assigned to receive one of two questionnaire versions (see Tables 5 and 6 for details). Most analyses will therefore compare proportional outcomes: the percentage of facilities selecting particular responses on each questionnaire version. Some of these analyses will pool CJRP and JRFC participants, while others will include participants from only one instrument. As discussed later, an 80 percent response rate is assumed; thus there will be either 80 facilities per questionnaire version when using participants from one instrument, or 160 facilities per questionnaire version when using participants from both instruments. The minimum detectable difference varies by prevalence; values for a 5% level of significance and 80% power are presented in Table 1.


Table 1. Minimum detectable differences for proportional outcome prevalence

             Minimum detectable difference
Prevalence   N=80 (CJRP or JRFC only)   N=160 (Both CJRP and JRFC)
10%           9.7%                       7.5%
20%          14.5%                      11.0%
30%          17.7%                      13.2%
40%          19.9%                      14.6%
50%          21.2%                      15.3%
60%          19.9%                      14.6%
70%          17.7%                      13.2%
80%          14.5%                      11.0%
90%           9.7%                       7.5%
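For reference, one standard way to compute such values is a two-sided, two-sample proportion z-test at a 5% significance level and 80% power. The sketch below is ours, not part of the OMB package; because the exact formula variant used by the study team is not documented here, its output agrees with Table 1 only to within roughly a percentage point. Table 1 is symmetric about 50% prevalence, so only 10%-50% is computed:

    from math import sqrt

    from scipy.optimize import brentq
    from scipy.stats import norm

    def power_two_prop(p1, p2, n, alpha=0.05):
        """Approximate power of a two-sided, two-sample proportion z-test."""
        z_a = norm.ppf(1 - alpha / 2)
        pbar = (p1 + p2) / 2
        se0 = sqrt(2 * pbar * (1 - pbar) / n)              # pooled SE under H0
        se1 = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)  # SE under H1
        return norm.sf((z_a * se0 - abs(p1 - p2)) / se1)

    def min_detectable_diff(p, n, power=0.80):
        """Smallest detectable decrease from prevalence p, with n facilities per group."""
        return brentq(lambda d: power_two_prop(p, p - d, n) - power, 1e-6, p - 1e-6)

    for n in (80, 160):
        print(n, [f"{min_detectable_diff(p / 100, n):.1%}" for p in range(10, 60, 10)])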


Data Collection Procedures

The typical JRFC and CJRP data collections fall within a 10-to-12-month window and have an October survey reference date. The Pilot Test will introduce a March survey reference date with a 3-to-4-month data collection window. The JRFC and CJRP collections have primarily used an October reference date in past cycles; however, a primary objective of the Pilot Test is to implement a proposed protocol in which the survey form is shared in advance of the reference date, which motivates the March 2021 timeline. Future JRFC and CJRP collections may consider moving the reference date to February or March, with the goal of publishing data annually by the end of each year.


The original design project for the CJRP asked a sample of respondents to describe fluctuations in their facility populations over the course of the year (Schwede and Ott 1995). Many respondents associated fluctuations with the school year, indicating low populations during the winter holidays and summer vacations and higher numbers when children are in school. These fluctuations were reported consistently across facility types and geographic areas. Weekend days, as well as Fridays and Mondays, were also indicated as potentially producing skewed counts because of home visits. Ultimately, the study team selected an October reference date but concluded from the interviews that "comparable counts could be obtained if the reference day were scheduled sometime between late February and early April." In more recent debriefing interviews, facility representatives confirmed this assessment, indicating that a February reference date is acceptable to respondents and that facility populations are unlikely to be affected by seasonal fluctuations during this month (Scott et al. 2019). No current monthly facility data are available to examine population fluctuations beyond the weekend/weekday and holiday differences indicated by respondents. Thus, a midweek (Tuesday-Thursday) reference date not near a holiday is expected to produce estimates similar to those obtained with the October reference date used in the past. A specific evaluation of changes in population counts by month is not within the scope of this redesign effort.


The current JRFC and CJRP data collections operate on a 12-month timeline. Response rates for both collections are typically between 83 and 95 percent from a frame of approximately 2,100 facilities (2019 CJRP and 2020 JRFC OMB package). Between months three and four of data collection, the response rate typically reaches 75 percent. Targeted data collection efforts to accommodate late responders and to improve low response rates in specific states are what extend the timeline much further. The current JRFC and CJRP protocol includes the communications outlined in Table 2. However, as NIJ is considering moving the reference date to February with the goal of publishing data annually by the end of each year, a 12-month data collection timeline will no longer be feasible.


Table 2. Current data collection outreach protocol

Planned Communication          Month(s)
Prenotice                      1
Invitation                     2
Reminder Email                 3
Reminder Email                 5
Nonresponse Phone Calls        5-9
Nonresponse Mailing            6
Critical Items Phone Calls     7-9
Targeted Nonresponse           10-12
Close Data Collection          12


Instead, a 3-to-5-month data collection timeline would be more realistic, and that is what is tested in this pilot study. Contact materials and the timing of outreach are adjusted to fit the shorter data collection window and to inform sample facilities that they are participating in a pilot test. Additionally, any sample facilities that traditionally report for multiple facilities (e.g., a state-level reporter for all facilities within a given state) will only be asked to report for the facility or facilities selected in the pilot test. Sample facilities will be contacted through the same avenues as in the standard JRFC and CJRP collections: mail, email, and phone. Completed paper surveys will be accepted by mail, fax, and email, though respondents will be encouraged to submit using the web survey. The web survey allows for more timely and secure receipt of data and results in higher quality data because of its built-in prompts for missing responses and validation checks. All contact materials will reference web submission as the primary mode but will still include instructions for submitting a paper survey for those respondents who prefer it. Additionally, the paper survey will be available for download on the survey homepage.


Nonresponders will receive prompts by mail, email, or phone to complete the survey approximately every 2-3 weeks, much more frequently than in the standard JRFC and CJRP collections (see Tables 2 and 3). If a more aggressive prompting strategy results in a quicker return of data from the majority of facilities, it should free up resources sooner for targeted follow-up with the typical late-responding facilities, resulting in an overall close of data collection much sooner than in the standard JRFC and CJRP collections. The planned timeline for outreach to facilities is detailed in Table 3.


Table 3. Planned outreach timeline

Planned Communication          Month(s)
Prenotification Mail-Email     1
Invitation Mail-Email          2
Reminder 1 Mail                2
Reminder 2 Mail                3
Nonresponse Phone-Email        3-4
Reminder 3 Mail                4
End of Data Collection         4


The typical JRFC and CJRP data collections rely on critical items as an approach to address nonresponse: facilities are offered the opportunity to respond to a brief set of critical items from the survey instrument in place of completing the entire form. In the Pilot Test, this option will be offered to half of the remaining nonrespondents in the last contact within the data collection window (Reminder 3) to test its effectiveness at increasing response rates. Half of the nonresponders will be sent a letter with a paper version of the critical items form; these sample members will be offered the option to complete the full survey online or to complete the critical items form on paper. The other half of the nonresponders will receive a letter asking them to complete the full survey online, with no mention of the critical items. Planned data collection materials are in Appendix B.


Data collection will continue to be voluntary and there will be no offered incentive for participation. Facilities that refuse to participate will not be recontacted.


Instruments

A panel of 12 juvenile justice experts provided feedback about questions and topics that should either be removed from the current surveys or added to the questionnaires in future waves (see Appendix C). Additionally, two survey methodologists reviewed the questionnaires for wording, visual design, and other survey methodology best practices that should be incorporated. Eight cognitive interviews were conducted to review the recommended additions and changes. Table 4 displays the total number of questions added to and removed from each questionnaire in the pilot tests.


Table 4. Number of questions added and removed from each questionnaire and topic, and total question change for each questionnaire

Questionnaire and Topic                          Added   Removed   Total Change
CJRP
  Facility information                            +11      -4          +7
  Count of young persons                           +2      -9          -7
  Length of stay                                   +4       0          +4
  Feasibility of individual level demographics     +4       0          +4
  CJRP total                                      +21     -13          +8
JRFC
  Facility information                            +11     -10          +1
  Count of young persons                           +2      -9          -7
  Activities available                             +2      -4          -2
  Staff training required and offered              +3       0          +3
  Mental health professionals available            +1       0          +1
  Medical services                                 +7       0          +7
  Feasibility of individual level demographics     +4       0          +4
  JRFC total                                      +30     -23          +7


The following sections of this memo first describe the major changes planned for the pilot test, followed by details of the removal of questions, and finally descriptions of new questions. Questions undergoing major changes are included in the count of added and removed questions in Table 4.


Questionnaire Changes. The pilot test includes several changes to both the CJRP and the JRFC instruments to bring the questionnaires in line with survey best practices. These changes are made to improve comprehension and consistency throughout the surveys and thus reduce respondent burden. Additionally, after a review of open-ended responses from previous waves of the study, new response options are included to reduce the need for respondents to write in responses to questions. All changes are documented in the instruments in Appendices D (CJRP instrument) and E (JRFC instrument).


In addition to the minor changes designed to meet survey best practices, four major changes are made to existing questions in both questionnaires. First, the questions in the first section of the JRFC are reordered to match the order of the questions in the CJRP. This will help ensure consistent responses to these questions from year to year.


Second, the questions collecting the number of persons in the facility by age and reason for being assigned a bed are redesigned to streamline the collection of this information. All information is still being collected, and the instructions provided are the same, but by redesigning these questions, the number of questions is reduced from nine questions (S1_ANYBEDS, S1_TOTCOUNT, S1_GE21PERSONS, S1_LT21BEDS, S1_LT21PERSONS, S1_CHARGEANY, S1_CHARGECOUNT, S1_OTHEROFFENSES, S1_OTHERCOUNT) to two questions (S1_COUNT and S1_COUNTCATS). The new format collects information in a grid format, which allows respondents to see their totals and how the data points are related. The new design mirrors the way these data are collected in other DOJ surveys (e.g., Annual Survey of Jails—ASJ) and should improve data quality. NIJ will review all data collected to ensure the new method is consistent with trends for each facility.


Third, one question asks facilities to report when young persons are locked in their sleeping rooms (S1_LOCKREAS). The response options for the existing question are a mix of timepoints (e.g., at night) and situations (e.g., when they are out of control). The pilot test plans to split the response options into two different questions (S1_LOCKSITS, S1_LOCKSCHED). The first question will ask about the situations in which young persons are locked in their sleeping rooms. Respondents who select the response option "as part of a set schedule" will be asked a second question about what that schedule is. Splitting the response options into two questions should help reduce respondent burden, as respondents will have a shorter, more cohesive list of response options to review.


Fourth, one question asks facilities why outside doors to buildings with living/sleeping units are locked (S1_OUTDOORLOCKED_REAS). Cognitive interview participants indicated that they would all select both responses to this question. Based on this feedback, NIJ reviewed data on these items from the 2018 JRFC and found that 84% of facilities selected "to keep intruders out" and only 56% selected "to keep young persons inside this facility". The pilot test will change this question to remove the "to keep intruders out" response option and only ask whether the outside doors are locked to keep young persons inside the facility.


Additionally, a change is made in the CJRP to collect more information about the reasons young persons are at the facility. Specifically, the original questionnaire asked facilities to indicate the most serious offense for each young person in the facility (see question S2_INTRO, item 7, in Appendix D). For the pilot test, the question will ask for the three most serious offenses and ask facilities to indicate whether each offense was the result of a probation or parole violation.


Assessment of Questionnaire Changes. To determine the success of these changes, item nonresponse rates, response distributions, and help desk comments will be reviewed. Specifically, the rates of item nonresponse and the response distributions will be compared with trends over time to determine whether the changes are in line with expected trends. Additionally, item nonresponse will be compared between the two versions for changed questions appearing on only one version. If there are increases in item nonresponse or large changes to the response distributions, the questions will be reconsidered or reverted to the original questions before fielding in future waves of the CJRP and JRFC.
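For illustration, the comparison of item nonresponse between versions could be run as a two-proportion z-test. The sketch below is ours, not part of the OMB package, and the counts are placeholders rather than study data:

    from statsmodels.stats.proportion import proportions_ztest

    # Placeholder counts: facilities leaving a changed item blank, out of an
    # assumed 80 responding facilities per questionnaire version.
    blank = [9, 18]
    respondents = [80, 80]

    stat, pvalue = proportions_ztest(count=blank, nobs=respondents)
    print(f"z = {stat:.2f}, p = {pvalue:.3f}")  # flag the item for review if p < 0.05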


The web instrument includes timers at the question (page) and section level. Question, section, and overall completion times will be compared between the versions. If completion times differ significantly between versions, the questions will be reconsidered. This review and analysis will be done in conjunction with the item nonresponse information.
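Because page-timer paradata are typically right-skewed, a nonparametric test is a reasonable choice for the timing comparison. A minimal sketch (ours; the timings are placeholders, not study data):

    from scipy.stats import mannwhitneyu

    # Placeholder page-timer values, in seconds, for one question by version.
    times_a = [41, 55, 38, 62, 47, 52]
    times_b = [88, 74, 91, 69, 80, 77]

    stat, pvalue = mannwhitneyu(times_a, times_b, alternative="two-sided")
    print(f"U = {stat}, p = {pvalue:.4f}")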


Removal of Questions. Not counting the questions with major changes described above, three questions will be removed from the CJRP and 13 from the JRFC for the pilot test. In both the CJRP and the JRFC, questions about foster care, independent living arrangements, and overflow will be removed. Removing these three questions will reduce respondent burden, and the information they gather is no longer valuable: in 2018, 92% of facilities answered "no" when asked if the facility provides foster care, 88% answered "no" when asked if it provides independent living arrangements, and 98% answered "no" when asked if it housed any overflow detention population.


In addition to these three questions, ten others will be removed from the JRFC. Five of these questions focus on the building or campus layout of the facilities. After consultation with expert panel members, NIJ agrees that these questions are no longer valuable and should be removed from the survey to reduce respondent burden. Additionally, the current JRFC has four questions about large muscle activity (i.e., exercise). Because most states now mandate exercise requirements in facilities, cognitive interview participants indicated that these questions are largely irrelevant, as most facilities are now required to provide some level of large muscle activity on a regular basis. Therefore, removing these four questions will reduce respondent burden. The final question to be removed from the JRFC asks about sleeping room arrangements/occupancy. Data gathered from this question are ambiguous: responses indicate only how many young persons are in a room and provide no indication of what types of rooms are in the facility. This means the responses could reflect above, at, or below maximum occupancy, and the data cannot distinguish among these. Removing this question will reduce respondent burden, as the question is not currently used in any reports.


New Questions. In total, 17 new questions will be added to the CJRP and 26 to the JRFC. Nine of these new questions (added to both the CJRP and the JRFC) ask about specific attributes of facilities (S1_CLASSIFY_SCREENPROG, S1_CLASSIFY_SCREENLIV, S1_CLASSIFY_SCREENOTH, S1_CLASSIFY_SCREENCOMM, S1_CLASSIFY_POP, S1_CLASSIFY_CONTACT, S1_CLASSIFY_TREATPROG, S1_CLASSIFY_OUTDOOR, S1_CLASSIFY_JOBTRAIN). These questions are designed as an alternative to the self-classification that facilities are currently asked to complete. For the pilot test, the instruments will include both the self-classification question and the nine attribute questions to see whether the self-classification question can be removed in future waves. The new attribute questions are designed to be easier to read and understand (most are yes/no questions), with the goal of reducing respondent burden in the future if they can replace the longer, more complex self-classification question.


In addition, four questions will be added to both the CJRP and the JRFC for the pilot study to ask about the feasibility of collecting more detailed information on the race, ethnicity, and gender identity of youth (CJRP: S2a_FEAS_ETHNICITY, S2a_FEAS_RACE, S2a_FEAS_RACEETH_NOW, S2a_FEAS_GENDERID; JRFC: S6_FEAS_ETHNICITY, S6_FEAS_RACE, S6_FEAS_RACEETH_NOW, S6_FEAS_GENDERID). Three of these questions are simple yes/no questions asking whether facilities collect ethnicity separately from race, multiple races, and gender identity separately from sex for each young person in their facility. Expert panel members indicated that gathering this information on each young person would help identify subgroups for outcome measures in facilities. If these feasibility questions indicate that the majority of facilities collect this information, these data may be requested in future waves of the CJRP and JRFC for both rosters and deaths in the facilities. The fourth question asks how race/ethnicity is determined for each young person, to better understand the accuracy of the data collected. These questions are added here to assess feasibility and will not be asked on future CJRP or JRFC surveys.


The final four new questions in the CJRP focus on the length of stay of individuals in the facility (see new Section 2a in Appendix D). Two questions ask about the number of young persons who were released from the facility in the 14 and 30 days prior to the reference date for the pilot test. These questions are aimed at better understanding how long youth remain in custody and will help guide how length of stay could be asked about in future waves. Additionally, facilities will be asked to provide data on the last 20 young persons released from each facility, including basic demographic information on race/ethnicity, gender, and age, along with the dates of each young person's arrival at and release from the facility and where the young person went after release. NIJ will use this information to calculate the average length of stay for facilities across the country by demographic subgroup. Finally, half of the respondents will be asked to calculate their facility's average length of stay for the 30 days prior to the reference date for the pilot test. Facilities may not have the capability to calculate this, so it will be asked of only half of the respondents to reduce burden, and it will be used in conjunction with the detailed youth-level data to determine the accuracy of this measure. If facilities can accurately calculate their own average length of stay, future waves of the study may include that question instead of collecting detailed information. However, if the amount of missing data is large, or the estimates are inaccurate, future waves of the study may continue to ask for the detailed individual-level data.
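For illustration, the sketch below (ours; the column names and example records are placeholders, not study data) shows how the individual-level release records could be turned into average length of stay by demographic subgroup:

    import pandas as pd

    # Placeholder records in the shape the new questions would collect.
    released = pd.DataFrame({
        "facility_id": [101, 101, 102],
        "race_eth":    ["Black", "Hispanic", "White"],
        "gender":      ["M", "F", "M"],
        "arrival":     pd.to_datetime(["2020-11-02", "2021-01-15", "2020-12-20"]),
        "release":     pd.to_datetime(["2021-02-10", "2021-03-01", "2021-02-28"]),
    })

    # Length of stay in days, then the average by subgroup.
    released["los_days"] = (released["release"] - released["arrival"]).dt.days
    print(released.groupby("race_eth")["los_days"].mean())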


In the JRFC, five new questions will be added to the first section of the survey. Two of these questions ask about activities offered to young persons in facilities, aimed at gathering information about how young persons spend their time (S1_ACTIVITIES, S1_ACTIVITIES_OTHER). The first question asks facilities to indicate which types of activities they offer from a provided list. The second question then asks facilities to write in any additional activities not included in the list. The first question is intended to reduce burden on respondents by providing a list to select from, since open-ended questions are known to be more burdensome for respondents. However, NIJ wants to ensure that the list provided is comprehensive; therefore, the second question will provide the project team with information about any activities missing from the list that should be included in future waves of the study.


Additionally, three new questions are added to the first section of the survey on staff training (S1_STAFFTRAIN_REQ, S1_STAFFTRAIN_REQ_OTHER, S1_STAFFTRAIN_OFFER). The expert panel members indicated that staff training is an important topic that should be considered in future waves of the study. The three new questions collect information on what training is required before staff can work in the facilities and what training has been offered to all staff in the past year. Half of the respondents will receive a question asking them to select from a list of training types, followed by an open-ended question asking about other training types not included in the list. The other half of the respondents will receive only the open-ended question asking them to write in the required trainings. These questions are intended to help NIJ determine what trainings are already required; providing a list to only half of the respondents is intended to avoid priming respondents to think only of trainings similar to those listed. All respondents will then receive the third question, asking about the additional trainings offered to staff as optional trainings in the past year.


One new question is added to the mental health services section of the JRFC asking about the availability of mental health professionals (S2_MHPROVIDERS). This question asks whether psychiatrists, psychologists, or licensed counselors are part of the facility's staff, are contracted employees, or are available from the community if needed. This question will help inform how facilities are staffed and prepared to serve youth with mental health needs.


Finally, a new section on medical services offered in the facility, with seven new questions, is added to the JRFC (see new Section 2b in Appendix E). A previous version of the JRFC (2004) contained a section on medical services, but the section was removed due to burden. With the new requirement in the Juvenile Justice Reform Act of 2018 to collect information on pregnant females in facilities, and at the suggestion of the expert panel, NIJ developed seven new questions about the overall services at the facility, including the availability of medical professionals, the availability of medical exams, and the number of pregnant females in the facility. These questions should be less burdensome for respondents than the original medical section from 2004, which asked for more detailed information about medical tests and vaccines.


Assessment of New Questions. To determine the success of these new questions, item nonresponse rates, response distributions, and help desk comments will be reviewed for each new question. Specifically, rates of item nonresponse will be compared with the average rate of item nonresponse for unchanged questions. If the rates are consistent, one can infer that the new questions do not impose any unique response burden. Additionally, the response distributions for these new questions will be reviewed with the expert panel to ensure that they meet expectations in the field. If any distributions do not meet these expectations, the questions will be reconsidered before fielding in future waves of the CJRP and JRFC.


Different Questionnaire Versions. As mentioned in the new questions section above, certain questions will be administered to half of the respondents to gain new information without increasing respondent burden. Specifically, the pilot test will randomly assign facilities to one of two different questionnaires for both the CJRP and the JRFC, with central reporters having all of their selected facilities assigned to a single version. Details of and reasons for the differences are provided below. Tables 5 and 6 display an overview of the differences between versions for the CJRP and JRFC.


Table 5. CJRP differences in questionnaire versions

Self-classification
  Version A: S1_CLASSIFY_A (no change)
  Version B: S1_CLASSIFY_B (labels removed from response options)

When are young persons locked in sleeping rooms
  Version A: S1_LOCKSCHED_A, response options with specific quantifiers:
    • All of the time
    • During the day for 2 hours or less
    • During the day for more than 2 hours
    • At night
  Version B: S1_LOCKSCHED_B, response options with vague quantifiers:
    • Rarely
    • Sometimes
    • Often
    • Always

Length of stay: Average
  Version A: Not included
  Version B: S2a_LOS30 (included)

Length of stay: Where are young persons released to
  Version A: S2a_LOSINTRO_A (open-ended question)
  Version B: S2a_LOSINTRO_B (closed-ended question)


Table 6. JRFC differences in questionnaire versions

Self-classification
  Version A: S1_CLASSIFY_A (no change)
  Version B: S1_CLASSIFY_B (labels removed from response options)

When are young persons locked in sleeping rooms
  Version A: S1_LOCKSCHED_A, response options with specific quantifiers:
    • All of the time
    • During the day for 2 hours or less
    • During the day for more than 2 hours
    • At night
  Version B: S1_LOCKSCHED_B, response options with vague quantifiers:
    • Rarely
    • Sometimes
    • Often
    • Always

Staff training
  Version A: S1_STAFFTRAIN_REQ_A (open-ended question)
  Version B: S1_STAFFTRAIN_REQ_B and S1_STAFFTRAIN_REQ_OTHER_B (closed-ended question with an open-ended follow-up)


  • Self-classification. Half of the sample will be randomly assigned to the standard self-classification question used in previous waves of the CJRP and JRFC. The other half of the sample will receive a version of the self-classification question without the labels (e.g., "Detention Center") for each response option; instead, the response options will list only the definitions of the facilities. Feedback from the expert panel members and cognitive interviews indicated that many of the terms used in the self-classification question may be outdated. Therefore, the goal of this change is to determine whether the distribution of responses differs when those labels are removed. The distributions will be compared across the two groups.

  • When are young persons locked in sleeping rooms. As part of the change to the question on when young persons are locked in their sleeping rooms, the pilot study will test the scale used for the "as part of a schedule" question. Half of the sample will be randomly assigned to specific time-based response options of "All of the time", "During the day for 2 hours or less", "During the day for more than 2 hours", and "At night". This question will be select-all-that-apply, with the first response option mutually exclusive of the other three. The other half of the sample will receive a version with vague quantifiers as response options ("Rarely", "Sometimes", "Often", and "Always"). The original question had response options that included "Part of each day" and "Most of each day"; the new response options are designed to determine whether more specific response options yield a different response distribution than vague quantifiers. The distributions will be compared across the two groups (see the sketch following this list).

  • Length of stay: Average. Half of the sample will be randomly assigned to receive a question asking them to calculate their facility's average length of stay for the 30 days prior to the reference date for the pilot test. Facilities may not have the capability to calculate this; therefore, it will be asked of only half of the sample to reduce burden, and it will be used in conjunction with the detailed individual-level data to determine the accuracy of this measure. If facilities can accurately calculate their own average length of stay, future waves of the study may include questions asking facilities to calculate their own length of stay for different demographic groups instead of collecting detailed information. However, if the amount of missing data is large, or the estimates are inaccurate, future waves of the study may continue to ask for the detailed individual-level data.

  • Length of stay: Where are young persons released to. As part of the youth-level data collected on length of stay, NIJ is asking facilities to report where each young person was released to. Half of the sample will be randomly assigned to receive an open-ended question asking facilities to write in where a young person was released to. Because facilities may not track where youth are released, or it may be burdensome for them to uncover this information, the other half of the sample will receive a closed-ended question with vague categories and a "Don't Know" response option. The missing data and the percentage of "Don't Know" responses will be evaluated to determine whether this information can be accurately provided by facilities. Additionally, responses to the open-ended question will be compared with the distribution of responses to the closed-ended version to determine whether the closed-ended version encompasses all places and can be used in future waves.

  • Staff training. Half of the sample will be randomly assigned to a question asking them to select, from a list of training types, which trainings staff are required to take before working. This question will be followed by an open-ended question asking about other required trainings not included in the list. The other half of the sample will be assigned a single open-ended question asking them to write in all required trainings. These questions are intended to help NIJ determine what trainings are already required; providing a list to only half of the respondents is intended to avoid priming respondents to think only of trainings similar to those listed. Asking the open-ended question of only half the sample will also reduce overall respondent burden.
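Where the items above call for comparing response distributions across the two groups, a chi-square test of homogeneity is one natural choice. A minimal sketch (ours, not part of the OMB package; the counts are placeholders):

    import numpy as np
    from scipy.stats import chi2_contingency

    # Placeholder counts per response option (columns), by version (rows).
    counts = np.array([
        [30, 25, 15, 10],   # Version A
        [18, 32, 21,  9],   # Version B
    ])

    chi2, pvalue, dof, _ = chi2_contingency(counts)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {pvalue:.3f}")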


Estimate of Respondent Burden


CJRP

Based on previous administrations of the CJRP and the Census Bureau’s analysis of paradata from the 2017 CJRP, NIJ estimates the average time to complete the original CJRP form was 3 hours.


Differences in facility characteristics, staffing, reporting procedures, and populations housed mean that not all facilities will have the same hour burden. For example, public facilities, on average, house more youth; therefore, the burden for a public facility to submit data is likely to be greater than the burden for a private provider. In addition, the burden for respondents who report manually is expected to be greater than for those using electronic means of data submission. In 2019, 22% of facilities that completed the CJRP did so by mail or fax (manual reporting).


With the addition of new questions requesting individual-level data for 20 youth, NIJ anticipates the overall burden average to increase by about 1 hour, regardless of data submission mode or facility type.


The sample size for the CJRP Pilot Test is 200 facilities. Assuming an 80 percent response rate to the full questionnaire request, the total number of estimated annual burden hours requested to complete the form is expected to be 656 burden hours (9 hours x 21 facilities + 3 hours x 14 facilities + 4 hours x 75 facilities + 2.5 hours x 50 facilities = 656 hours). Additionally, the CJRP will provide nonrespondents the opportunity to complete critical items only during the last few weeks of data collection. It is estimated that an additional 10 percent of facilities will complete the 15-minute critical item data collection. The total number of estimated burden hours requested to complete the critical items form is expected to be 5 burden hours (.25 hours x 20 facilities = 5 hours). Therefore, the total number of burden hours estimated for the CJRP is 661 hours (656 hours + 5 hours = 661 hours). Table 7 provides an overview of the hour burden estimates by type of data provider (manual or electronic) and facility type.


Table 7. Estimated total burden hours for CJRP pilot test

Data/Facility Type            Facilities   Original Burden   Additional Burden   Total Burden     Total Hours
                                           (per facility)    (per facility)      (per facility)
Full CJRP Form
  Manual Data Providers
    Public Facilities             21        8 hours           1 hour              9 hours          189 hours
    Private Facilities            14        2 hours           1 hour              3 hours           42 hours
  Electronic Data Providers
    Public Facilities             75        3 hours           1 hour              4 hours          300 hours
    Private Facilities            50        1.5 hours         1 hour              2.5 hours        125 hours
Critical Items Form               20        .17 hours         .08 hours           .25 hours          5 hours
Total Burden Hours               180        ~2.8 hours        ~.9 hours           ~3.7 hours       661 hours
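The Table 7 total can be checked directly; the snippet below simply restates the table's arithmetic (no new data):

    # Facilities and total hours per facility, as listed in Table 7.
    rows = {
        "manual public":      (21, 9),
        "manual private":     (14, 3),
        "electronic public":  (75, 4),
        "electronic private": (50, 2.5),
        "critical items":     (20, 0.25),
    }
    total = sum(n * hours for n, hours in rows.values())
    print(total)  # 661.0 burden hours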


JRFC

Based on previous administrations of the JRFC and the Census Bureau's analysis of paradata from the 2018 JRFC, NIJ estimates the average time to complete the original JRFC form was 2 hours. With the addition of new questions, NIJ anticipates the overall burden average to increase by about 15 minutes. Additionally, unlike the CJRP, respondent burden on the JRFC is unlikely to differ by data submission mode or facility type.


The sample size for the JRFC Pilot Test is 200 facilities. Assuming an 80 percent response rate to the full questionnaire request, the total number of estimated burden hours requested to complete the full form is expected to be 360 burden hours (2.25 hours x 160 facilities = 360 hours). Additionally, the JRFC will provide nonrespondents the opportunity to complete critical items only during the last few weeks of data collection. It is estimated that an additional 10 percent of facilities will complete the 15-minute critical item data collection. The total number of estimated burden hours requested to complete the critical items form is expected to be 5 burden hours (.25 hours x 20 facilities = 5 hours). Therefore, the total number of burden hours estimated for the JRFC is 365 hours (360 hours + 5 hours = 365 hours). Table 8 provides an overview.


Table 8. Estimated total burden hours for JRFC pilot test

Data/Facility Type            Facilities   Original Burden   Additional Burden   Total Burden     Total Hours
                                           (per facility)    (per facility)      (per facility)
Full JRFC Form                   160        2 hours           .25 hours           2.25 hours       360 hours
Critical Items Form               20        .17 hours         .08 hours           .25 hours          5 hours
Total Burden Hours               180        ~1.8 hours        ~.2 hours           ~2 hours         365 hours


Total

Based on the above assumptions, the entire pilot test of both the CJRP and the JRFC is estimated to have a total of 1,026 burden hours (see Table 9 for details).


Table 9. Estimated total burden hours for entire pilot test

Data/Facility Type            Facilities   Original Burden   Additional Burden   Total Burden     Total Hours
                                           (per facility)    (per facility)      (per facility)
CJRP                             180        ~2.8 hours        ~.9 hours           ~3.7 hours       661 hours
JRFC                             180        ~1.8 hours        ~.2 hours           ~2 hours         365 hours
Total Burden Hours               360        ~2.3 hours        ~.6 hours           ~2.9 hours     1,026 hours



Data Confidentiality and Security

All information tending to identify individuals (including entities legally considered individuals) will be held strictly confidential according to Title 34, United States Code, Section 10231. A copy of this section is included with this submission as Appendix F. Regulations implementing this legislation require that NIJ staff and contractors maintain the confidentiality of the information and specify necessary procedures for guarding this confidentiality. These regulations (28 CFR Part 22) are also included in Appendix F. The cover letter that accompanies the 2021 CJRP and JRFC pilot tests notifies the persons responsible for providing these data that their response is voluntary and that the data will be held confidential. A copy of this letter, with the necessary notification, is included in Appendix B.


Response data for both the CJRP and JRFC pilot tests will be collected and stored in RTI's National Institute of Standards and Technology (NIST) Moderate "Enhanced Security Network" (ESN) environment. The ESN network is isolated from the internet by an enterprise-level firewall, exposing only interfaces for inbound data such as data collection instruments. ESN access requires two-factor authentication (PIN plus token), and no end-user devices are connected directly to the ESN network. Data stored in the ESN are further protected by access restrictions implemented through Windows security groups, where membership is granted on a least-privilege basis so that only authorized project personnel with a business need to access the data can do so. ESN web servers for data collection instruments allow inbound Hypertext Transfer Protocol Secure (HTTPS) connections in an area separately firewalled from the two-factor ESN.



Institutional Review Board

RTI’s Institutional Review Board (IRB) has determined the project to be not human subjects research (see Appendix G).




References


Hockenberry, S. 2020. Juveniles in Residential Placement, 2017. U.S. Department of Justice, Office of Juvenile Justice and Delinquency Prevention. https://ojjdp.ojp.gov/sites/g/files/xyckuh176/files/media/document/juveniles-in-residential-placement-2017.pdf


Puzzanchera, C. 2020. Juvenile Arrests, 2018. U.S. Department of Justice, Office of Juvenile Justice and Delinquency Prevention. https://ojjdp.ojp.gov/sites/g/files/xyckuh176/files/media/document/254499.pdf


Schwede, L. and Ott, K. 1995. Children in Custody Questionnaire Redesign: Results from Phase 1 Exploratory Interviews. U.S. Census Bureau, Center for Survey Methods Research.


Scott, C., Jimerson, K., Webb, S., and Adolph, N. 2019. Internal Report. U.S. Census Bureau, Economic Reimbursable Surveys Division, Criminal Justice Branch.


List of Appendices


  A. Omnibus Crime Control and Safe Streets Act (34 U.S.C. 10121-10122), Juvenile Justice and Delinquency Prevention Act (34 U.S.C. 11161), and Juvenile Justice Reform Act of 2018 (H.R. 6964 / P.L. 115-385)

  B. Contact Materials

  C. List of External Consultants

  D. CJRP Instrument

  E. JRFC Instrument

  F. Confidentiality of Information (34 U.S.C. 10231) and Privacy Certification Requirements (28 C.F.R. 22)

  G. IRB Approval and Privacy Certificate




