BJS CENSUS OF PUBLICLY FUNDED CRIME LABORATORIES:
PRETEST RESULTS AND RECOMMENDATIONS
OVERVIEW
Due to a growing need for information about forensic laboratories in the United States,
the Bureau of Justice Statistics (BJS) obtains information from crime labs as part of its
Census of Publicly Funded Crime Laboratories (CLC) data collection series. The Urban
Institute (UI) has been contracted to perform the 2009 CLC. In accordance with guidelines
from the Office of Management and Budget (OMB), the Urban Institute administered the
draft BJS CLC instrument to nine publicly funded forensic labs as part of a pretest. The
purpose was to pilot the instrument to assess the level of burden for respondents, evaluate
the utility of the collection, and identify measurement issues or areas needing further
clarification. This report details observations from the administration of the pretest, as well
as findings from post-administration interviews with each of the pretest sites.
METHODOLOGY
The pretest for the CLC consisted of two tasks. First, the research team administered a
draft version of the survey instrument to nine eligible crime labs along with a pretest
respondent questionnaire to obtain additional feedback about the survey. Second, the
research team contacted all pretest site respondents by phone to discuss the experience of
completing the survey, obtain opinions on items being considered for revision, and clarify
any unclear responses.
Survey Administration
The CLC draft instrument was mailed to the nine pretest laboratories on February 11,
2010. Lab responses were delayed due to a snow storm in the Washington, DC area.
However, all responses were received by March 31, 2010. Respondents were also asked to
complete a separate questionnaire asking about completion times and resources expended to
complete the survey (see Appendix A).
Post-Administration Interviews
The UI team asked all pretest site respondents a series of questions, in addition to any
questions that arose regarding each lab's particular responses. All pretest site respondents
were asked the following:
• Do you want your pretest response to count as your official response for the 2009
  Census of Publicly Funded Crime Labs?

• Are cases or requests easier for your lab to report? Are you able to track by both?
  Did you have any problems counting cases for items D3 and D4 and counting
  requests for items D8-D20?

• How comfortable are you reporting the lab director salary?

• Did you calculate or estimate the turnaround time? Was this a burdensome or
  difficult question to answer? How important do you think this question is to the
  field?

• What is your opinion on item F7, asking about performance expectations? Was this
  a burdensome or difficult question to answer? How important do you think this
  question is to the field?

• Did you use the help text or glossary? How helpful were these?

• Do you have any additional suggestions or is there anything else you want to share
  about the experience of completing the census?

These questions were asked to gain a better understanding of the general experience of
completing the survey and to hear respondent opinions about items being considered for
revision. In addition, the UI team discussed potential survey changes with the two forensic
consultants included on the team. Their opinions are also identified below.
FINDINGS
The following findings are divided into four categories: (a) reported amount of time to
complete the survey by pretest sites, (b) general feedback on instrument, (c) respondent
opinions on items being considered for revision, and (d) observations of other survey issues
and responses to questions about individual survey responses.
Completion Times
Pretest sites reported a wide range in survey completion times. The overall time to
complete the survey ranged from 1 hour and 50 minutes to 52 hours and 53 minutes (see
Table 1). On average, pretest sites took nine and a half hours to complete the survey. Lab 7
was an outlier, reporting it took nearly 53 hours to complete the survey. Removing Lab 7
from calculations, the average time to complete the survey decreased to 4.1 hours (see Table
2). Lab 7 has more disciplines than the typical lab and explained that it sent the survey
out to each unit to complete the workload sections. Adding the time to complete the
overall laboratory items to the time each unit spent on its individual sections resulted
in the reported completion time.
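
The outlier's effect can be reproduced from the "Entire Survey" row of Table 1 below.
The following minimal Python sketch recomputes both averages, using sample standard
deviations, which match the values reported in the tables:

    from statistics import mean, stdev

    # Total completion times in minutes ("Entire Survey" row of Table 1);
    # index 6 is Lab 7, the outlier.
    totals_min = [120, 330, 110, 120, 360, 135, 3163, 300, 480]
    trimmed = totals_min[:6] + totals_min[7:]  # drop Lab 7

    print(f"All labs:        {mean(totals_min)/60:.2f} hr "
          f"(SD {stdev(totals_min)/60:.2f} hr)")   # 9.48 hr (SD 16.36 hr)
    print(f"Excluding Lab 7: {mean(trimmed)/60:.2f} hr "
          f"(SD {stdev(trimmed)/60:.2f} hr)")      # 4.07 hr (SD 2.36 hr)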


The Current Issues section took the least amount of time for labs to complete, while the
Workload section was the most burdensome. The nine pretest sites reported using between 1
and 15 staff members to complete the survey, and no labs reported additional resource
expenses to complete the survey other than the listed staff time. Individual lab completion
times for each sub-section are listed below in Tables 1 and 2.
General Feedback on Instrument
Conversations with pretest sites did not elicit strong complaints about the census
instrument or overall burden in completing it. All sites, except one, wanted the pretest to
count as their official submission. While some of the sites reported that the Help Text and
Glossary were moderately helpful, other labs did not use these tools or could not remember
whether they had used them. In multiple contexts, labs reported that the most difficult aspect
of the census was that labs track information differently.
Respondent Opinions on Items Considered for Revision
Pretest sites were asked about the following issues or items that were being considered
for removal or revision.
Tracking cases versus requests
Five pretest labs reported they tracked by request or submission; three labs said they
could track by either unit of measurement; and one lab said there was no distinction between
cases and requests within their agency (i.e., 10 items come in from one crime and this is one
request). A couple pretest respondents had reported requests in items D3-D4 without noting
on their survey that they were reporting requests rather than cases.


Table 1. Response Times for All Pretest Sites (per-lab times in minutes)

Section              | Lab 1 | Lab 2 | Lab 3 | Lab 4 | Lab 5 | Lab 6 | Lab 7 | Lab 8 | Lab 9 | Avg (min) | SD (min) | Avg (hr) | SD (hr)
A. Organization      |     3 |    30 |     5 |    30 |    15 |     4 |     3 |    15 |     5 |     12.22 |    11.12 |    0.204 |   0.185
B. Budget            |    36 |    60 |    10 |    15 |    30 |    12 |    10 |    60 |    30 |     29.22 |    19.94 |    0.487 |   0.332
C. Staff             |     7 |    30 |    15 |    15 |    15 |    16 |   180 |    30 |    30 |     37.56 |    54.07 |    0.626 |   0.901
D. Workload          |    60 |   120 |    60 |    30 |   150 |    65 |  2880 |   120 |   380 |    429.44 |   924.84 |    7.157 |  15.414
E. Outsourcing       |     4 |    15 |    10 |    10 |    75 |     7 |    10 |    10 |    15 |     17.33 |    21.90 |    0.289 |   0.365
F. Quality Assurance |     9 |    60 |     5 |    10 |    60 |    30 |    60 |    60 |    15 |     34.33 |    25.30 |    0.572 |   0.422
G. Current Issues    |     1 |    15 |     5 |    10 |    15 |     1 |    20 |     5 |     5 |      8.56 |     6.78 |    0.143 |   0.113
Entire Survey        |   120 |   330 |   110 |   120 |   360 |   135 |  3163 |   300 |   480 |    568.67 |   981.85 |    9.478 |  16.364

Table 2. Response Times for Pretest Sites Excluding Lab 7 (per-lab times in minutes)

Section              | Lab 1 | Lab 2 | Lab 3 | Lab 4 | Lab 5 | Lab 6 | Lab 7 | Lab 8 | Lab 9 | Avg (min) | SD (min) | Avg (hr) | SD (hr)
A. Organization      |     3 |    30 |     5 |    30 |    15 |     4 |  ---  |    15 |     5 |     13.38 |    11.30 |    0.223 |   0.188
B. Budget            |    36 |    60 |    10 |    15 |    30 |    12 |  ---  |    60 |    30 |     31.63 |    19.87 |    0.527 |   0.331
C. Staff             |     7 |    30 |    15 |    15 |    15 |    16 |  ---  |    30 |    30 |     19.75 |     8.94 |    0.329 |   0.149
D. Workload          |    60 |   120 |    60 |    30 |   150 |    65 |  ---  |   120 |   380 |    123.13 |   111.32 |    2.052 |   1.855
E. Outsourcing       |     4 |    15 |    10 |    10 |    75 |     7 |  ---  |    10 |    15 |     18.25 |    23.22 |    0.304 |   0.387
F. Quality Assurance |     9 |    60 |     5 |    10 |    60 |    30 |  ---  |    60 |    15 |     31.13 |    25.02 |    0.519 |   0.417
G. Current Issues    |     1 |    15 |     5 |    10 |    15 |     1 |  ---  |     5 |     5 |      7.13 |     5.62 |    0.119 |   0.094
Entire Survey        |   120 |   330 |   110 |   120 |   360 |   135 |  ---  |   300 |   480 |    244.38 |   141.56 |    4.073 |   2.359


Laboratory respondents brought up a few issues for consideration. Respondents reported
that it was important to note that requests could have multiple items, so tracking at either the
case- or request-level would not indicate the number of evidence samples being analyzed.
Regarding D4 (the number of cases backlogged on 1/1/2010), one respondent said this was
difficult to answer, because a case could have some backlogged items while other items were
complete. It was unclear from the survey whether a case should be counted as backlogged if
one or all of its items were backlogged. In addition, laboratories in multi-lab systems could get separate
requests from the same case (i.e., one case could create multiple submissions to different
labs). Furthermore, laboratories in multi-lab systems can transfer evidence to other labs for
analysis. One pretest site counted requests they received from another lab in the multi-lab
system, because they were responsible for analyzing the evidence. In contrast, another
pretest site did not count requests they received from other labs in their multi-lab system,
because they assumed the originating lab would count this in their requests.
The diagrams below illustrate some of the varying ways that cases and requests can be
handled in laboratories (this is not exhaustive) and the relationship between request and item.
The item-request relationship is ultimately a function of the physical evidence generated
from a criminal event and is not usually affected by laboratory policy. Figure 4 shows how
one law enforcement case number may generate two laboratory case numbers. This situation
may occur when evidence is collected from a suspect (DNA, fingerprints, etc.) during the
course of the investigation for comparison with the original items collected, or when
additional evidence is discovered or collected at a later date.
On the issue of using case versus request, the project’s forensic consultants agreed that
D4 should be changed to requests, but were divided on whether D3 should be tracked as case
or request. One consultant felt D3 should remain as is for legacy reasons. The other
consultant recommended D3 be changed to requests.

Figure 1. Requests and Cases [diagram not reproduced]

Figure 2. Requests and Cases [diagram not reproduced]

Figure 3. Requests and Cases [diagram not reproduced]

Figure 4. Requests and Cases [diagram not reproduced]

Lab Director Salary
No lab respondents felt uncomfortable reporting the lab director salary. Every
respondent independently brought up the fact that the director salary is public information.
The two consultants strongly agreed that reporting lab director salary should not be an issue
for labs. One said its absence would be a “hole in the survey.”
Turnaround Time
Every interviewed respondent felt the items asking about the current average turnaround
time for requests (D8-D16h, measured in full days) were extremely important for the field.
Respondents said that this information could be used to make funding decisions or to request
new funding or positions (by comparing their lab’s turnaround to the national rate). Labs
said this metric was important internally to compare themselves to other labs in multi-lab
systems, or to compare their turnaround time to their reported turnaround time in previous
years. Most respondents did not find this to be a burdensome question, because they were
able to calculate this with their LIMS. However, labs tended to calculate the turnaround time
with whatever entry and end stage their LIMS used (i.e., they did not necessarily use the
definition reported in the help text). Example entry stages used were (a) evidence submitted
to lab, (b) evidence submitted to section, (c) request made, and (d) assignment made. The
pretest labs used "report complete" as the end stage. The census instrument defined
turnaround time as the time from evidence assignment to report generation. One pretest
respondent was also concerned that some analyses take substantially more time than others
(e.g., trace may need to test for 5 substances for one sample and 2 substances for another).
Since the item is defined as "average" turnaround time, these differences should average out.
However, the research team did notice that the current definition of "turnaround time" does
not specify a time period over which to average.
Project consultants differed in their opinions. One of the consultants felt it was important
to keep this item in the census, while the other consultant thought it should be dropped
because it will vary among analysts and with case submission rates, which are not constant
throughout the year.
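
As one illustration of how the choice of stages and averaging period changes the statistic,
the sketch below computes an average turnaround under the census definition (evidence
assignment to report generation, in full days) over calendar year 2009. The sample records
are hypothetical.

    from datetime import date

    # Hypothetical request records: (assignment date, report completion date).
    # A lab substituting a different entry stage (e.g., evidence submitted to
    # lab) would produce a different average from the same work.
    requests = [
        (date(2009, 1, 5),  date(2009, 2, 9)),   # 35 days
        (date(2009, 3, 14), date(2009, 3, 28)),  # 14 days
        (date(2009, 6, 1),  date(2009, 8, 30)),  # 90 days
    ]

    start, end = date(2009, 1, 1), date(2009, 12, 31)  # averaging period

    # Average over requests whose reports were completed within the period,
    # measured in full days per the draft item's units.
    done = [(a, r) for a, r in requests if start <= r <= end]
    avg_days = sum((r - a).days for a, r in done) / len(done)
    print(f"Average turnaround: {avg_days:.0f} days")  # 46 days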
Performance Expectations
Pretest respondents had more mixed opinions on the utility of item F7, asking about
performance expectations (expected # requests completed by one FTE examiner per year) for
each discipline. Two-thirds thought this information was helpful, while one third were
unsure whether it was important. Most respondents did not think this was burdensome to
complete. There was concern, however, that this item does not take into consideration the
fact that some analysts have competing responsibilities (e.g., training) and are not expected to
perform casework full-time. Therefore, expectations will vary across all staffing levels.
One of the consultants was also concerned with this issue and said that, in particular,
entry-level staff would have higher caseload expectations than more senior staff who are
more involved with training and supervision. The other consultant said this information is
critical to laboratory managers. From the interviews, it was clear that some labs have this
outlined in policy, while other labs did not have formal, established expectations. One
respondent suggested that it would be more helpful to know the actual performance of
examiners rather than the performance expectations. Those who thought this was an
important question said it would be helpful to see what other labs expect of their staff.
Additional Observations and Survey Issues
In addition to the questions asked of all pretest respondents above, some individual-level
concerns and problems arose. These are listed below:
Toxicology Subcategories
One lab checked all categories for item A9b, because analysts stop at BAC analysis if a high
enough BAC is found and continue on to drug analyses only if the BAC is under a certain
limit. This confusion should be alleviated by the previously proposed change to make this
item a check-all and to modify the wording.
Staffing
Multiple labs misinterpreted item C2 to mean the number of positions that were funded
but not filled (rather than the intended meaning, which was the overall number of positions
that were funded, regardless of whether they were filled). One lab also requested additional
clarification on where to include lab staff such as document examiners, latent print
examiners, and crime scene specialists.
Budget
One lab had a large discrepancy between the reported budget (B1) and the sum of budget
categories (B2). In the follow-up interview, the lab reported this was due mainly to a large
contract for a building lease (and also somewhat to additional operational costs). Two other
labs reported $0 for personnel budgeted amounts, because this cost does not come out of the
lab's own budget (one lab's personnel costs were paid by the state, while the other lab's
personnel costs were paid by headquarters).
Workload
A small number of labs had difficulty reporting the forensic biology workload sections
when completing the census. One lab did not include convicted offender (D19) and arrestee
(D20) workload statistics in the overall Forensic Biology category (D16), because two
separate divisions completed these independently (one section did all casework samples
while the other did all convicted/arrestee samples). Another lab could not separate out sexual
assault evidence within their LIMS, so they used multiple LIMS queries to come to their best
estimate of sexual assault evidence requests.
One lab said it would be easier to complete the census if the workload items were
identical across disciplines. Another lab reported they were unable to separate out trace
workload from impressions workload statistics (in this lab, these two disciplines are within
the same division).
Outsourcing
One lab was unsure whether to put "0" or "NA" for a discipline the lab performs but does
not outsource. Although the Help Text has instructions on when to use each response
option, this pretest respondent did not use the Help Text and, consequently, completed this
item incorrectly.
RECOMMENDATIONS
The following recommendations are made based on findings from the pretest survey
administration, post-administration interviews with respondents, and discussions with the
team’s forensic consultants. The research team looks forward to discussing the pretest results
and the following UI recommendations in the near future in order to make final decisions on
survey revisions.
1. Include additional instruction at the beginning of the census form for respondents to
make a copy of their completed census form for their records and to make comments
in the Feedback section if they are unable to complete a response according to the
directions provided.
a. Conclusion: BJS and UI agree to adopt recommendation 1.
2. Remove A6 (year lab established) and A8 (have there been any major modifications
or improvements in your facility since 2005) as planned, but include A7 (year facility
constructed) to help capture part of the phenomenon of lab upgrades.
a. Conclusion: BJS and UI agree to adopt recommendation 2.
3. In items D3 and D4 (asking about the number of cases the laboratory received in
2009 and the number of cases backlogged on 1/1/2010, respectively), report the
number of requests rather than the number of cases. While this still will not capture
the number of evidence items being analyzed, it will approximate this definition
more closely than the number of cases. BJS should note that if this change is made,
direct comparisons to previous survey waves will not be possible for these items.
Cautions about the interpretation of how request is defined (including the fact that
this does not directly correlate to the number of items) should be included in publicly
available data and reports. Census Help Text should also provide instruction for how to
handle cases received from or sent to other labs in multi-laboratory systems. For future
administrations of the census, BJS may want to use laboratory contacts to learn whether it
would be appropriate and feasible for labs to track workload statistics by item (as opposed
to requests or cases).
a. Conclusion: UI and BJS agree to change the unit in D3 and D4 from ‘case’
to ‘request.’
4. Do not remove item C4-C6a (director salary range), because no pretest sites reported
discomfort with this question. If removed, audiences may question why this
information is not provided.
a. Conclusion: UI and BJS agree to include ‘Director’ as a salary category.
5. Engage in more discussion over the benefits and drawbacks of including turnaround
time (items D8-D16h). On the one hand, labs believe this is a very important
measure. On the other hand, labs did not necessarily use the listed definition. If this
item remains, the definition should be included in the actual item (as opposed to only
being listed in the glossary and Help Text). One option is to keep the item, but
include an additional question that asks what start and end stages are being used for
the calculations. In addition, there should be additional guidance on what time
period to use for averaging. We suggest using the year-long period of 2009, but
additional research may be needed to determine if this is a feasible request to make
of labs.
a. Conclusion: UI and BJS agree to remove D8-D17 part h. due to difficulty in
ensuring consistent reporting and scope of the data collection.
6. Revisit the importance of including performance expectations (item F7). While this
has been included in past versions, pretest respondents had mixed opinions on the
value of this question. Burden was reported to be low, but some labs have formal
expectations whereas other labs have no performance expectations as a part of policy
and may estimate these or calculate based on actual performance. Furthermore,
performance expectations would vary across different examiners depending on
responsibilities other than casework. An alternative approach to this question might
be to ask whether the labs have formalized performance expectations.


a. Conclusion: UI and BJS agree to drop the item requesting performance
expectations and replace this item with a Y/N item asking if the lab has
performance expectations for any discipline.
7. Change item A9b to be a check-all of the following categories: (1) Antemortem
BAC, (2) Antemortem BAC and Drugs, and (3) Postmortem.
a. Conclusion: UI and BJS agree to adopt recommendation 7.
8. Change the wording in item C2 from “FTE positions were funded (but not
necessarily filled) at your laboratory” to “FTE positions were funded (may or may
not be filled) at your laboratory.”
a. Conclusion: UI and BJS agree to adopt recommendation 8.
9. Provide additional examples in help text to clarify where respondents should
categorize examiners such as document examiners, latent print examiners, and crime
scene specialists. The Urban Institute will work with BJS and the project’s forensic
consultants to determine where these employees are most appropriately placed.
a. Conclusion: UI and BJS agree to adopt recommendation 9.
10. BJS and the Urban Institute should revisit the budget categories and make
determinations for when a respondent will be called back to clarify discrepancies in
the listed budget and the sum of the budget categories. Since not all budget
categories are included in item B2, it is expected that there will always be at least
some small discrepancy. However, rules need to be set for whether, and at what point, a
respondent is called for further clarification on larger discrepancies. BJS and UI also need
to discuss how to handle situations where large budget portions are paid by agencies outside
of the individual lab. Because it is unknown what is included in the listed budget, it may be
beneficial to consider adding checkboxes where respondents can indicate which categories
are included in the budget. An example of how this was done for the BJS Census of State
Court Prosecutors is below:
B2. Does the budget amount entered at B1 include funding for the following budget
categories?

                                  Yes   No
a. Staff salaries                  □     □
b. Expert services                 □     □
c. Investigator services           □     □
d. Interpreter services            □     □
e. Child support enforcement       □     □
f. DNA testing                     □     □
g. Staff training                  □     □


a. Conclusion: B2 and B3 have been altered to clarify budget categories.
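
A callback rule of the kind recommendation 10 calls for could take roughly the following
shape. The 10 percent threshold and the category names are assumptions for illustration
only; BJS and UI have not set an actual cutoff.

    # Sketch of a possible callback rule for recommendation 10. The 10 percent
    # threshold and category names are illustrative assumptions only.
    def needs_callback(b1_total, b2_categories, threshold=0.10):
        """Flag a lab for follow-up when the listed budget (B1) and the sum
        of the reported budget categories (B2) diverge by more than
        `threshold` of B1. Some gap is always expected, because B2 does not
        enumerate every category."""
        if b1_total <= 0:
            return False  # no relative discrepancy can be computed
        gap = abs(b1_total - sum(b2_categories.values()))
        return gap / b1_total > threshold

    # Example: a large building lease missing from B2 triggers a callback.
    b2 = {"personnel": 1_200_000, "equipment": 150_000, "supplies": 80_000}
    print(needs_callback(2_000_000, b2))  # True: 28.5% discrepancy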
11. Include instructions within the item on when to use “0” or “NA” for item E3 on
outsourcing.
a. Conclusion: UI and BJS agree to add instructions to question E3.
12. Add explicit instruction on items with multiple “Yes/No” responses that respondents
need to check the “No” box and not leave it blank. In addition, BJS and UI should
discuss how to handle blank responses on Yes/No items where "Yes" is endorsed for some
parts but the remaining parts are left blank rather than marked "No." Options are to:
a. Follow up on all missing items where "Yes" is endorsed and all other parts
of the item are left blank;
b. Change the item to check-all (this has theoretical problems in that there is no
way to determine whether something is left blank intentionally or is a "No"
response); or
c. Only follow up with respondents if at least one "No" is endorsed but other
parts of the item are left blank.
d. Conclusion: UI and BJS agree to adopt option a. of recommendation 12.
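
In data-processing terms, option (a) amounts to a simple flagging rule. The sketch below is
one plausible reading of it; the item structure, part labels, and values are hypothetical, not
taken from the instrument.

    # One plausible reading of option (a): flag a multi-part Yes/No item for
    # follow-up when at least one part is endorsed "Yes" and any other part
    # is left blank. Part labels and values here are hypothetical.
    def flag_for_followup(item_responses):
        """item_responses maps each part of a Yes/No grid item to
        'yes', 'no', or None (left blank)."""
        values = item_responses.values()
        return "yes" in values and None in values

    grid = {"a": "yes", "b": None, "c": "no", "d": None}
    print(flag_for_followup(grid))  # True: blanks alongside an endorsed "Yes"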
13. BJS and the Urban Institute need to engage in discussions to come to a decision on
when to encourage or discourage estimation if a lab cannot provide an exact number
(e.g., it does not have an existing mechanism to calculate a statistic, a discipline's
statistics cannot be separated from another discipline's, or it would be too
burdensome to determine). While estimating will likely not be encouraged on the
instrument, BJS needs to provide guidance to UI on how to handle individual
situations when following up with labs on incomplete items. Census instructions
currently guide labs to contact the help line if they are unable to provide exact
counts.
a. Conclusion: UI and BJS agree to accept no estimates and to re-evaluate this
decision when the collection period is nearing completion.

