JSM 2013 - Government Statistics Section

Nonresponse Bias in the
Survey of Occupational Injuries and Illnesses October 2013
Erin M. Huband, Patrick Bobbitt
Bureau of Labor Statistics, 2 Massachusetts Ave NE Room 3160, Washington, DC 20212

Abstract
The Bureau of Labor Statistics Survey of Occupational Injuries and Illnesses collects data
to estimate the counts and rates of work-related injuries and illnesses. Participation by
private sector employers is mandated by the Occupational Safety and Health Act of 1970.
For state and local government establishments, however, state laws determine whether
the survey is mandatory. While private sector response rates reflect the mandatory nature
of the survey, response rates for states in which public sector response is voluntary are
low. To determine whether the survey suffers from bias attributable to non-response,
government units were classified as either “likely” or “unlikely” respondents using a
logistic regression model. Counts and rates of injuries and illnesses for these groups were
then compared to provide an indication of potential nonresponse bias. This paper
describes the methodology used for this analysis and presents some preliminary results
from the analysis.
Key Words: nonresponse bias, logistic regression

1. Overview
1.1 Survey Scope
The Survey of Occupational Injuries and Illnesses (SOII), administered by the Bureau of
Labor Statistics (BLS), provides annual information on the rates and counts of work-related injuries and illnesses, and on how these statistics vary by incident, industry,
geography, occupation, and other characteristics. Each yearly sample of workplaces
selected by the BLS consists of approximately 240,000 establishments. SOII data are
solicited from employers having 11 or more employees in agricultural production, and
from employers of all sizes in all other industries. Starting with survey year 2008, SOII
also collected data from state and local government establishments to provide estimates
of occupational injuries and illnesses among government workers for the nation and each
state. In 2011, state and local governments accounted for only 1.3% of establishments and only 6.2% of national employment. Prior to 2008, state and local
government injury and illness estimates were available for only a selection of states and
at varying levels of detail. Self-employed persons are not considered to be employees
under the 1970 act. Private households, the United States Postal Service, and federal
government workers are also out of scope for the SOII. Most SOII data are collected directly from employers; the exceptions are mining and railroads, for which the data come from the Mine Safety and Health Administration of the U.S. Department of Labor and the Federal Railroad Administration of the U.S. Department of Transportation.


1.2 Survey Sample Design
Because the SOII is a federal-state cooperative program and the data are designed to meet
the needs of the states, an independent sample is selected for each state. The survey uses
a stratified sample design, with strata of state, ownership (whether private, state
government, or local government), industry, and size class (a grouping defined by the
establishment’s average annual employment). All establishments in the largest size class
are selected with certainty. Sample sizes are determined by the participating states based
on budget constraints. The optimal allocation procedure used by the SOII distributes the
sample to the industries in a manner intended to minimize the variance of the total
number of recordable cases in the universe or, alternatively, the incidence rate of
recordable cases in the universe. In strata with higher variability of the data, a larger
sample is selected.
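The allocation idea can be illustrated with Neyman allocation, a standard variance-minimizing scheme for stratified samples; the SOII's actual optimal allocation procedure is more involved, and the strata below are invented for illustration.

```python
import numpy as np

def neyman_allocation(N_h, S_h, n_total):
    """Allocate n_total sample units across strata in proportion to
    N_h * S_h (stratum size times stratum standard deviation)."""
    weights = N_h * S_h
    return np.round(n_total * weights / weights.sum()).astype(int)

# Hypothetical strata: population sizes and standard deviations of
# recordable case counts
N_h = np.array([1000, 500, 200])
S_h = np.array([2.0, 5.0, 10.0])
alloc = neyman_allocation(N_h, S_h, 300)
print(alloc)  # the small, highly variable stratum gets as much sample as the largest
```

Strata with higher variability receive a larger share of the sample, which is the behavior described above.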

1.3 Survey Estimation
Data collected for the SOII are used to tabulate estimates for two separate data series:
annual summary (industry-level) estimates and more-detailed case circumstance and
worker characteristic estimates for cases that involved days away from work. Part of the
estimation process involves weighting sample units and cases to represent all injuries and
illnesses from units on the frame from which the sample was selected.

1.4 Survey Nonresponse
A 2011 SOII response rate analysis showed that for the years 2003–2010, overall response rates slowly declined from 94% to 90%. Private industry response rates stayed close to 91%, but response rates for state and local governments dropped sharply starting in 2008, the year the BLS began collecting government data for all states. Before then, it had collected state and local government data from only a small number of states.
Even though all states and their government establishments are now surveyed, in some states reporting government injury and illness data remains voluntary. In those states where reporting data for state and local governments is not required by law, response rates for governments were low (between 30% and 50% in 2010). Though overall response rates are not low enough to trigger an Office of Management and Budget-mandated nonresponse bias analysis, those for public sector data in voluntary states are. This is the case in which nonresponse bias was studied.

1.5 Nonresponse Bias Analysis
As part of this nonresponse bias analysis, we assessed which factors influence whether a sampled unit responds. Using a logistic regression model, we used the responses of those least likely to respond as a proxy for those that did not respond. Comparing likely responders to unlikely responders allowed us to measure any nonresponse bias.

2. Methodology
2.1 Overview
Data for this analysis were compiled from the database of all sample units from the 2009
– 2011 SOII. There are 23 states for which public sector reporting is voluntary. The entire
samples (including government and private sector units) for these 23 states comprise
280,016 units (37%) of all 755,545 SOII sample units for the three-year period.
Nonviable units (units that had gone out of business or were out of scope for the survey,


that were reported for in more than one way, or for which good addresses were not
available) and units from the private sector were removed from the dataset, leaving
19,067 observations. The dataset also included descriptive information for each unit: employment, state, industry, number of injury and illness cases, and sampling weight. Each unit was identified as either a respondent or nonrespondent, based on whether it responded to the SOII. Because a unit's respondent status is a binary variable, logistic regression was a natural choice for modeling it.
Like linear regression, logistic regression uses input (or independent) variables. Unlike linear regression, it uses the independent variables to predict the probability of the outcome (or dependent variable) occurring. Once the model was fit, we used its predicted probabilities to anticipate whether similar units will respond to the SOII in the future. Knowing whether units are likely to respond helps us build more efficient allocation algorithms. After using the model to classify units as predicted respondents or nonrespondents, we compared the two groups to see whether any bias exists.
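The modeling step can be sketched as follows. This is a minimal illustration on synthetic data; the variable names (region, size_class, responded) stand in for the actual SOII variables, and the paper's analysis was run in SAS rather than Python.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def one_hot(codes, n_levels):
    """Dummy-code a categorical variable, dropping the first level."""
    out = np.zeros((len(codes), n_levels))
    out[np.arange(len(codes)), codes] = 1.0
    return out[:, 1:]

rng = np.random.default_rng(0)
n = 1000
region = rng.integers(0, 6, n)        # hypothetical region codes
size_class = rng.integers(0, 5, n)    # hypothetical size-class codes
X = np.hstack([one_hot(region, 6), one_hot(size_class, 5)])

# Binary outcome: 1 if the unit responded to the survey, 0 otherwise
responded = rng.binomial(1, 0.6, n)

model = LogisticRegression(max_iter=1000).fit(X, responded)
probs = model.predict_proba(X)[:, 1]   # predicted probability of response
```

The fitted model maps each unit's categorical characteristics to a probability of response, which is the quantity used for classification later in the paper.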

2.2 Logistic Regression Model
A previous analysis showed that a unit's geographical region, size class (five groups based on the number of employees), and industry sector affect whether the unit will respond (Huband, 2010). The set of independent variables used for this analysis was as follows:
• Ownership (whether in state or local government)
• Region (six geographical regions plus one for the states whose data are collected by staff in the national office)
• Size class (five levels depending on the number of employees)
• Supersector (eleven different categories of industries)
• Weight class (five levels depending on the sample weight of the establishment)
• Certainty (whether the unit was selected to be in the sample with probability 1)
• Interaction between region and supersector
The SAS output for the logistic regression model gave a likelihood ratio chi-square test statistic of 1960.9431, indicating that the model fits significantly better than a model with no independent variables. The other test statistics, Score (1822.8242) and Wald (1468.8408), also indicated that the model was significant. Among the initial independent variables, weight class was the only one found not to improve the model's fit; it was therefore dropped during the stepwise logistic regression. The point estimates and standard errors for the independent variables can be found in Appendix A.
The following table shows the significance test results for the independent variables, each significant at the α = 0.05 level.


Table 1: Independent Variables and Corresponding Test and Significance Statistics

Independent Variable    Degrees of    Wald          Significance
                        Freedom       Chi-Square    Level
Ownership                1            150.1707      < 0.0001
Region                   5             11.1840        0.0479
Size class               4            413.9952      < 0.0001
Supersector             10             38.3634      < 0.0001
Certainty                1             35.2425      < 0.0001
Region * Supersector    47            180.5169      < 0.0001

2.3 Classification Test
After the model was built and found to fit the data well, we had to see how well the
model classified the units as respondents. To do this, a classification test was run. The
first step in running a classification test is to split the data into two groups: one used for
creating the model, the other for testing it. The split was done by simple random sampling, with 90% of the dataset used for model building and the remaining 10% used to test the model. This 90/10 split ensured enough data on which to build the model while leaving the test set enough observations to estimate the model's performance.
The model built on the 90% was then applied to the randomly selected 10%. Based on this model, units were classified as "likely" respondents if the predicted probability of responding was greater than some cut-off value (candidate values ranged from 0.20 to 0.90). If the predicted probability was less than the cut-off value, the establishment was classified as an "unlikely" respondent.
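The split-and-classify procedure can be sketched as below. The data are synthetic stand-ins, and 0.40 is used as an example cut-off; the paper evaluates eight candidate values.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))                # stand-in predictors
y = (rng.random(2000) < 0.6).astype(int)      # stand-in response indicator

# 90% of units for model building, 10% held out for the classification test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, random_state=1)

model = LogisticRegression().fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]

cutoff = 0.40
likely = probs > cutoff    # True marks a "likely" respondent at this cut-off
```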
Next, the results of the classification test were summarized in confusion matrices for each
of the eight cut-off values.
                          Actual               Actual
                          Respondent           Nonrespondent
Predicted Respondent      True Positives       False Positives
Predicted Nonrespondent   False Negatives      True Negatives

Using the true positives, false positives, and false negatives, the precision (the proportion
of those predicted as respondents that actually responded), recall (the proportion of actual
respondents correctly predicted to be respondents), and F1 scores were calculated. The F1
score (or F-measure) combines precision and recall.
F1 = 2 * (precision * recall) / (precision + recall)
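As a check on these definitions, the three statistics can be computed from a single confusion-matrix column; here the counts from the 0.40 cut-off in Table 2 are used.

```python
# Confusion-matrix counts for the 0.40 cut-off (from Table 2)
tp, fp, fn = 561, 496, 220

precision = tp / (tp + fp)   # predicted respondents that actually responded
recall = tp / (tp + fn)      # actual respondents correctly predicted
f1 = 2 * precision * recall / (precision + recall)

print(round(precision, 3), round(recall, 3), round(f1, 3))
# → 0.531 0.718 0.610, matching the Table 2 column
```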

These three statistics helped determine our cut-off value. In this case, the F1 score reaches
a maximum at the cut-off of 0.40. The table below shows these summary statistics (as
well as values from the confusion matrices) for each of the eight cut-off values.


Table 2: Possible Cut-off Values with Confusion Matrix Statistics

Cut-off Value      0.20    0.30    0.40    0.50    0.60    0.70    0.80    0.90
True Positives      766     702     561     343     133      59      43      37
False Positives    1033     855     496     271      81      16       5       1
False Negatives      15      79     220     438     648     722     738     744
True Negatives       93     271     630     855    1045    1110    1121    1125
Precision         0.426   0.451   0.531   0.559   0.621   0.787   0.896   0.974
Recall            0.981   0.899   0.718   0.439   0.170   0.076   0.055   0.047
F1                0.594   0.601   0.610   0.492   0.267   0.138   0.104   0.090

The F1 score is at a maximum when the cut-off value is 0.40, but precision and recall reach their maximums at opposite ends of the cut-off range. To find a value that made sense for all three statistics, we treated the precision and recall as means of Bernoulli random variables and created confidence intervals around them. For cut-off values less than 0.40, the precision values are not significantly different from each other. For cut-off values greater than 0.60, the recall values are not significantly different from each other. Between the cut-off values 0.40 and 0.50, precision does not differ significantly while recall does. For this reason, 0.40 was selected as the cut-off value. Although 0.50 might seem like the natural choice, we found that with a cut-off of 0.40 the predicted response rate matched the response rate observed in the data, solidifying our choice.
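The interval comparison can be reproduced from the Table 2 counts. The exact interval construction used in the study is not specified, so a normal (Wald) approximation to the Bernoulli mean is assumed here.

```python
import math

def wald_ci(successes, n, z=1.96):
    """95% normal-approximation confidence interval for a Bernoulli mean."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

# Precision: successes = TP, trials = TP + FP (counts from Table 2)
prec_040 = wald_ci(561, 561 + 496)
prec_050 = wald_ci(343, 343 + 271)
# Recall: successes = TP, trials = TP + FN (781 actual respondents)
rec_040 = wald_ci(561, 561 + 220)
rec_050 = wald_ci(343, 343 + 438)

print(intervals_overlap(prec_040, prec_050))  # True: precisions comparable
print(intervals_overlap(rec_040, rec_050))    # False: recall clearly higher at 0.40
```

The result matches the reasoning above: between 0.40 and 0.50, the precision intervals overlap while the recall intervals do not, favoring the 0.40 cut-off.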

2.4 Applying the Classification
Once the cut-off value was chosen, the original model was applied to the original dataset, and the probability of response was calculated for each unit. If that probability was greater than 0.40, the unit was labeled a "likely" responder; if it was less than 0.40, the unit was labeled an "unlikely" responder.

2.5 Comparison
Once we had predicted which establishments were likely to respond, we compared the means of the original sampling weights, raw counts of total injuries and illnesses, weighted counts of total injuries and illnesses, and injury and illness rates for the two groups (likely and unlikely respondents). For each year, the two groups differed significantly at the α = 0.05 level, indicating that there is potential for nonresponse bias. The comparisons were made using t-tests across many different levels.
From the charts below, we can see that mean case counts and mean case rates differ for likely and unlikely respondents. Comparing the two charts, the direction of the difference reverses between counts and rates; case rates take into account the hours worked, which could explain some of the difference.


Chart 1: Bar chart of mean case counts for predicted likely and unlikely respondents.

Chart 2: Bar chart of mean case rates for predicted likely and unlikely respondents.
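The group comparison can be sketched as below. The data are synthetic case counts with group means loosely modeled on Chart 1, and Welch's unequal-variance t-test is assumed; the paper does not specify which t-test variant was used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic case counts for the two predicted groups (illustrative only)
likely_counts = rng.poisson(14, 500).astype(float)
unlikely_counts = rng.poisson(10, 500).astype(float)

# Welch's t-test comparing the two group means
t_stat, p_value = stats.ttest_ind(likely_counts, unlikely_counts,
                                  equal_var=False)
significant = p_value < 0.05   # α = 0.05, as in the study
```

A significant difference between the likely and unlikely groups, as found in the actual data, is the signal of potential nonresponse bias.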

3. Conclusion
Although there is an indication of nonresponse bias within the SOII, it is most likely confined to the limits of this study: state and local government units within those states where reporting to the survey is voluntary. In states that require the survey to be submitted, BLS response rates are around 90%. Again, in 2011, state and local governments represented only 1.3% of establishments and 6.2% of employment. Because these percentages are so low, it is unlikely that nonresponse bias associated with state and local governments had a large impact on the national estimates for all industries. Future work should examine public sector data for all states, as well as private sector data for the entire country.
Any opinions expressed in this paper are those of the authors and do not constitute
policy of the Bureau of Labor Statistics.


Appendix A: Maximum Likelihood Estimates and Standard Errors for
Independent Variables in the Logistic Regression Model

Variable               First    Second    Estimate     Standard
                       Value    Value                  Error
Intercept                                 -1.38801      0.24688
Ownership              20                 -0.43731      0.03569
Region                 ATL                 0.10874      0.27839
Region                 BOS                -0.44317      0.37838
Region                 CHI                 0.02286      0.28999
Region                 DAL                 0.37211      0.26031
Region                 NSG                 0.26776      0.25696
Size Code              1                   1.41464      0.07353
Size Code              2                   1.30677      0.07068
Size Code              3                   1.03580      0.06630
Size Code              4                   0.86676      0.06630
Supersector            CON                -0.63539      0.57795
Supersector            EHS                 1.11866      0.26890
Supersector            FIA                 1.85596      0.71363
Supersector            INF                 1.31867      0.59677
Supersector            LEH                -0.10780      0.66199
Supersector            MFG                 0.61265      0.47613
Supersector            NRM                 0.38391      1.04974
Supersector            OTS                -1.35575      1.11223
Supersector            PAD                 0.68336      0.26065
Supersector            PBS                 0.81594      0.53747
Certainty              N                  -0.27588      0.04647
Region * Supersector   ATL      CON       -0.98550      0.86109
Region * Supersector   ATL      EHS       -1.39065      0.30801
Region * Supersector   ATL      FIA       -0.59824      0.81714
Region * Supersector   ATL      INF       -0.61359      0.65600
Region * Supersector   ATL      LEH        0.06490      0.71685
Region * Supersector   ATL      MFG      -13.73234    373.26794
Region * Supersector   ATL      NRM        0.16217      1.12846
Region * Supersector   ATL      OTS        0.91361      1.15008
Region * Supersector   ATL      PAD       -0.82713      0.30143
Region * Supersector   ATL      PBS       -1.38059      0.62929
Region * Supersector   BOS      CON       -1.75554      1.20686
Region * Supersector   BOS      EHS       -0.39137      0.41586
Region * Supersector   BOS      FIA       -0.46668      0.93673
Region * Supersector   BOS      INF       -1.15239      0.80963
Region * Supersector   BOS      LEH        0.66492      0.85981
Region * Supersector   BOS      MFG        0.00000      .
Region * Supersector   BOS      NRM        0.00000      .
Region * Supersector   BOS      OTS        1.33900      1.29738
Region * Supersector   BOS      PAD       -0.05735      0.41322
Region * Supersector   BOS      PBS       -0.57728      0.74599
Region * Supersector   CHI      CON        1.78212      0.74713
Region * Supersector   CHI      EHS       -0.25737      0.31923
Region * Supersector   CHI      FIA       -1.11196      0.81038
Region * Supersector   CHI      INF       -0.90830      0.69311
Region * Supersector   CHI      LEH        0.92516      0.72141
Region * Supersector   CHI      MFG       13.07326    306.57790
Region * Supersector   CHI      NRM       12.80997    163.93051
Region * Supersector   CHI      OTS        2.92966      1.20302
Region * Supersector   CHI      PAD       -0.09148      0.31259
Region * Supersector   CHI      PBS       -0.46938      0.65497
Region * Supersector   DAL      CON        1.10092      0.59998
Region * Supersector   DAL      EHS       -0.81931      0.28735
Region * Supersector   DAL      FIA       -1.19754      0.74929
Region * Supersector   DAL      INF       -1.04928      0.62448
Region * Supersector   DAL      LEH        0.19844      0.68209
Region * Supersector   DAL      MFG        1.32895      1.15603
Region * Supersector   DAL      NRM        2.66523      1.13344
Region * Supersector   DAL      OTS        1.78023      1.13618
Region * Supersector   DAL      PAD       -0.37570      0.28056
Region * Supersector   DAL      PBS       -0.63510      0.57005
Region * Supersector   NSG      CON        0.01403      0.60274
Region * Supersector   NSG      EHS       -1.30526      0.28634
Region * Supersector   NSG      FIA       -0.60185      0.74251
Region * Supersector   NSG      INF       -0.66640      0.61969
Region * Supersector   NSG      LEH       -0.10222      0.68015
Region * Supersector   NSG      MFG        0.00000      .
Region * Supersector   NSG      NRM        1.56766      1.08459
Region * Supersector   NSG      OTS        1.24903      1.12746
Region * Supersector   NSG      PAD       -0.96368      0.27889
Region * Supersector   NSG      PBS       -1.09838      0.56121


References
BLS Handbook of Methods, Chapter 9, September 2008. http://www.bls.gov/opub/hom/pdf/homch9.pdf
Huband, Erin. Survey of Occupational Injuries and Illnesses Response Rate Analysis, March 2010.
Selby, Philip; Burdette, Terry; Huband, Erin. Overview of the Survey of Occupational Injuries and Illnesses Sample Design and Estimation Methodology, October 2008.


