Update on Use of Administrative Data to Explore Effect of
Establishment Non-response Adjustment on the National
Compensation Survey Estimates October 2008
Erin McNulty, Chester H. Ponikowski, Jackson Crockett
Bureau of Labor Statistics, 2 Massachusetts Ave., NE, Washington, DC 20212
Abstract
Nearly all establishment surveys are prone to some level of non-response. Non-response may lead to biases in survey
estimates and an increase in survey sampling variance. Survey practitioners use various techniques to reduce bias due
to non-response. The most common technique is to adjust the sampling weights of responding units to account for non-responding units within a specified set of weighting classes or cells. In the National Compensation Survey (NCS),
which is an establishment survey conducted by the Bureau of Labor Statistics, the weighting cells are formed using
available auxiliary information: ownership, industry, and establishment employment size. At JSM 2006, we presented
a paper in which we explored how effective the formed cells are in reducing potential bias in the NCS estimates and
presented results for one NCS area. Since 2006, NCS has expanded the study of potential bias due to non-response to
several additional survey areas and time periods. In this paper, we present results from this additional research. We
include localities of different size and with different levels of non-response. We also compare the direction and
magnitude of bias across time and across areas.
Key Words: Unit non-response, bias, weighting cells
1. Introduction
Non-response rates are increasing in many establishment surveys. As a result, greater emphasis is being placed on
non-response bias studies. In fact, recent OMB standards and guidelines call for non-response bias studies in all U.S.
federally funded statistical surveys when the expected unit response rate is below 80 percent. In the National
Compensation Survey (NCS) Program, the unit response rate has dipped
below 80 percent for the private industry samples. Unit non-response occurs because of refusal or inability of a sample
establishment to participate in the survey. In addition non-response may occur because of inability of an interviewer to
make contact with a sample establishment within a specified survey data collection cycle. Since non-responding
sample establishments’ data on employee earnings may be systematically different, that is, larger or smaller on average
from responding establishments, there may be bias in the survey estimates due to non-response. Non-response also
causes an increase in the variance of survey estimates because the effective sample size is reduced. However, bias is
usually considered to be a bigger concern because, in the presence of a significant bias, a calculated confidence interval
will be centered on the wrong value and thus will be misleading.
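The point that bias, unlike variance, misplaces the confidence interval can be illustrated with a small simulation. The numbers below are invented for illustration and have nothing to do with NCS data; the sketch simply shifts the sampling distribution by a fixed bias and checks normal-theory 95% interval coverage:

```python
import random
import statistics

random.seed(1)
TRUE_MEAN = 100.0

def ci_covers(bias, n=200, sigma=15.0):
    """Draw one sample whose mean is shifted by `bias`, build a normal-theory
    95% confidence interval, and report whether it covers the true mean."""
    sample = [random.gauss(TRUE_MEAN + bias, sigma) for _ in range(n)]
    m = statistics.mean(sample)
    half = 1.96 * statistics.stdev(sample) / n ** 0.5
    return m - half <= TRUE_MEAN <= m + half

for bias in (0.0, 5.0):  # no bias vs. a bias of several standard errors
    rate = sum(ci_covers(bias) for _ in range(1000)) / 1000
    print(f"bias={bias}: coverage ~ {rate:.2f}")
```

With no bias the interval covers the true mean close to the nominal 95 percent of the time; with a bias of several standard errors, coverage collapses even though the interval width is unchanged.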
The goal of non-response bias studies is to provide data users with an assessment of the bias that may exist in key survey
estimates. Over the years, a number of methods have been presented in statistical literature for assessing non-response
bias (Brick et al. 2003; Curtin, Presser, and Singer 2000, 2005; Lin and Schaeffer 1995; Potthoff, Manton, and
Woodbury 1993; Groves and Couper 1998; Groves 2006). Groves describes several methods and their properties,
including their strengths and their weaknesses. Perhaps the most common approach for non-response bias analysis is
comparing survey estimates based on useable sample responses to estimates based on administrative data for all sample
units. This is the approach used in our assessment of non-response bias in the NCS estimates.
In the 2006 paper (Ponikowski and McNulty 2006), we explored the effect of establishment non-response adjustment
procedures on the NCS estimates. Using data from one NCS area survey, we calculated and compared response rates
for the auxiliary variables that are used in forming weighting adjustment cells. We found that response rates vary by
industry group and establishment employment size class, the auxiliary variables used to form the cells. We used administrative data to
determine whether non-response might be biasing survey estimates. We noted that the NCS weighting adjustment
helps reduce the bias due to non-response. Also, as part of our study, we selected 100 samples from the original frame
and then calculated the ratio of the bias to the standard deviation to assess the effect of bias on the accuracy of average
monthly earnings estimates. We found that the effect of non-response bias on the accuracy of estimates is usually
negligible. However the study was limited to one NCS sample area and time period.
In this paper we explore further the effect of non-response adjustment on estimates in the NCS. We examine data from
other survey areas and time periods. We include three more areas of different size and with different levels of non-response. We compare the direction and magnitude of bias across time and across the four areas. We provide a brief
description of the NCS in Section 2; present empirical analysis and results in Section 3; and state our conclusion and
propose issues for further research in Section 4.
2. Description of the National Compensation Survey
The NCS is an establishment survey of wages and salaries and employer-provided benefits conducted by the Bureau of
Labor Statistics (BLS). It is used to produce three general types of survey outputs: the Employer Cost Index (ECI),
employee benefits data, and locality wage data. The ECI is a series of national indexes that track quarterly and annual
changes in wages and benefit costs along with quarterly cost level information on the cost per hour worked of each
component of compensation. The employee benefits data includes the incidence and provisions of benefit plans and is
published once a year. The locality wage data include annual publication of occupational wages for a sample of
localities, census divisions, and for the nation as a whole. All state and local governments and private sector industries,
except for farms and private households, are covered in the survey. All employees are covered except the self-employed.
The BLS Quarterly Census of Employment and Wages (QCEW) serves as the sampling frame for the NCS survey and
was used as the administrative data for this study. The QCEW is created from State Unemployment Insurance (UI)
files of establishments, which are obtained through the cooperation of the individual state agencies.
The integrated NCS sample consists of five rotating replacement sample panels. Each of the five sample panels will be
in sample for five years before being replaced by a new panel selected annually from the most current frame. The NCS
sample is selected using a three-stage stratified design with probability proportionate to employment sampling at each
stage. The first stage of sample selection is a probability sample of areas; the second stage is a probability sample of
establishments within sampled areas; and the third stage is a probability sample of occupations within sampled areas
and establishments.
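The probability-proportionate-to-employment idea used at each stage can be sketched with a simple systematic PPS selection. The frame, establishment ids, employment sizes, and sample count below are all invented; the actual NCS selection also handles stratification and certainty units, which are not shown:

```python
import random

random.seed(7)

# Hypothetical frame: (establishment id, employment). In the NCS the measure
# of size at each stage is employment, taken from the QCEW-based frame.
frame = [(f"est{i}", random.randint(5, 5000)) for i in range(1, 501)]

def pps_systematic(frame, n):
    """Systematic probability-proportionate-to-size selection: each unit's
    chance of selection is proportional to its employment."""
    total = sum(emp for _, emp in frame)
    step = total / n
    start = random.uniform(0, step)
    picks, cum, i = [], 0.0, 0
    for unit, emp in frame:
        cum += emp
        while i < n and start + i * step < cum:
            picks.append(unit)  # a very large unit can be hit more than once
            i += 1
    return picks

sample = pps_systematic(frame, 25)
print(len(sample))  # n hits
```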
The samples used in this analysis were selected from an NCS sample of 152 areas based on the Office of Management
and Budget (OMB) 1994 area definitions. In 2003 OMB released a new set of area definitions. The new area
definitions define a set of Core Based Statistical Areas (CBSA) and designate the remaining geographical areas as
outside CBSA counties. The outside CBSA areas for NCS sampling purposes are usually clusters of adjacent counties,
not single counties. The NCS has selected a new sample of areas using the 2003 OMB definitions; this new sample is
replacing the current set of primary sampling units (PSUs) over the next few years. A more detailed
description of the NCS sample design is provided in Chapter 8 of Handbook of Methods available from the BLS
website: www.bls.gov.
The NCS locality wage program collects wage data for a sample of occupations within sampled establishments. During
the initial interview or update interview, some sample establishments refuse to provide or are unable to provide wage
data. This results in establishment or unit non-response. Ignoring the establishment non-response could result in
substantial bias in estimates and incorrect variance estimates.
In our study, we used the administrative and NCS private industry sample data from the Chicago Consolidated
Metropolitan Statistical Area and the Orlando, San Antonio, and San Diego Metropolitan Areas from 2001 to 2005.
The definitions of these areas are provided in the 1997 BLS Handbook of Methods. The administrative data provided
us with auxiliary variables as well as data on establishment earnings and employment. The administrative data are
available approximately nine months after the reference date for the quarterly data collection. The NCS data provided
the sample size allocated to private industry in the Chicago, Orlando, San Antonio, and San Diego areas and the
distribution of NCS non-respondents among industries and establishment size classes within these areas. The non-respondents in the NCS are establishments that do not provide any earnings data. The useable establishments are
establishments with earnings data for at least one sampled occupation. The in-scope sample sizes for each area and
time period studied are shown in Table 1, below. The sample sizes include both usable and refusal units, but exclude
establishments that could not be matched with current administrative data.
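The matching and estimation step can be sketched as follows. The establishment ids, earnings, and weights are invented, and the real linkage uses QCEW identifiers rather than a simple dictionary lookup; the sketch only shows excluding unmatched units and computing weighted average earnings for respondents, non-respondents, and the total matched sample:

```python
# Hypothetical administrative records: establishment id -> monthly earnings.
admin = {"e1": 3100.0, "e2": 2500.0, "e3": 4000.0, "e4": 2800.0, "e5": 3600.0}
sample = [  # (establishment id, initial weight, responded?)
    ("e1", 10.0, True), ("e2", 12.0, False), ("e3", 5.0, True),
    ("e4", 9.0, False), ("e5", 7.0, True), ("e9", 4.0, True),  # e9: no match
]

def avg_earnings(rows):
    """Weighted average of administrative earnings over the given units."""
    return sum(w * admin[e] for e, w, _ in rows) / sum(w for _, w, _ in rows)

matched = [r for r in sample if r[0] in admin]   # unmatched units are excluded
resp = [r for r in matched if r[2]]
nonresp = [r for r in matched if not r[2]]
print(round(avg_earnings(matched), 2), round(avg_earnings(resp), 2),
      round(avg_earnings(nonresp), 2))
```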
Table 1. In-scope Sample Sizes

  Area          Year   Units     Area          Year   Units
  Chicago       2001     665     San Antonio   2001     189
  Chicago       2002     737     San Antonio   2002     197
  Chicago       2003     807     San Antonio   2003     209
  Chicago       2004     812     San Antonio   2004     204
  Chicago       2005     916     San Antonio   2005     221
  Orlando       2001     216     San Diego     2001     415
  Orlando       2002     211     San Diego     2002     409
  Orlando       2003     246     San Diego     2003     479
  Orlando       2004     248     San Diego     2004     536
  Orlando       2005     206     San Diego     2005     458
3. Empirical Analysis and Results
To investigate the effect of establishment non-response adjustment on NCS estimates, we studied data from four NCS
areas – Chicago, Orlando, San Antonio, and San Diego – between 2001 and 2005. For each of the twenty area-year
samples, we used administrative data to calculate average earnings for the entire NCS sample, for useable
establishments, and for non-responding establishments. We also conducted a simulation study for each area and time
period using the administrative data.
We first assessed how well NCS adjustments for non-response compensate for data lost to establishment non-response.
We matched the NCS sample establishments with units on the administrative data file and extracted their earnings and
employment information from the file. The earnings data on the administrative file are available at the establishment
level only. We calculated average monthly earnings for the respondents, the non-respondents, and the total sample.
The initial sample weights were used in the calculations of estimates. The total sample estimates simulate estimates
that might be produced if NCS had no non-response; the average earnings for respondents simulate estimates that might
be obtained if no non-response adjustment is done to account for the non-respondents. In addition, we calculated
average earnings for respondents using initial sample weights that were adjusted for non-respondents using current
weighting adjustment cells and procedures. Cells were collapsed using the NCS collapse pattern when the adjustment
factor within a cell was greater than 4.0. These estimates simulate published estimates. The area-wide
results are presented in Table 2, attached at the end of paper.
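The weighting-class adjustment with collapsing can be sketched as follows. The cells, weights, response flags, and collapse mapping below are all hypothetical stand-ins for the NCS industry-by-size classes; the rule itself (inflate respondent weights by the ratio of total weight to respondent weight within each cell, and collapse any cell whose factor exceeds 4.0 into a designated neighbour) follows the procedure described above:

```python
from collections import defaultdict

MAX_FACTOR = 4.0  # cells whose adjustment factor exceeds this are collapsed

# Hypothetical units: (cell id, initial weight, responded?).
units = [
    ("retail_small", 10.0, True), ("retail_small", 10.0, False),
    ("retail_large", 4.0, True),  ("retail_large", 4.0, False),
    ("mfg_small",    8.0, True),  ("mfg_small",    8.0, False),
    ("mfg_small",    8.0, False), ("mfg_small",    8.0, False),
    ("mfg_small",    8.0, False),                      # factor would be 5.0
    ("mfg_large",    3.0, True),  ("mfg_large",    3.0, True),
]

# Simplified stand-in for the NCS collapse pattern: each cell names the
# neighbour it merges with when its factor is too large.
collapse_with = {"mfg_small": "mfg_large"}

def factors(units):
    """Per-cell adjustment factor: total weight / respondent weight."""
    wt_all, wt_resp = defaultdict(float), defaultdict(float)
    for cell, w, resp in units:
        wt_all[cell] += w
        if resp:
            wt_resp[cell] += w
    return {c: wt_all[c] / wt_resp[c] for c in wt_all if wt_resp[c]}

f = factors(units)
# Collapse any cell whose factor exceeds the maximum, then recompute.
merged = [(collapse_with.get(c, c) if f.get(c, float("inf")) > MAX_FACTOR else c, w, r)
          for c, w, r in units]
final = factors(merged)
for cell, w, resp in merged:
    if resp:
        print(cell, round(w * final[cell], 2))  # adjusted respondent weights
```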
The Total Sample estimate was less than the estimate for the Responding Sample without Weights Adjusted for Non-response in fourteen of the twenty samples (see Table 2), reflecting lower average wages for non-respondents in these
samples. After non-response adjustment, the Total Sample estimate was less than the estimate for the Responding
Sample with Weights Adjusted for Non-response in fifteen of the twenty samples. In most samples, non-response
adjustment did not change the direction of the difference; the difference between the Total and Responding Sample
estimates remained negative (or positive) in all but three of the twenty samples. In all five Orlando samples,
Responding Sample estimates were greater than the Total Sample estimate both before and after non-response
adjustment. Other areas had mixed results.
The difference between the Total and Responding Sample estimates after non-response adjustment was smaller than the
difference between the Total and Responding Sample estimates before adjustment in thirteen of the twenty samples. This
shows that non-response adjustment improved the Responding Sample estimate in these samples. For example, in
Chicago-2002, the Total Sample estimate of $3,948.62 was larger than the Responding Sample estimate by $284.47
(7.2% of the full-sample estimate) when estimates were calculated using unadjusted respondent data, but the difference
decreased to $207.37 (5.3% of the full-sample estimate) after the weights were adjusted for non-response.
The differences between the Total and Responding Sample estimates were usually less than 10% of the Total Sample
estimate. The largest difference as a percentage of the Total Sample estimate before non-response adjustment was
11.5% in Orlando-2001, where the Responding Sample estimate of $2,867.14 was $295.72 greater than the Total
Sample estimate. After non-response adjustment, the largest difference was 9.9% in Orlando-2003, where the
Responding Sample estimate was $284.01 greater than the Total Sample estimate of $2,871.32.
The results show that non-response error in NCS samples varies in magnitude and direction. It is also largely affected
by area; Orlando showed more non-response error than the other three areas. Sample size also seems to be related to
the amount of error, since the areas with larger samples (Chicago and San Diego, see Table 1) in general had less bias
than the areas with smaller samples (Orlando and San Antonio). Adjusting weights for non-response does not always
bring estimates closer to the full-sample estimates, though it did bring (or keep) the error below 10% of the full-sample
estimate in all samples.
The amount of bias in estimated average earnings for each area and time period cannot be determined from a single
sample. To measure the amount of bias in the average earnings estimates of an area-year NCS sample, we drew a total
of 100 samples of the same size and same industry composition as the NCS sample. These samples were taken from
the frame corresponding to the same area and year as the NCS sample; this frame is also the administrative source of
the wages and employment figures summarized in Table 2. For each sample, a response set was obtained by applying the
current NCS sample response rates within each non-response adjustment cell. The non-respondents within each cell were
assigned at random. Under the missing-at-random assumption, the bias is expected to be negligible. However, in our
study the random assignments did not assure negligible bias because non-response cells were collapsed when the
non-response adjustment factor exceeded the maximum factor of 4.0.
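The response-set generation can be sketched as follows, with hypothetical cells, weights, and rates; each unit is flagged respondent or non-respondent by an independent Bernoulli draw at its cell's observed response rate:

```python
import random

random.seed(3)

def draw_response_set(units, rates):
    """Mark each unit respondent/non-respondent at random, using the observed
    response rate of its adjustment cell (missing at random within cells)."""
    return [(cell, w, random.random() < rates[cell]) for cell, w in units]

# Hypothetical cells and observed response rates.
rates = {"cellA": 0.8, "cellB": 0.5}
units = [("cellA", 10.0) for _ in range(50)] + [("cellB", 6.0) for _ in range(50)]

flagged = draw_response_set(units, rates)
n_resp = sum(r for _, _, r in flagged)
print(n_resp)  # roughly 0.8*50 + 0.5*50 = 65 respondents on average
```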
We generated two sets of estimates using the respondent data. In the first set we used the initial sample establishment
weight, and in the second set we used the initial sample establishment weight adjusted for non-response. The sample
weight adjustment was done using the current NCS weight adjustment procedures and cells that have five size classes.
When the adjustment factor exceeded 4.0 within a cell, the collapsing of cells was done using the NCS collapse pattern.
The variances for each sample were computed using balanced repeated replication (BRR) methodology. For a detailed
description of the BRR methodology, see Wolter (1985). Following NCS procedures, re-weighting for non-response was
not re-done for each replicate.
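A minimal sketch of BRR for a design with two units per stratum, using invented data and a 4x4 Hadamard pattern, may make the mechanics concrete (the production NCS replication is considerably more involved):

```python
# Balanced repeated replication (Wolter 1985): each replicate keeps one unit
# per stratum at double weight and drops the other, following a balanced set
# of half-sample patterns. Data are hypothetical (stratum, weight, value).
data = [  # each stratum has exactly two units, listed consecutively
    (0, 5.0, 3100.0), (0, 5.0, 2900.0),
    (1, 4.0, 3500.0), (1, 4.0, 3300.0),
    (2, 6.0, 2700.0), (2, 6.0, 2500.0),
]

# Rows of a 4x4 Hadamard matrix (first column dropped) give a balanced set
# of half-sample patterns for up to 3 strata.
H = [[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]

def wmean(rows):
    """Weighted mean of the values in rows of (stratum, weight, value)."""
    return sum(w * y for _, w, y in rows) / sum(w for _, w, _ in rows)

theta = wmean(data)
reps = []
for pattern in H:
    # Keep the first unit of stratum h if pattern[h] == 1, else the second.
    rep = [(h, 2 * w, y) for i, (h, w, y) in enumerate(data)
           if (pattern[h] == 1) == (i % 2 == 0)]
    reps.append(wmean(rep))

variance = sum((t - theta) ** 2 for t in reps) / len(H)
print(round(theta, 2), round(variance, 2))
```

By the balance property, the replicate estimates average back to the full-sample estimate, and the squared deviations average to the variance estimate.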
The formulas used to calculate the amount of bias in average earnings and the ratio of the bias to the standard deviation are as follows:

$$B_d = E(\bar{y}_{dr}) - E(\bar{y}_d) \approx \bar{Y}_{dr} - \bar{Y}_d$$

$$r_d = \frac{B_d}{\sigma_d}$$

where
$B_d$ is the bias in average earnings for domain $d$,
$E(\bar{y}_{dr})$ is the expected value of average earnings of respondents in domain $d$ over the 100 samples,
$E(\bar{y}_d)$ is the expected value of average earnings of the total sample in domain $d$ over the 100 samples,
$\bar{Y}_{dr}$ is the average earnings of respondents in domain $d$,
$\bar{Y}_d$ is the average earnings in domain $d$,
$r_d$ is the ratio of the bias to the standard deviation in domain $d$, and
$\sigma_d$ is the standard deviation of the average earnings in domain $d$.
The results, averaged over the 100 samples for each area-year survey, are displayed in Table 3 (attached at the end of
paper).
The results show that, as with the NCS sample, the amount of bias varies in magnitude and direction, but survey area is
the source of the most notable pattern. The bias of the Responding Sample without Weights Adjusted for Non-response
was greater than the bias of the Responding Sample with Weights Adjusted for Non-response in ten of the twenty
samples (see Table 3). However, the bias increased in four of the five Chicago time periods and four of the five
Orlando time periods, indicating that non-response adjustment was less effective in these areas. The survey area seems
to have an effect on the amount of bias that remains after weights are adjusted for non-response.
To determine the effect of bias on the accuracy of estimates, we calculated the ratio $r_d$, defined above, for each
industry group estimate. Cochran (1953) points out that the effect of bias on the accuracy of an estimate is negligible if
the bias is less than one tenth of the standard deviation of the estimate. A ratio between 0.1 and 0.2 is considered to
have a modest impact on the accuracy of an estimate. The calculated ratios are presented in Table 4.
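Cochran's rule of thumb is easy to apply mechanically. The sketch below computes $r_d$ from a set of simulated estimates and classifies the impact; the estimates are invented for illustration, and the label for ratios above 0.2 is ours (Cochran's rule names only the negligible and modest bands):

```python
def bias_ratio(resp_means, total_means, sd):
    """r_d = B_d / sigma_d, with B_d estimated as the mean difference between
    respondent and total-sample estimates over the simulated samples."""
    n = len(resp_means)
    bias = sum(r - t for r, t in zip(resp_means, total_means)) / n
    return bias / sd

def impact(r):
    """Cochran's rule of thumb for the effect of bias on accuracy."""
    r = abs(r)
    if r < 0.1:
        return "negligible"
    if r < 0.2:
        return "modest"
    return "notable"

# Hypothetical simulated estimates for one domain.
resp = [3050.0, 3010.0, 3080.0, 2990.0]
total = [3000.0, 2980.0, 3020.0, 2960.0]
r_d = bias_ratio(resp, total, sd=250.0)
print(round(r_d, 3), impact(r_d))
```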
Table 4. Ratio of the Bias to the Standard Deviation by Area and Year, after Adjustment for Non-response

  Area          Year   r_d      Area          Year   r_d
  Chicago       2001   0.02     San Antonio   2001   0.02
  Chicago       2002   0.21     San Antonio   2002   0.06
  Chicago       2003   0.34     San Antonio   2003   0.06
  Chicago       2004   0.51     San Antonio   2004   0.10
  Chicago       2005   0.23     San Antonio   2005   0.07
  Orlando       2001   0.26     San Diego     2001   0.14
  Orlando       2002   0.26     San Diego     2002   0.04
  Orlando       2003   0.06     San Diego     2003   0.09
  Orlando       2004   0.28     San Diego     2004   0.04
  Orlando       2005   0.75     San Diego     2005   0.08
The ratios are above 0.2 in four of the five Chicago time periods and four of the five Orlando time periods. The ratios
are under 0.1 in four of the five San Antonio time periods and four of the five San Diego time periods, and the
remaining ratios in those areas are below 0.2. The survey area seems to have a large effect on the impact of the
observed bias.
4. Conclusion and Issues for Further Research
In this study, we have explored whether establishment non-response might be biasing the NCS survey estimates. We
used administrative data from the Chicago, Orlando, San Antonio, and San Diego survey areas to calculate average
earnings for responding units, non-responding units, and the entire NCS sample in those areas. We noted that the NCS
weighting adjustment usually helps reduce the bias due to non-response; industry and establishment employment size
class are powerful auxiliary variables for treating non-response.
We selected 100 samples from the original frame and then calculated the ratio of the bias to the standard deviation to
assess the effect of bias on the accuracy of average monthly earnings estimates. We noted that the effect of bias on the
accuracy of estimates is usually negligible.
In our further research we would like to explore other methods for assessing potential non-response bias in our
earnings estimates. In particular, we would like to use response rate comparisons across industry and establishment
size, comparisons to similar estimates from other sources, and comparisons of respondent estimates from early
cooperators with those from the final respondent data set.
We would like to continue to extend this study to other survey areas and time periods. We plan to continue to include
localities of different size and with different levels of non-response. We plan to continue to compare the direction and
magnitude of the bias across time and across areas. If it turns out that there are some more consistent trends, then there
may be justification for making a non-response bias adjustment. We would still like to perform some evaluation of
coverage of confidence intervals. We would also like to investigate whether there are any other auxiliary variables that
may be useful in reducing bias due to non-response. In particular, we would like to explore whether using average
monthly wage as an auxiliary variable would lend strength to re-weighting procedures. In addition, we would like to
investigate the current criteria used for collapsing weighting adjustment cells. As part of this work we would like to
determine whether requiring a minimum number of responding establishments within weighting adjustment cells has an
impact on bias and variance of estimates. Also, we would like to explore using both the magnitude of the weight
adjustment factor and number of responding units in the criteria for collapsing weighting adjustment cells.
References
BLS Handbook of Methods (Bulletin 2490, April 1997), Washington, D.C.: Bureau of Labor Statistics, 57-67.
Brick, M.J., Ferraro, D., Strickler, T., and Rauch, C. (2003), No. 8: 2002 NSAF Response Rates. Washington, DC:
Urban Institute. Available online at http://www.urban.org/UploadedPDF/900692_Methodology_8.pdf
Cochran, William G. (1953), Sampling Techniques, New York: John Wiley & Sons, Inc.
Curtin, R., Presser, S., and Singer, E. (2000), “The Effects of Response Rate Changes on the Index of Consumer
Sentiment,” Public Opinion Quarterly 64:413-28.
_______ (2005), “Changes in Telephone Survey Nonresponse Over the Past Quarter Century,” Public Opinion
Quarterly 69:87-98.
Ernst, L.R., Guciardo, C., Ponikowski, C.H., and Tehonica, J. (2002), “Sample Allocation and Selection for the National
Compensation Survey,” Proceedings of the Section on Survey Research Methods, Washington, D.C.: American
Statistical Association.
Groves, R. and Couper, M. (1998), Nonresponse in Household Interview Surveys. New York: John Wiley.
Groves, R.M. (2006), “Nonresponse Rates and Nonresponse Bias in Household Surveys,” Public Opinion Quarterly,
Vol.70, No.5, Special Issue, pp. 646-675.
Kalton, G., and Kasprzyk, D. (1982), "Imputing for Missing Survey Responses," Proceedings of the Survey Research
Methods Section, Washington, D.C.: American Statistical Association, 22-31.
Lin, I-Fen, and Schaeffer, N. (1995), “Using Survey Participants to Estimate the Impact of Non-participation,” Public
Opinion Quarterly 59:236-58.
Little, R.J.A. (1986), “Missing Data Adjustment in Large Surveys,” Journal of Business & Economic Statistics, 6, 287-296.
Oh, H.L. and Scheuren, F.J. (1983), “Weighting Adjustment for Unit Nonresponse,” in W.G. Madow, I. Olkin, and D.B.
Rubin (eds.), Incomplete Data in Sample Surveys, Vol. 2, New York: Academic Press.
Platek, R., and Gray, G.B. (1978), "Nonresponse and Imputation," Survey Methodology, 4, 144-177.
Ponikowski, C.H., and McNulty, E.E. (2006), “Use of Administrative Data to Explore Effect of Establishment
Nonresponse Adjustment on the National Compensation Survey Estimates,” Proceedings of the Section on Survey
Research Methods, Washington, D.C.: American Statistical Association.
Potthoff, R., Manton, K., and Woodbury, M. (1993), “Correcting for Nonavailability Bias in Surveys by Weighting
Based on Number of Callbacks,” Journal of the American Statistical Association 88:1197-1207.
Rubin, D.B. (1978), "Multiple Imputations in Sample Surveys--A Phenomenological Bayesian Approach to
Nonresponse," Proceedings of the Section on Survey Research Methods, Washington, D.C.: American Statistical
Association, 20-34.
Rizzo, L., Kalton, G. and Brick, J. M. (1996), A Comparison of Some Weighting Adjustment Methods for Panel
Nonresponse, Survey Methodology, 22, 43-53.
Sarndal, C.E. and Lundstrom, S. (2005), Estimation in Surveys with Nonresponse, London: John Wiley & Sons, Inc.
Sverchkov, M., Dorfman, A.H., Ernst, L.R., Moehrle, T.G., Paben, S.P., and Ponikowski, C.H. (2005), “On Non-response
Adjustment via Calibration,” Proceedings of the Section on Survey Research Methods, Washington, D.C.: American
Statistical Association.
Wolter, Kirk M. (1985), Introduction to Variance Estimation, New York: Springer-Verlag, Inc.
Any opinions expressed in this paper are those of the authors and do not constitute policy of the Bureau of Labor
Statistics.
Table 2. Average Monthly Earnings Estimates for NCS Responding, Non-responding, and Total Sample by Selected Area and Time Period

                        Total        Responding Sample   Non-responding   Responding Sample
  Area          Year    Sample       without Weights     Sample           with Weights
                                     Adjusted for                         Adjusted for
                                     Non-response                         Non-response
  Chicago       2001    $3,763.48    $3,721.62           $3,847.42        $3,815.92
  Chicago       2002    $3,948.62    $3,664.15           $4,584.01        $3,741.25
  Chicago       2003    $3,961.06    $3,909.15           $4,074.09        $3,921.84
  Chicago       2004    $4,598.48    $4,702.42           $4,350.24        $4,761.39
  Chicago       2005    $4,509.53    $4,588.02           $4,304.58        $4,583.14
  Orlando       2001    $2,571.42    $2,867.14           $2,103.82        $2,749.11
  Orlando       2002    $2,662.63    $2,941.94           $2,113.66        $2,922.70
  Orlando       2003    $2,871.32    $3,127.75           $2,166.06        $3,155.33
  Orlando       2004    $2,787.26    $2,846.79           $2,539.36        $2,903.39
  Orlando       2005    $2,684.78    $2,742.34           $2,453.74        $2,769.60
  San Antonio   2001    $2,800.59    $3,017.42           $2,248.29        $3,056.68
  San Antonio   2002    $3,028.62    $3,251.65           $2,378.95        $3,169.84
  San Antonio   2003    $3,140.19    $3,268.18           $2,685.05        $3,239.38
  San Antonio   2004    $3,467.81    $3,652.92           $2,853.17        $3,648.53
  San Antonio   2005    $2,449.77    $2,357.73           $3,255.45        $2,414.85
  San Diego     2001    $3,186.65    $3,218.49           $3,124.95        $3,209.05
  San Diego     2002    $3,367.90    $3,479.64           $3,166.55        $3,356.75
  San Diego     2003    $3,543.45    $3,606.67           $3,418.39        $3,644.03
  San Diego     2004    $3,407.08    $3,331.62           $3,583.40        $3,391.71
  San Diego     2005    $3,492.18    $3,454.20           $3,577.85        $3,515.32
Table 3. Estimates of Bias and Variance Based on 100 Samples by Area and Survey Year

                        Total        Responding Sample   Non-responding   Responding Sample
  Area          Year    Sample       without Weights     Sample           with Weights
                                     Adjusted for                         Adjusted for
                                     Non-response                         Non-response
  Chicago       2001    $3,697.06    $3,658.12           $3,735.00        $3,702.46
  Chicago       2002    $3,721.55    $3,693.32           $3,762.65        $3,785.67
  Chicago       2003    $3,878.60    $3,928.50           $3,798.58        $3,943.41
  Chicago       2004    $3,958.56    $4,058.37           $3,810.86        $4,080.14
  Chicago       2005    $4,270.98    $4,337.33           $4,163.70        $4,338.32
  Orlando       2001    $2,538.83    $2,600.91           $2,481.15        $2,590.50
  Orlando       2002    $2,626.75    $2,651.22           $2,594.80        $2,675.50
  Orlando       2003    $2,709.50    $2,711.80           $2,706.83        $2,721.20
  Orlando       2004    $2,824.75    $2,860.08           $2,766.62        $2,885.87
  Orlando       2005    $3,041.84    $3,096.87           $2,956.05        $3,247.91
  San Antonio   2001    $2,631.37    $2,655.10           $2,592.59        $2,636.43
  San Antonio   2002    $2,668.28    $2,647.88           $2,713.80        $2,648.81
  San Antonio   2003    $2,812.52    $2,814.78           $2,803.46        $2,827.63
  San Antonio   2004    $3,032.19    $3,066.74           $2,941.52        $3,062.52
  San Antonio   2005    $3,113.19    $3,200.53           $2,893.03        $3,140.05
  San Diego     2001    $3,075.97    $3,162.67           $3,002.97        $3,111.03
  San Diego     2002    $3,194.89    $3,243.58           $3,152.93        $3,183.35
  San Diego     2003    $3,238.84    $3,237.08           $3,241.44        $3,254.82
  San Diego     2004    $3,349.21    $3,339.92           $3,364.25        $3,341.73
  San Diego     2005    $3,494.62    $3,456.15           $3,567.75        $3,479.63
Table 3. Estimates of Bias and Variance Based on 100 Samples by Area and Survey Year (Continued)

                        Bias of Responding   Variance of   Bias of Responding   Variance of Responding
  Area          Year    Sample without       Responding    Sample with          Sample with Weights
                        Weights Adjusted     Sample        Weights Adjusted     Adjusted for
                        for Non-response                   for Non-response     Non-response
  Chicago       2001    $38.94               70,406        -$5.40               87,352
  Chicago       2002    $28.23               65,256        -$64.12              91,357
  Chicago       2003    -$49.89              34,098        -$64.81              36,495
  Chicago       2004    -$99.81              55,531        -$121.58             57,732
  Chicago       2005    -$66.35              62,109        -$67.33              86,336
  Orlando       2001    -$62.08              39,546        -$51.67              40,932
  Orlando       2002    -$24.48              32,364        -$48.76              35,542
  Orlando       2003    -$2.31               31,979        -$11.71              37,830
  Orlando       2004    -$35.33              41,401        -$61.11              47,616
  Orlando       2005    -$55.02              85,162        -$206.07             75,445
  San Antonio   2001    -$23.73              79,853        -$5.05               82,507
  San Antonio   2002    $20.40               116,214       $19.46               108,526
  San Antonio   2003    -$2.26               58,036        -$15.11              56,912
  San Antonio   2004    -$34.55              101,097       -$30.34              101,621
  San Antonio   2005    -$87.34              175,077       -$26.86              158,491
  San Diego     2001    -$86.70              60,753        -$35.07              65,203
  San Diego     2002    -$48.69              50,798        $11.55               69,136
  San Diego     2003    $1.76                28,152        -$15.98              29,865
  San Diego     2004    $9.29                30,274        $7.48                30,216
  San Diego     2005    $38.47               30,190        $14.99               36,558