Current Employment Statistics
OMB Control No. 1220-0011
OMB Expiration Date: 12/31/2020
Supporting Statement for
BLS Current Employment Statistics Program
B. COLLECTION OF DATA EMPLOYING STATISTICAL METHODS
1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.
The Current Employment Statistics (CES) sample for the private sector is drawn from a universe of about 7.6 million in-scope U.S. private business establishments (worksites). Private households and all agriculture except logging are excluded from the frame. The universe frame, known as the Longitudinal Database (LDB), serves as both the sampling frame and a research database of the Bureau of Labor Statistics (BLS). The primary data source for the LDB is the Quarterly Contribution Reports filed by employers with their State's Unemployment Insurance Agency. The BLS has cooperative agreements with each State to access these data through the Quarterly Census of Employment and Wages (QCEW) program. The LDB contains microdata records with the name, location, North American Industry Classification System (NAICS) industrial classification, employment, and wages of nearly all nonfarm establishments in the United States. Each quarter, the LDB is updated with the most recent universe data available.
The CES probability sample includes approximately 206,000 Unemployment Insurance (UI) accounts selected from the approximately 6.2 million UI accounts in the private sector. The sampled UI accounts cover approximately 1 million individual worksites. These UI accounts are selected on a random probability basis as described in section 2a below.
In addition, the CES program collects data from Federal, State, and local governments. The governments' sample is not selected on a probability basis, because data are collected for a large percentage of the population employment. Data are collected for: approximately 84.7 percent (2.80 million employees) of total Federal civilian employment; approximately 87.7 percent (4.67 million employees) of total State employment; and approximately 72.8 percent (14.23 million employees) of total local government employment. The government sample units are selected from the 19,700 State and 75,213 local government UI accounts on the LDB. Federal government employment is largely collected from the National Finance Center.
Data from sample members are collected each month on employment, hours, and earnings. The survey is a Federal-State cooperative survey, with the national sample being a composite of the State samples.
2. Describe the procedures for the collection of information including:
Statistical methodology for stratification and sample selection,
Estimation procedure,
Degree of accuracy needed for the purpose described in the justification,
Unusual problems requiring specialized sampling procedures, and
Any use of periodic (less frequent than annual) data collection cycles to reduce burden.
2a. Statistical methodology for stratification and sample design
The CES probability sample is a stratified, simple random sample, where the strata are defined by State, major (NAICS) industry division, and employment size (BLS Handbook of Methods, Chapter 2, page 5, Bureau of Labor Statistics, 2011). Stratification groups population members together for the purpose of sample allocation and selection. With 13 industries and 8 size classes, there are 104 total strata per State (Erkens, Huff, and Gershunskaya, 2005). The sampling rates for each stratum are determined through a method known as optimum allocation, which distributes a fixed number of sample units across a set of strata to minimize the overall variance, or sampling error, on the employment estimates at a State level.
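The optimum-allocation step described above can be sketched briefly. The following is an illustrative Neyman-allocation sketch, not the CES production algorithm; the stratum names, population counts, and standard deviations are invented for the example.

```python
# Hedged sketch of optimum (Neyman) allocation: distribute a fixed sample
# across strata in proportion to N_h * S_h (stratum size times employment
# standard deviation), which minimizes the variance of the stratified
# employment estimator. All inputs are illustrative, not CES values.
def neyman_allocation(total_sample, strata):
    """strata maps name -> (population count N_h, employment std. dev. S_h)."""
    shares = {h: n * s for h, (n, s) in strata.items()}
    total = sum(shares.values())
    return {h: round(total_sample * share / total) for h, share in shares.items()}

# Invented strata for a single State: (N_h, S_h)
strata = {
    "construction/size-1": (50_000, 3.0),
    "construction/size-2": (10_000, 12.0),
    "manufacturing/size-1": (30_000, 5.0),
}
alloc = neyman_allocation(1_000, strata)  # larger, more variable strata get more units
```

Note how the small but highly variable stratum ("construction/size-2") receives a disproportionately large share of the sample relative to its population count.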
The UI account is the basic sampling unit of the CES survey. UI account numbers are unique within a State and include all individual establishments within a firm. Though the LDB is updated on a quarterly basis, the sample is redrawn annually due to budget and operational constraints. The annual sample is redrawn to reflect the changes in the population or the sampling frame. These changes include, but are not limited to: removal of establishments that have gone out of business; inclusion of new establishments (i.e., births); growth or decline in the employment of each establishment and consequently the UI account; and updated industrial classification for some establishments. The annual sample of UI accounts is drawn using a permanent random numbers technique in order to achieve the desired sample overlap with the UIs from previous samples (Butani, Robertson, and Mueller, 1998). The use of this technique keeps solicitation costs within budget and keeps the sample statistically valid in terms of updated probabilities of selection that reflect all recent changes (Crankshaw, Kujawa, and Stamas, 2002). In addition to the annual redraw, the CES sample is updated on a semi-annual basis, as more recent universe data becomes available. The semi-annual update provides the opportunity to sample birth units that were not previously on the sampling frame during the annual redraw.
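As a rough illustration of the permanent-random-numbers technique (not the production implementation; account IDs, the seed, and sampling rates are invented), each UI account keeps a fixed uniform random number across redraws, and a stratum's sample is the set of accounts whose number falls below the stratum sampling rate, so overlap between successive samples falls out automatically:

```python
import random

# Illustrative sketch of sampling with permanent random numbers (PRNs);
# account IDs, the seed, and sampling rates are invented. Each UI account
# keeps its PRN across annual redraws, so when a stratum's sampling rate
# changes, the new sample overlaps the old one as much as possible.
def assign_prns(ui_accounts, seed=12345):
    rng = random.Random(seed)
    return {ui: rng.random() for ui in ui_accounts}

def select_sample(prns, sampling_rate):
    # An account is in sample whenever its PRN falls below the stratum rate.
    return {ui for ui, prn in prns.items() if prn < sampling_rate}

prns = assign_prns([f"UI{i:05d}" for i in range(10_000)])
sample_year_1 = select_sample(prns, 0.10)  # 10 percent sampling rate
sample_year_2 = select_sample(prns, 0.12)  # rate rises: year-1 units all retained
```

When the rate rises, every previously selected account remains in sample; when it falls, a subset is dropped. Either way, resolicitation is limited to the accounts that actually change status.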
Large UI accounts are sampled with virtual certainty. In addition, all units reporting through the Electronic Data Interchange (EDI) center are sampled with certainty. EDI units are collected via direct file transmission from large, multi-unit employers for whom conventional data collection methods would be less cost effective.
The size of the currently selected probability sample and sample coverage is shown in the following table.
CES Survey Universe and Sample Size Comparison on NAICS basis, Private Sector
(in thousands)
                   | Universe (March 2019)                   | Sample (March 2019)
Industry           | UI Accounts | Reporting Units | Employment | UI Accounts | Reporting Units | Employment
Mining and Logging | 30.3        | 36.0            | 739.4      | 2.0         | 5.7             | 362.1
Construction       | 671.6       | 700.1           | 7,160.0    | 18.5        | 40.4            | 1,476.1
Manufacturing      | 295.0       | 327.0           | 12,866.4   | 13.4        | 31.7            | 5,142.2
Services           | 5,179.8     | 6,503.4         | 98,430.6   | 168.2       | 988.3           | 44,442.6
Education          | 100.0       | 109.0           | 2,974.6    | 4.0         | 9.2             | 1,438.5
Total              | 6,276.7     | 7,675.5         | 122,171.0  | 206.1       | 1,075.3         | 52,861.5
2b. Estimation Procedure
The estimation technique used in estimating All Employees (AE) is a weighted link-relative estimator, which is a form of ratio estimation (for detailed mathematical formulae for this estimator, see the BLS Handbook of Methods, Chapter 2, pages 5-7, Bureau of Labor Statistics, 2011). From the sample of establishments reporting in both the previous and current months (the matched sample), the ratio of current-month weighted employment to previous-month weighted employment is computed. The weights are the inverses of the probabilities of selection into the sample and are calculated based on the number of UI accounts actually selected within each allocation cell. Estimates are calculated within each estimation cell and then summed across appropriate cells to form estimates for aggregate levels.
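A minimal sketch of the weighted link-relative step, with invented weights and employment counts (see the Handbook of Methods for the exact production formulae):

```python
# Hedged sketch of a weighted link-relative (ratio) estimator.
# Only units reporting in both months (the matched sample) enter the link;
# each weight is the inverse of the unit's selection probability.
# All data are invented for illustration.
def link_relative_estimate(prev_estimate, matched_reports):
    """matched_reports: iterable of (weight, prev_month_emp, curr_month_emp)."""
    curr = sum(w * c for w, p, c in matched_reports)
    prev = sum(w * p for w, p, c in matched_reports)
    return prev_estimate * curr / prev  # carry the cell level forward by the link

# Invented matched sample for one estimation cell
reports = [(10.0, 100, 102), (10.0, 50, 49), (5.0, 200, 210)]
estimate = link_relative_estimate(10_000.0, reports)  # link = 2560/2500 = 1.024
```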
The weighted link-and-taper estimator used for non-AE data types also accounts for the over-the-month change in the sampled units, but includes a tapering feature that keeps the estimates close to the overall sample average over time. The taper acts as a level correction. As with the AE estimator, only the matched sample is used, which reduces the variance of the over-the-month change. The estimator tapers the estimate toward the previous-month sample average of the current matched sample before applying the current month's change.
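The taper can be sketched the same way; the 0.9/0.1 taper weights and the data below are illustrative choices, not the production values, and `prev_estimate` plays the role of the prior month's published level:

```python
# Hedged sketch of a weighted link-and-taper estimator for a non-AE
# datatype such as average weekly hours. Taper weights and data are invented.
def link_and_taper(prev_estimate, matched_reports, taper=0.9):
    """Taper the prior estimate toward the matched sample's previous-month
    weighted mean, then apply the current month's link."""
    prev = sum(w * p for w, p, c in matched_reports)
    curr = sum(w * c for w, p, c in matched_reports)
    total_weight = sum(w for w, _, _ in matched_reports)
    sample_prev_mean = prev / total_weight
    level = taper * prev_estimate + (1 - taper) * sample_prev_mean  # level correction
    return level * (curr / prev)                                    # over-the-month link

# Invented matched sample: (weight, prev-month hours, curr-month hours)
reports = [(10.0, 34.0, 34.5), (5.0, 40.0, 39.5)]
avg_hours = link_and_taper(35.0, reports)
```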
2c. Degree of accuracy needed for the purpose described in the justification
Like other sample surveys, the CES survey is subject to two types of error: sampling and non-sampling error. The magnitude of sampling error, or variance, is directly related to the size of the sample and the percentage of universe coverage achieved by the sample. Because the CES sample covers about 40 percent of total universe employment, the sampling error on the national, first-closing, total nonfarm employment estimate is small. The relative standard errors for the major industry divisions at the national level under the probability sample are given in the table below.
Industry (form type) | Major Industry Division            | Average Relative Standard Error for All Employment (in percent)
Mining and Logging   | Mining and Logging                 | 1.8
Construction         | Construction                       | 0.8
Manufacturing        | Manufacturing                      | 0.3
Services             | Wholesale Trade                    | 0.5
                     | Retail Trade                       | 0.3
                     | Transportation and Warehousing     | 0.7
                     | Utilities                          | 1.3
                     | Information                        | 1.2
                     | Financial Activities               | 0.4
                     | Professional and Business Services | 0.5
                     | Leisure and Hospitality            | 0.5
                     | Health Services                    | 0.3
                     | Other Services                     | 0.5
Education            | Education                          | 0.9
The estimation of sample variances for the CES survey is accomplished through the method of Balanced Half Samples (BHS). This replication technique uses half samples of the original sample and calculates estimates using those subsamples. The sample variance is calculated by measuring the variability of the estimates made from these subsamples. (For a detailed mathematical presentation of this method, see the BLS Handbook of Methods, Chapter 2, pages 16-17, Bureau of Labor Statistics, 2011.)
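A toy sketch of the BHS idea, with a hand-built balanced pattern over four strata (one pair of units each) and invented employment values; production BHS selects its half-sample patterns from a Hadamard matrix over many more replicate groups:

```python
# Toy sketch of balanced half-sample (BHS) variance estimation.
# Production BHS pairs units within strata and chooses half-sample
# patterns from a Hadamard matrix; here a small hand-built balanced
# pattern over four strata (one pair each) is used, with invented data.
HALF_SAMPLE_PATTERN = [
    [+1, +1, +1, +1],
    [+1, -1, +1, -1],
    [+1, +1, -1, -1],
    [+1, -1, -1, +1],
]

def bhs_variance(pairs, estimator):
    """pairs: per-stratum (unit_a, unit_b) employment values. Each replicate
    keeps one unit per pair, doubling its weight, and re-runs the estimator."""
    full = estimator([a for a, b in pairs] + [b for a, b in pairs])
    replicates = []
    for row in HALF_SAMPLE_PATTERN:
        half = [a if sign > 0 else b for (a, b), sign in zip(pairs, row)]
        replicates.append(estimator([u for u in half for _ in range(2)]))
    return sum((r - full) ** 2 for r in replicates) / len(replicates)

pairs = [(120, 130), (80, 95), (200, 190), (60, 55)]
variance = bhs_variance(pairs, estimator=sum)  # estimator: simple employment total
```

The variance is the average squared deviation of the replicate estimates from the full-sample estimate, so a replicate pattern that happens to reproduce the full-sample total contributes nothing.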
2d. Unusual problems requiring specialized sampling procedures
Benchmark Revisions
The sum of sampling and nonsampling error can be considered the total survey error. Most sample surveys can publish only the sampling error as a measure of error. The CES program can produce an approximation of the total error, on a lagged basis, because of the availability of the independently derived universe data from the QCEW program. While the benchmark error is used as a measure of the total error for the CES survey estimate, it actually represents the difference between two independent estimates derived from separate processes and thus reflects the errors present in each program. Historically, the benchmark revision has been very small for total nonfarm employment. Over the past decade, the absolute benchmark error has averaged 0.2 percent, with an absolute range from less than 0.05 percent to 0.3 percent.
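Computing the benchmark revision itself is a simple percent difference; the employment levels below are invented for illustration:

```python
# Illustrative benchmark revision calculation with invented levels:
# the percent difference between the sample-based CES estimate and the
# QCEW universe count for the same reference month.
def benchmark_revision_pct(ces_estimate, qcew_benchmark):
    return 100.0 * (qcew_benchmark - ces_estimate) / ces_estimate

rev = benchmark_revision_pct(ces_estimate=150_000_000, qcew_benchmark=149_700_000)
# rev = -0.2, i.e., a downward revision of 0.2 percent
```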
Specialized Procedures
The BLS has conducted extensive research into various ways to more directly capture the impact of new business births. This research included obtaining early records of new UI accounts and a pilot program to solicit from this frame. Operationally, a sample-based approach did not yield satisfactory results. This was mainly due to the lack of a comprehensive sampling frame on a timely basis. While both employment gains and losses from new and failed businesses are large in terms of over the year change, research conducted by the BLS shows the net employment (employment gained minus employment lost) is small because the gains and losses offset each other (Mueller, 2006). The sample design accounts for the majority of the employment gain from new businesses by imputing for UI accounts that have gone out-of-business (Kropf, Strifas, and Traetow, 2002). On a semi-annual basis, the universe is reviewed to identify new births. A portion of the births are selected on a probability basis. Thus, only births (and deaths) since the semi-annual update (about a 15-month lag) must be imputed for. The BLS has researched models to account for the residual birth employment not accounted for by the death imputation model. Models are currently in use for all private-industry estimates.
2e. Any use of periodic (less frequent than annual) data collection cycles to reduce burden.
The CES survey was mandated by Congress to be a monthly survey.
3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.
New firms are enrolled into the survey by interviewers working in BLS Data Collection Centers. The response rate for new enrollments is about 65 percent. After enrollment, sample attrition averages about 1 percent per month. The CES survey rotates new units into the sample each quarter both to replace deaths and to re-align the sample by industry and size. Typically about 25 to 30 percent of the units are replaced each year.
The response rate used in making final estimates for the private sector probability sample is about 60 percent. As indicated earlier, these sample respondents from the private sector are combined with government reports that cover 84.7 percent of Federal government employment, 87.7 percent of State government employment, and 72.8 percent of local government employment to make estimates for total nonfarm employment.
The link-relative estimating technique implicitly adjusts for nonrespondents using the respondents' relationship of current to previous month's employment within each estimation cell.
CES survey estimates are generated three times for each month to reflect additional sample received and error corrections. (Estimates are revised two more times to reflect updated universe counts.) Policy makers in both the private and public sectors rely extensively on the first estimate for the month. The BLS has implemented procedures to limit the size of revisions in these preliminary estimates. Automated collection methods, such as CATI, Web, and TDE, have been identified as the best possible means of overcoming the revision problem in the first estimate. These methods have been found to consistently maintain collection rates for preliminary estimates between 68 and 80 percent.
The BLS and the cooperating States and Territories conduct an extensive and vigorous program of notification and non-response follow-up. These efforts include:
Targeted advance notice emails, postcards, and faxes, sent to all sample units; and
Time specific nonresponse prompting emails, telephone calls, and faxes.
The BLS conducts an average of 25,000 non-response prompt phone calls and prints 28,000 non-response prompt postcards per month.
In addition, the BLS follows an aggressive refusal conversion protocol. Each month the BLS Data Collection Centers target prior refusals for re-contact. About one-half of these refusals agree to resume reporting.
Growth of EDI, the direct transfer of data from the firm to the BLS, also provides a high level of response and stability. The BLS currently collects approximately 150,000 reports from 80 large firms via EDI. For final estimates, virtually all of these firms provide data. EDI also experiences very few refusals.
Each year the BLS analyzes the survey estimates and decomposes the total survey error into its components, including sampling error, non-response error, reporting error, and errors from other sources. These analyses are possible because the LDB provides employment data for all units in the population. This employment information can be used to calculate CES estimates from all of the units selected in the sample (100 percent response) and to compare them with the CES estimates using only those units that responded to the CES survey, which provides a measure of the non-response error. Similar methods are used to measure other sources of survey error, which can then be aggregated into the total survey error. See Gershunskaya, Eltinge, and Huff, 2002, for detailed mathematical formulae and numerical results on the analysis of multiple error components in small-domain estimates for the CES program. These analyses are useful in determining the source of errors for past estimates; however, they have not yet proven useful in predicting or limiting the errors in current estimates. In a research paper presented at the 2009 American Statistical Association Annual Meeting, Gershunskaya and Huff (2009) found that non-response error was not a principal factor in the total survey error and that the non-response error is not unidirectional or particularly related to the response rates; the study covers the CES estimates from 2004 through 2008. Detailed tables of response rates for the private sector probability sample are available for earlier years and are updated on an annual basis. Details are available on request.
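The non-response check described in this section can be sketched directly: recompute the link-relative estimate once with every sampled unit (using LDB employment) and once with respondents only, and take the difference. All weights and employment values below are invented.

```python
# Sketch of the non-response-error check: with LDB employment available for
# every sampled unit, the estimate at 100 percent response can be compared
# with the respondents-only estimate. Weights and employment are invented.
def link_relative(prev_level, reports):
    curr = sum(w * c for w, p, c in reports)
    prev = sum(w * p for w, p, c in reports)
    return prev_level * curr / prev

all_units = [(10.0, 100, 101), (10.0, 80, 82), (10.0, 60, 59), (10.0, 40, 41)]
respondents = all_units[:3]                 # suppose the last unit did not respond

full_response = link_relative(1_000.0, all_units)
respondents_only = link_relative(1_000.0, respondents)
nonresponse_error = respondents_only - full_response
```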
Disaster Recovery due to COVID-19
The CES program made only one change to data collection due to the coronavirus (COVID-19) pandemic: because staffing was not sufficient to collect data by telephone, most CATI respondents were directed to report online. Beginning with the March 2020 collection cycle, the BLS sent CATI respondents an email notification instructing them to report via the Web.
4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.
CES has been testing several contact strategies to identify improvements to materials sent to respondents by both postal mail and email. The goal of these tests has been to identify changes that can increase response and/or minimize respondent burden. Previous tests resulted in transitioning from postcards to email reminders of upcoming reporting and in the elimination of pocket folders sent with forms. Another test showed no benefit from providing a letter in advance of address refinement, so those plans were cancelled. For these tests, CES analyzed response rates, the number of phone calls placed to respondents, and the total length of those calls for control and test groups.
CES will continue to perform similar tests to refine our contact strategies. This includes additional comparisons of forms sent by postal mail versus email, as well as changes to the wording of cover letters and/or emails to explain the goal of the contact as clearly as possible. We will continue to evaluate these tests by reviewing response rates and the number and total length of calls. Where appropriate, we will also seek feedback from data collectors on whether the changes being tested made the process of CES reporting clearer and easier to understand.
5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.
Mr. Edwin Robison, Chief, Statistical Methods Division of the Office of Employment and Unemployment Statistics, is responsible for the statistical aspects of the CES survey.
6. References
American Statistical Association (1994) "A Research Agenda to Guide and Improve the Current Employment Statistics Survey." American Statistical Association Panel for the Bureau of Labor Statistics' Current Employment Statistics Survey, January, 1994. Alexandria, VA: American Statistical Association.
Bureau of Labor Statistics. BLS Handbook of Methods Chapter 2: Employment, Hours, and Earnings from the Establishment Survey. Washington DC: Bureau of Labor Statistics, 2004, p.5. http://www.bls.gov/opub/hom/pdf/homch2.pdf
Bureau of Labor Statistics. BLS Handbook of Methods Chapter 2: Employment, Hours, and Earnings from the Establishment Survey. Washington DC: Bureau of Labor Statistics, 2004, pp. 5-7. http://www.bls.gov/opub/hom/pdf/homch2.pdf
Bureau of Labor Statistics. BLS Handbook of Methods Chapter 2: Employment, Hours, and Earnings from the Establishment Survey. Washington DC: Bureau of Labor Statistics, 2004, pp. 8-9. http://www.bls.gov/opub/hom/pdf/homch2.pdf
Butani, Shail, Kenneth W. Robertson, and Kirk Mueller (1998) "Assigning Permanent Random Numbers to the Bureau of Labor Statistics Longitudinal (Universe) Data Base." Proceedings of the Survey Research Methods Section, American Statistical Association, 9-13 August, 1998. Dallas: American Statistical Association, 1998.
http://www.bls.gov/osmr/research-papers/1998/pdf/st980080.pdf
Crankshaw, Mark, Laurie Kujawa, and George Stamas (2002) "Recent Experiences in Survey Coordination and Sample Rotation within Monthly Business Establishment Surveys." Proceedings of the Survey Research Methods Section, American Statistical Association, 11-15 August, 2002. New York: American Statistical Association, 2002.
http://www.bls.gov/osmr/research-papers/2002/pdf/st020290.pdf
Erkens, Gregory, Larry L. Huff, and Julie B. Gershunskaya (2005) "Alternative Sample Allocations for the U.S. Current Employment Statistics Survey." Proceedings of the Survey Research Methods Section, American Statistical Association, 7-11 August, 2005. Minneapolis: American Statistical Association, 2005, pp. 1-4.
https://www.bls.gov/osmr/research-papers/2005/pdf/st050240.pdf
Gershunskaya, Julie, John L. Eltinge, and Larry L. Huff (2002) “Use of Auxiliary Information to Evaluate a Synthetic Estimator in the U.S. Current Employment Statistics Program.” Proceedings of the Section on Survey Research Methods, American Statistical Association, 11-15 August, 2002. New York, NY: American Statistical Association, 2002.
http://www.amstat.org/sections/SRMS/Proceedings/y2002/Files/JSM2002-000844.pdf
Gershunskaya, Julie, and Larry Huff (2009) “Components of Error Analysis in the Current Employment Statistics Survey.” Proceedings of the Survey Research Methods Section, American Statistical Association, 1-6 August 2009. Washington, DC: American Statistical Association, 2009.
https://www.bls.gov/osmr/research-papers/2009/st090050.htm
Goldenberg, Karen L., Susan E. Moore, and Richard J. Rosen (1994) "Commercial Payroll Software and the Quality of Employment Data." Proceedings of the Survey Research Methods Section, American Statistical Association, 13-18 August, 1994. Toronto: American Statistical Association, 1994.
http://www.amstat.org/sections/SRMS/Proceedings/papers/1994_178.pdf
Fairman, Kristin, Margaret Applebaum, Chris Manning, and Polly Phipps (2009) “Response Analysis Survey: Examining Reasons for Employment Differences Between the QCEW and the CES Survey.” Proceedings of the Survey Research Methods Section, American Statistical Association, 1-6 August 2009. Washington, DC: American Statistical Association, 2009.
https://www.bls.gov/osmr/research-papers/2009/st090240.htm
Kropf, Jurgen, Sharon Strifas, and Monica Traetow (2002) "Accounting for Business Births and Deaths in CES: Bias vs. Net Birth/Death Modeling." Washington DC: Bureau of Labor Statistics, 2002.
https://www.bls.gov/osmr/research-papers/2002/pdf/st020090.pdf
Mueller, Kirk (2006) "Impact of business births and deaths in the payroll survey." Monthly Labor Review, Vol. 129, No. 5, May 2006, pp. 28-34. http://www.bls.gov/opub/mlr/2006/05/art4full.pdf
Werking, George S., Richard L. Clayton, and Richard J. Rosen (1995) "Studying the Causes of Employment Count Differences Reported in Two BLS Programs." Proceedings of the Survey Research Methods Section, American Statistical Association, 13-17 August, 1995. Orlando: American Statistical Association, 1995.
http://www.amstat.org/sections/SRMS/Proceedings/papers/1995_137.pdf