Attachment M - Nonresponse Bias Analysis
Construction Progress Reporting Surveys
OMB: 0607-0153
October 31, 2018

Non-response Bias Study for Value of
Construction Put in Place
Joseph J Barth, CSSMB, ESMD

1. Introduction
Both private, non-residential construction and multifamily residential projects in the Value of
Construction Put in Place (VIP) series suffer from low response rates. To understand any problems
this may be causing and to fulfill Office of Management and Budget requirements, we conducted a
nonresponse bias study for these two components of the VIP. We examine response rates and estimates
of bias for key variables, and provide suggestions for improvements and future work.

2. Background
2.1 Survey Overview
The VIP is a monthly measure of the dollar amount of construction put in place within the United States.
The VIP data are used in the National Income and Product Accounts produced by the Bureau of Economic
Analysis (BEA). The current historical series began in the early 1960's. The analysis presented in this
paper covers only privately-owned, nonresidential construction and multifamily residential construction.
Published VIP data are compiled from: (a) a series of construction project surveys, (b) estimates from
other construction series, and (c) data from secondary sources such as regulatory agencies. This
approach is quite different from the establishment- or company-based survey methods used by most
economic surveys at the Census Bureau. Data collected through the VIP approach represent an
all-encompassing economic measure of construction spending. The survey data are collected from the
project owner's point of view. All construction-related expenditures are included, not just contractor
receipts.
The following types of expenditures are included in VIP:
• New buildings and structures
• Additions, alterations, major replacements, etc. to existing buildings and structures
• Installed mechanical and electrical equipment
• Installed industrial equipment, such as boilers and blast furnaces
• Site preparation and outside construction, such as streets, sidewalks, parking lots, utility connections, etc.
• Cost of labor and materials (including owner supplied)
• Cost of construction equipment rental
• Profit and overhead costs
• Cost of architectural and engineering (A&E) work
• Any miscellaneous costs of the project that are on the owner's books

The VIP excludes several types of expenditures, such as the value of maintenance and repairs to existing
structures and land acquisition.
Most of the survey methodology is not necessary for understanding nonresponse bias; however, because
imputation is by nature linked to nonresponse, we go over it briefly. Total construction cost (Rev5c) is
imputed by multiplying a factor by either the project selection value (PSV), an estimate of a construction
project's cost available from the sampling frame, for private non-residential cases, or by the total units
for private multifamily cases; the factor is calculated annually as a sum of ratios of Rev5c to PSV for
responding construction projects. Monthly VIP is imputed by multiplying Rev5c by a factor specific to the
month being imputed; the factor is calculated monthly as the ratio of total monthly VIP for active
construction projects to the total Rev5c for active projects, with a possible additional adjustment if
the start date of the project was imputed due to nonresponse.
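The following is a minimal sketch of this two-step imputation structure, not the production code; the pandas DataFrame layout and column names (psv, rev5c, monthly_vip) are assumptions made for illustration only.

```python
import pandas as pd

def impute_rev5c(psv: float, annual_factor: float) -> float:
    """Impute total construction cost (Rev5c) from the frame measure of size.

    For private non-residential cases the measure of size is PSV; for private
    multifamily cases it would be total units. The annual factor is derived from
    the Rev5c-to-PSV relationship among responding projects, as described above.
    """
    return psv * annual_factor

def monthly_vip_factor(active: pd.DataFrame) -> float:
    """Month-specific factor: total reported monthly VIP divided by total Rev5c
    for active construction projects."""
    return active["monthly_vip"].sum() / active["rev5c"].sum()

def impute_monthly_vip(rev5c: float, factor: float) -> float:
    """Impute a nonrespondent's monthly VIP as Rev5c times the monthly factor.
    A further adjustment (not shown) applies when the start date was imputed."""
    return rev5c * factor
```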

2.2 Input Data
This nonresponse bias analysis was conducted using production data for May 2017. Different data
sources are used for the two components of VIP studied.
For the private, non-residential component we use a file created each month containing all in-scope
construction projects for the current and previous 26 months. This file, known as the 27-month file, is
our starting point. We further restrict the file to privately (i.e., non-publicly) owned nonresidential
construction projects that entered the universe after August 2001 and that are not in abeyance
(suspended construction) or a duplicate of another project.
For multifamily projects we use a combination of monthly files for both VIP and the Survey of
Construction (SOC). We restrict this merged file to multifamily units selected after August 2001;
duplicates and out-of-scope projects have already been eliminated from the input files during our
regular processing.

3. Response Rates
The following section focuses on response rates for Private nonresidential projects. Multifamily
response rates are discussed in Section 5.
Unit response rates (URRs) are calculated two ways. The first considers a unit to be a response if it
reported VIP for a given month. The second considers a unit to be a response if it reported revised item
5c during the initial selection mailout. In both cases we exclude units with a missing response flag, as
this indicates that the unit is not active at the given time. The unit response rate is calculated as the
number of units with a response flag indicating a response, divided by the total number of units with a
non-missing response flag.
In detail, to calculate URRs using monthly response status, we classify each project into one of three
categories in each month. The first category contains all projects with a VIP response flag value of “R”
and a status flag not equal to “7”; status 7 projects are analyst imputed and not truly a response. The
second category contains all projects with a VIP response flag of “*” or a status flag equal to “7”. The
third category contains all remaining projects (VIP response flag not equal to “R” or “*” and status flag
not equal to “7”) and consists of all projects that are ineligible for tabulation that month. If we denote
the number of projects in the first category as $n_r$ and the number of projects in the second category as
$n_{nr}$, then the URR is equal to $\frac{n_r}{n_r + n_{nr}}$.

The calculation of URRs using revised item 5c response status is largely similar, just with response
defined differently. The three categories are: respondents, those projects with revised item 5c response
flag equal to “R”; nonrespondents, those projects with revised item 5c response flag equal to “*”; and
ineligibles, those projects with revised item 5c response flag not equal to “R” or “*”. Again we denote
the number of projects in the first category as $n_r$ and the number of projects in the second category as
$n_{nr}$, and the URR is equal to $\frac{n_r}{n_r + n_{nr}}$.
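As an illustrative sketch only, the URR under the monthly response-status definition could be computed as below; the column names vip_resp_flag and status_flag are assumptions standing in for the production flags described above.

```python
import pandas as pd

def unit_response_rate(df: pd.DataFrame) -> float:
    """URR = n_r / (n_r + n_nr), ignoring projects ineligible for tabulation."""
    # Respondents: reported monthly VIP ("R") and not analyst-imputed (status "7").
    resp = (df["vip_resp_flag"] == "R") & (df["status_flag"] != "7")
    # Nonrespondents: flagged "*" or analyst-imputed via status "7".
    nonresp = (df["vip_resp_flag"] == "*") | (df["status_flag"] == "7")
    n_r, n_nr = resp.sum(), nonresp.sum()
    return n_r / (n_r + n_nr)
```

The revised item 5c version is identical in form, with the categories defined by the revised item 5c response flag instead.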

A unit is considered a certainty if it is taken from a stratum where every unit in the stratum is selected
into the sample, i.e. the sampling rate for that stratum is equal to 1. This includes every unit that has a
value of $10 million or more based on the sampling frame, and includes smaller units from certain types
of construction. Note that it is possible for units from strata without a sampling rate of 1 to have a
weight close to or equal to one after adjustments and that these units are not considered certainties in
this analysis.
We calculate response rates for a given reference month and lag. For example, since we are calculating
the response rates for a reference month of May 2017, lag 0 refers to the data collected for May 2017
that is available in May 2017; lag 1 refers to the data collected for April 2017 that is available in
May 2017, and so on. Since revised item 5c is either reported or not reported, and this does not change
by month, we only need to calculate one set of response rates using that response status. Since VIP is
reported on a monthly basis, we calculate response rates for each of the 27 possible lags available on the
monthly VIP file.
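A small illustration of this lag convention, using only the Python standard library (the function name is ours, not a production routine):

```python
from datetime import date

def data_month(reference: date, lag: int) -> date:
    """Return the month whose data is being evaluated: `lag` months before the reference month."""
    months = reference.year * 12 + (reference.month - 1) - lag
    return date(months // 12, months % 12 + 1, 1)

# For a May 2017 reference month, lag 0 is May 2017 and lag 1 is April 2017.
assert data_month(date(2017, 5, 1), 0) == date(2017, 5, 1)
assert data_month(date(2017, 5, 1), 1) == date(2017, 4, 1)
```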
Table 3.1: Overall URR
(Lag columns use monthly VIP response status.)

Group | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample | Rev5c Resp Flag
Overall | 21.6% | 27.0% | 29.6% | 34.5% | 37.1% | 39.7% | 35.9% | 8,714 | 54.4%

Table 3.2: URR by Certainty Status

Certainty Status | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample | Rev5c Resp Flag
Noncertainty | 18.3% | 23.0% | 25.3% | 31.2% | 32.2% | 35.2% | 31.7% | 3,835 | 53.2%
Certainty | 24.1% | 30.1% | 33.0% | 37.3% | 41.0% | 43.3% | 39.3% | 4,878 | 55.5%

Table 3.3: URR by Type of Construction

Type of Construction | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample | Rev5c Resp Flag
Lodging (01) | 23.3% | 28.5% | 30.1% | 32.9% | 35.8% | 38.1% | 34.7% | 829 | 50.7%
Office (02) | 17.4% | 21.6% | 23.5% | 28.0% | 30.3% | 32.3% | 29.2% | 1,555 | 46.9%
Commercial (03) | 18.4% | 22.9% | 25.6% | 30.6% | 32.5% | 34.6% | 31.4% | 2,705 | 51.3%
Health Care (04) | 31.7% | 40.3% | 42.6% | 47.6% | 50.8% | 53.2% | 49.2% | 982 | 65.0%
Education (05) | 34.6% | 42.7% | 47.2% | 52.2% | 50.5% | 52.6% | 50.6% | 678 | 69.4%
Religious (06) | 28.9% | 34.7% | 37.2% | 39.5% | 41.8% | 44.3% | 40.9% | 286 | 64.1%
Amusement & Recreation (08) | 24.6% | 29.8% | 33.4% | 38.2% | 40.8% | 44.9% | 40.0% | 352 | 59.2%
Transportation (09) | 13.7% | 25.0% | 27.9% | 35.8% | 42.2% | 44.8% | 38.9% | 112 | 55.0%
Power (11) | 11.3% | 14.3% | 17.7% | 25.0% | 28.1% | 28.9% | 25.9% | 398 | 40.7%
Not Elsewhere Classified | 20.0% | 26.8% | 27.5% | 28.3% | 40.1% | 43.5% | 35.9% | 53 | 57.5%
Manufacturing (20-39) | 19.8% | 25.2% | 27.9% | 35.3% | 39.3% | 43.1% | 37.6% | 763 | 59.7%

Table 3.4: URR by Project Selection Value (in Thousands of Dollars)

PSV Category | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample | Rev5c Resp Flag
>=10,000 | 24.3% | 30.2% | 33.0% | 37.1% | 40.5% | 42.9% | 38.9% | 4,530 | 55.0%
>=5,000 | 22.9% | 28.4% | 31.7% | 37.1% | 41.9% | 43.5% | 39.4% | 1,256 | 58.6%
>=2,000 | 22.6% | 27.6% | 30.9% | 35.9% | 36.8% | 41.4% | 36.8% | 1,026 | 57.0%
>=750 | 15.2% | 19.7% | 21.2% | 28.0% | 29.2% | 29.6% | 27.8% | 1,028 | 50.7%
>=250 | 11.7% | 16.1% | 18.1% | 24.7% | 23.0% | 28.0% | 24.1% | 644 | 49.8%
>=75 | 11.9% | 13.4% | 16.0% | 23.9% | 20.0% | 26.5% | 22.4% | 230 | 47.2%


Figure 3.1: URR over Time by Type of Construction
[Line chart of unit response rate (0% to 60%) against lag (0, 1, 2, 3 to 10, 11 to 18, 19 to 26), with one line per type of construction (01-06, 08, 09, 11, NEC, Manufacturing).]

Quantity response rates (QRRs) for monthly VIP are calculated in a similar manner to URRs. Instead of
simply counting responding and nonresponding units, we sum up weighted monthly VIP for respondents
and divide it by the weighted monthly VIP for responding and nonresponding units.
When calculating monthly VIP QRRs using monthly VIP reporting status, we place projects into the same
three categories as when calculating URRs. However, instead of using $n_r$ and $n_{nr}$, we calculate the
weighted sum of monthly VIP for categories one and two. We define $\hat{t}_r$ as the weighted sum of
monthly VIP for every project in the first category and $\hat{t}_{nr}$ as the weighted sum of monthly VIP
for every project in the second category. Projects in the third category are again unused. The QRR is
then calculated as $\frac{\hat{t}_r}{\hat{t}_r + \hat{t}_{nr}}$.

While we could calculate a QRR using response of revised item 5c, it would involve treating imputed
values of monthly VIP as reported which would be of limited value. Since projects go into the first
category based on the response status to revised item 5c, not monthly VIP, it is entirely possible that we
will have projects with imputed monthly VIP in the first category. Similarly, we may have projects in the
second category where monthly VIP is reported, not imputed. In both cases we would be misclassifying
monthly VIP and the result would not match any standard definition of a response rate. We could
calculate a QRR for revised item 5c instead of monthly VIP, however the focus of this study is monthly
VIP.
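Continuing the earlier sketch, the weighted QRR could be computed as follows; the weight and monthly_vip column names are again assumptions for illustration only.

```python
import pandas as pd

def quantity_response_rate(df: pd.DataFrame) -> float:
    """QRR = weighted reported monthly VIP over weighted monthly VIP for
    respondents plus nonrespondents (ineligible projects excluded)."""
    resp = (df["vip_resp_flag"] == "R") & (df["status_flag"] != "7")
    nonresp = (df["vip_resp_flag"] == "*") | (df["status_flag"] == "7")
    t_r = (df.loc[resp, "weight"] * df.loc[resp, "monthly_vip"]).sum()
    t_nr = (df.loc[nonresp, "weight"] * df.loc[nonresp, "monthly_vip"]).sum()
    return t_r / (t_r + t_nr)
```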

Table 3.5: Overall QRR of Monthly VIP

Group | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample
Overall | 31.6% | 38.1% | 42.5% | 45.9% | 47.3% | 48.9% | 46.3% | 8,714

Table 3.6: QRR of Monthly VIP by Certainty Status

Certainty Status | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample
Noncertainty | 38.7% | 46.7% | 49.1% | 53.1% | 52.6% | 53.9% | 52.3% | 3,835
Certainty | 29.0% | 34.9% | 39.8% | 42.4% | 44.9% | 46.5% | 43.5% | 4,878

Table 3.7: QRR of Monthly VIP by Type of Construction

Type of Construction | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample
Lodging (01) | 36.0% | 41.8% | 43.5% | 46.2% | 47.5% | 47.4% | 46.3% | 829
Office (02) | 34.0% | 39.2% | 41.7% | 46.3% | 46.8% | 46.3% | 45.6% | 1,555
Commercial (03) | 34.7% | 42.2% | 43.8% | 48.4% | 49.7% | 50.1% | 48.4% | 2,705
Health Care (04) | 42.7% | 52.3% | 53.5% | 56.9% | 58.1% | 61.8% | 57.9% | 982
Education (05) | 49.7% | 59.5% | 63.5% | 65.6% | 61.6% | 62.7% | 62.7% | 678
Religious (06) | 34.8% | 46.2% | 51.8% | 49.5% | 52.1% | 51.7% | 50.3% | 286
Amusement & Recreation (08) | 50.6% | 53.6% | 54.9% | 50.6% | 51.0% | 52.7% | 51.6% | 352
Transportation (09) | 23.3% | 41.2% | 41.4% | 55.4% | 57.3% | 55.9% | 53.9% | 112
Power (11) | 17.1% | 22.0% | 36.1% | 40.7% | 46.5% | 48.0% | 42.8% | 398
Not Elsewhere Classified | 18.5% | 20.4% | 30.0% | 26.1% | 40.9% | 54.7% | 38.6% | 53
Manufacturing (20-39) | 14.8% | 22.3% | 24.5% | 27.2% | 29.0% | 33.5% | 28.9% | 763


Table 3.8: QRR of Monthly VIP by Project Selection Value (in Thousands of Dollars)

PSV Category | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample
>=10,000 | 28.9% | 34.8% | 39.7% | 42.2% | 44.5% | 46.1% | 43.2% | 4,530
>=5,000 | 39.0% | 46.8% | 49.3% | 53.1% | 54.7% | 57.7% | 54.0% | 1,256
>=2,000 | 35.1% | 44.6% | 48.5% | 51.0% | 52.6% | 56.2% | 52.1% | 1,026
>=750 | 40.3% | 45.9% | 47.2% | 52.8% | 54.1% | 50.8% | 51.7% | 1,028
>=250 | 34.6% | 47.5% | 50.4% | 54.9% | 51.2% | 50.5% | 51.3% | 644
>=75 | 56.0% | 55.9% | 53.4% | 57.9% | 49.7% | 56.5% | 54.7% | 230

Figure 3.2: QRR over Time by Type of Construction
[Line chart of quantity response rate (0% to 70%) against lag (0, 1, 2, 3 to 10, 11 to 18, 19 to 26), with one line per type of construction (01-06, 08, 09, 11, NEC, Manufacturing).]

We see a few patterns in URR and QRR for private nonresidential monthly VIP. URR and QRR both
increase with lag, with the bulk of the increase occurring during the first few months. URR is higher for
certainties than non-certainties, while QRR is higher for non-certainties than certainties. QRR is lowest
for the highest value group, but it is only roughly 8 to 10 percentage points below the other value
groups. Manufacturing construction has the lowest QRR, while Power has the lowest URR. Overall, when
looking at response across value groups, we see a slight trend towards higher URR for higher value
groups, but there is no discernible trend when looking at QRR by value group.


4. Relative Bias
The following section focuses on relative bias for Private Nonresidential projects. This analysis was not
possible for multifamily projects due to lack of an available frame variable like Project Selection Value
(PSV).
We look at the relative bias of Project Selection Value (PSV) calculated for various groups and under two
ways of determining response status. PSV is used for the bias calculations because it is available for
both respondents and non-respondents and is reasonably well correlated with both revised item 5c and the
total sum of monthly Value of Construction Put in Place (VIP) for a project; the correlations between the
three variables are all at least 0.75. We use Type of Construction (TC), certainty status, and PSV as our
categorization variables. For response status, we use either the actual response status of VIP for a
month or whether or not revised item 5c is reported; this second definition is considered because revised
item 5c is used for imputation of monthly VIP.
We calculate relative bias for a given reference month and lag. For example, if we are calculating the
relative bias for a reference month of May 2017, then lag 0 would refer to the data collected for May
2017 that is available in May 2017; lag 1 would refer to the data collected for April 2017 that is available
in May 2017, and so on. Since revised item 5c is either reported or not reported, and this doesn’t change
by month, we only need to calculate one relative bias using that response status. Since VIP is reported
on a monthly basis, we calculate a relative bias for each of a possible 27 lags available on the monthly
VIP file.
Relative bias is presented for response based on monthly VIP response status for lags 0, 1, and 2,
corresponding to the preliminary, first revision, and second revision releases. We average the relative
biases for lags 3 to 10, 11 to 18, and 19 to 26 in order to keep the presented data manageable while
still giving a picture of the relative bias over the full life of a project.
For a given reference month and lag, we set each project as either a response or non-response. We also
classify it into its appropriate category based on TC, certainty status, or PSV, depending on what we are
looking at. The average PSV is calculated for the response and non-response groups for each level of the
classification variable. The bias is then calculated as $\frac{n - n_r}{n}\left(\overline{PSV}_r - \overline{PSV}_{nr}\right)$, where $n$ is the number of
sampled projects for the given level of the classification variable; $n_r$ is the number of responding
sampled projects for the given level of the classification variable; $\overline{PSV}_r$ is the estimation-weighted
average of PSV for responding sampled projects for the given level of the classification variable; and
$\overline{PSV}_{nr}$ is the estimation-weighted average of PSV for non-responding sampled projects for the given
level of the classification variable. Thus the relative bias depends on both the response rate and the
difference in average PSV between respondents and non-respondents. A positive bias means that
respondents tend to be larger than non-respondents and that basing our estimates on only respondents
will result in an overestimation. Relative bias is calculated by taking the bias and dividing by the average
PSV for all sampled units for a given level of the classification variable.
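A sketch of this calculation for one level of a classification variable is shown below; the respondent, psv, and weight column names are assumptions for illustration, and the overall average in the denominator is taken here as estimation-weighted.

```python
import pandas as pd

def relative_bias_psv(df: pd.DataFrame) -> float:
    """Relative bias of PSV: ((n - n_r) / n) * (mean_r - mean_nr), divided by
    the average PSV over all sampled units in the group."""
    def wmean(g: pd.DataFrame) -> float:
        # Estimation-weighted average of PSV.
        return (g["psv"] * g["weight"]).sum() / g["weight"].sum()

    resp, nonresp = df[df["respondent"]], df[~df["respondent"]]
    n, n_r = len(df), len(resp)
    bias = (n - n_r) / n * (wmean(resp) - wmean(nonresp))
    return bias / wmean(df)
```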
We do not see a strong trend of relative bias decreasing as lag time increases, either overall or by the
different categorization variables. We also see (as shown in Section 3) an increase in response rate as
lag time increases. This means that despite collecting data from more units over time, we do not see a decrease
in the difference between respondents and non-respondents. This suggests that we may be able to find
improvements to current nonresponse followup (NRFU) procedures. Any final conclusions would require
an analysis of bias with respect to current imputation procedures, as imputation is the model that links
project selection value to monthly VIP.
We see certainties have a lower relative bias than noncertainties, which suggests putting greater effort
into NRFU on non-certainties. The degree to which the effort should be increased is proportional to the
percentage of the estimated totals that non-certainties account for. It is also worth noting that the
opposite is true for some types of construction; one such example is TC 09, where certainties have a
68.8% relative bias and noncertainties have a -28.7% relative bias at lag 0 (the initial release of an
estimate for a given month).
Looking at relative bias for Type of Construction level estimates and excluding Not Elsewhere Classified,
we see that TC 01 tends to have the lowest relative bias. After TC 01 come TC 08 and 09. TCs 02 and 11
tend to have the largest relative bias values, suggesting that they may benefit the most from additional
NRFU resources.
Looking at relative bias by project selection value gives consistently low results. This is to be expected
since the categories limit how much the nonrespondents can differ from the respondents. No strong
patterns appear in the relative bias statistics to suggest a particular area to focus NRFU. The occasional
large value of relative bias for the largest category (those units with project selection values over
$10,000,000) can be partially understood by the potential for arbitrarily large differences between
project selection values for units in that group; in smaller groups the difference is bounded by the
minimum and maximum values that define the group.
While the relative bias values for project selection value can be quite high, they do not immediately
indicate a problem. We know project selection value for all units in the sampling frame, so in practice
we know what the true total is. Our real estimation target is the total value of construction, which has a
reasonably strong relationship with project selection value. By making use of the project selection value
of nonrespondents, we should be able to produce imputation-based estimates of total value of
construction with small relative bias.
Table 4.1: Overall Relative Bias of Project Selection Value

Group | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample | Rev5c Resp Flag
Overall | 47.7% | 41.1% | 52.0% | 32.5% | 40.8% | 24.9% | 34.3% | 8,714 | 7.3%

Table 4.2: Relative Bias of Project Selection Value by Certainty Status

Certainty Status | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample | Rev5c Resp Flag
Noncertainty | 39.5% | 38.0% | 37.2% | 23.5% | 31.6% | 24.1% | 27.7% | 3,835 | 9.1%
Certainty | -2.5% | -6.9% | 1.8% | 0.3% | -1.9% | -7.5% | -3.0% | 4,878 | -3.0%

Table 4.3: Relative Bias of Project Selection Value by Type of Construction

Type of Construction | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample | Rev5c Resp Flag
Lodging (01) | 9.9% | 17.1% | 15.5% | 18.4% | 20.3% | 9.2% | 15.8% | 829 | 6.29%
Office (02) | 75.4% | 66.5% | 60.3% | 31.9% | 49.7% | 53.7% | 47.6% | 1,555 | 2.10%
Commercial (03) | 71.0% | 82.3% | 73.6% | 46.9% | 48.8% | 23.8% | 43.8% | 2,705 | 8.65%
Health Care (04) | 74.2% | 65.4% | 68.3% | 37.7% | 39.1% | 34.6% | 40.7% | 982 | 15.82%
Education (05) | 34.0% | 24.8% | 26.5% | 22.4% | 31.1% | 16.4% | 23.9% | 678 | 4.06%
Religious (06) | 25.0% | 33.1% | 56.6% | 47.0% | 45.8% | 40.6% | 43.8% | 286 | 15.71%
Amusement & Recreation (08) | 23.8% | 18.2% | 25.5% | 26.9% | 17.4% | 3.2% | 16.6% | 352 | 18.25%
Transportation (09) | 17.4% | 19.5% | 27.5% | 29.9% | 40.6% | 38.2% | 34.6% | 112 | -6.07%
Power (11) | 68.9% | 47.4% | 81.9% | 49.6% | 63.3% | 42.9% | 53.5% | 398 | 16.50%
Not Elsewhere Classified | 3.5% | -14.2% | -23.6% | 10.1% | -11.8% | -13.3% | -5.7% | 53 | -19.10%
Manufacturing (20-39) | -14.0% | -24.3% | -16.7% | -39.8% | -39.4% | -34.5% | -35.7% | 763 | -3.84%

Table 4.4: Relative Bias of Project Selection Value by Project Selection Value (in Thousands of Dollars)

PSV Category | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample | Rev5c Resp Flag
>=10,000 | -3.1% | -7.3% | 1.7% | 0.6% | -0.9% | -6.7% | -2.4% | 4,530 | -2.2%
>=5,000 | 2.1% | 2.5% | 2.4% | 1.6% | 0.6% | 0.3% | 1.0% | 1,256 | 0.3%
>=2,000 | 0.2% | -1.1% | -0.8% | -0.7% | 1.7% | 1.8% | 0.8% | 1,026 | 0.1%
>=750 | 4.8% | 5.3% | 6.2% | 3.8% | 2.4% | 2.1% | 3.0% | 1,028 | 0.9%
>=250 | 2.7% | 1.1% | 4.0% | 0.9% | 1.2% | -2.8% | 0.1% | 644 | -0.1%
>=75 | -12.0% | -9.2% | -11.1% | -2.0% | 1.2% | -0.4% | -1.5% | 230 | -0.5%


Figure 4.1: Relative Bias of PSV over Time by Type of Construction, All
[Line chart of relative bias (-60% to 100%) against lag (0, 1, 2, 3 to 10, 11 to 18, 19 to 26), with one line per type of construction (01-06, 08, 09, 11, NEC, Manufacturing).]

Figure 4.2: Relative Bias of PSV over Time by TC, Noncertainties
[Line chart of relative bias (-100% to 80%) against lag, with one line per type of construction.]

Figure 4.3: Relative Bias of PSV over Time by TC, Certainties
[Line chart of relative bias (-60% to 80%) against lag, with one line per type of construction.]


5. Private Multifamily Structures
Unit and Quantity Response Rates were calculated for private multifamily (MF) structures. For these
structures, response status is defined only by whether or not a unit reported VIP for a given month. As
with private non-residential structures, URR is calculated as the number of units with a response flag
indicating a response, divided by the total number of units with a non-missing response flag. Quantity
response rates for monthly VIP are calculated in a similar manner to URRs. Instead of simply counting
responding and nonresponding units, we sum up weighted monthly VIP for respondents and divide it by
the weighted monthly VIP for responding and nonresponding units. Project Selection Value is not
available for multifamily structures, so we define size categories based on the number of units
associated with that structure on the frame; number of units is also collected as a survey variable, but
for measure of size purposes we use the frame value. Additionally, since multifamily structures all share
the same Type of Construction (residential), we do not provide a breakdown by that variable.
We see similar patterns in URR and QRR for both private, non-residential and private, multifamily
structures. URR and QRR both increase with lag, with the bulk of the increase occurring during the first
few months. URR is higher for certainties than non-certainties for both groups, while QRR is higher for
non-certainties than certainties; the URR difference is much smaller in the case of multifamily
structures. When looking at size of units, we see a slight trend towards higher URR for larger units for
both multifamily and private, non-residential structures. There is no strong trend when looking at QRR by
size, although we see some evidence that for multifamily structures the smallest structures have a lower
QRR than the rest of the population.
Table 5.1: Overall URR

Group | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample
Overall | 26.9% | 31.4% | 33.2% | 34.9% | 38.0% | 40.3% | 36.9% | 3,957

Table 5.2: URR by Certainty Status

Certainty Status | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample
Noncertainty | 26.4% | 32.0% | 33.7% | 35.1% | 37.0% | 39.4% | 36.5% | 727
Certainty | 27.1% | 31.3% | 33.1% | 34.8% | 38.2% | 40.5% | 37.0% | 3,227


Table 5.3: URR by Number of Units

Number of Units | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample
>= 300 | 35.7% | 41.6% | 43.6% | 44.6% | 48.1% | 51.3% | 47.1% | 548
>= 200 | 32.4% | 36.2% | 40.6% | 43.8% | 47.5% | 49.3% | 45.7% | 474
>= 100 | 30.9% | 36.7% | 37.3% | 41.3% | 47.2% | 49.2% | 44.7% | 573
>= 50 | 35.2% | 40.2% | 41.9% | 41.2% | 45.6% | 49.5% | 44.7% | 541
>= 25 | 25.9% | 30.0% | 31.6% | 33.0% | 36.2% | 39.0% | 35.3% | 571
>= 5 | 16.8% | 20.2% | 21.4% | 22.5% | 23.7% | 25.9% | 23.5% | 1,118
>= 0 | 9.52% | 13.4% | 16.8% | 18.3% | 19.5% | 18.6% | 18.2% | 130

Table 5.4: Overall QRR

Group | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample
Overall | 52.6% | 59.2% | 62.3% | 63.9% | 67.1% | 69.8% | 65.94% | 3,957

Table 5.5: QRR by Certainty Status

Certainty Status | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample
Noncertainty | 56.8% | 62.6% | 62.9% | 64.1% | 70.0% | 74.7% | 68.6% | 727
Certainty | 50.2% | 57.1% | 62.0% | 63.6% | 65.0% | 66.2% | 64.0% | 3,227

Table 5.6: QRR by Number of Units

Number of Units | Lag 0 | Lag 1 | Lag 2 | Lags 3 to 10 | Lags 11 to 18 | Lags 19 to 26 | Average | Average Monthly Sample
300+ | 49.9% | 59.4% | 64.9% | 65.1% | 66.9% | 67.1% | 65.4% | 548
200-299 | 55.0% | 57.7% | 63.9% | 67.6% | 68.4% | 71.9% | 68.1% | 474
100-199 | 49.5% | 57.7% | 58.4% | 57.9% | 66.2% | 71.8% | 64.2% | 573
50-99 | 67.3% | 69.4% | 68.7% | 69.3% | 72.9% | 75.7% | 72.2% | 541
25-49 | 43.0% | 53.0% | 55.1% | 64.0% | 68.0% | 73.7% | 66.5% | 571
5-24 | 48.4% | 55.5% | 58.6% | 59.2% | 59.5% | 55.0% | 57.5% | 1,118
2-4 | 44.6% | 41.0% | 43.5% | 48.8% | 50.5% | 56.2% | 50.9% | 130

We did not look at relative bias for multifamily structures due to the lack of a suitable frame variable.
While we could attempt a model, any such model would be highly dependent on imputed values which
would limit the value of any results we may be able to obtain.

6. Future Work
While evidence for nonresponse bias is present throughout the survey, we do see some areas that may
be viable for targeting. In general, noncertainties display a larger relative bias than certainties but are
also smaller; it would be worth allocating resources towards published estimates that are primarily
driven by noncertainty projects. Nonresponse bias is present in all types of construction, with the
relative rankings shifting depending on the definition of response used and the time of measurement.
This makes it difficult to suggest any types of construction to focus nonresponse bias reduction efforts
on. However, we see that Health Care, Religious, and Power projects tend to have larger relative biases
regardless of the response definition used; this suggests that looking at these three construction types
should be helpful.
While nonresponse bias estimates are not available for multifamily structures, we are able to look at
response rates. While both certainties and non-certainties have similar unit response rates, the quantity
response rate for certainties is lower, suggesting that we may want to distribute more resources to
nonresponse follow-up for certainties.
These results can also feed into future work on imputation. Ideally, imputation would eliminate almost all
nonresponse error from our estimates, although this ideal will never be fully achieved in practice. When
conducting imputation research, the results from this study could be used as a baseline for measuring how
well the imputed values perform.
Finally, VIP will be making use of the Account Manager Program, where analysts develop relationships
with specific businesses in order to obtain better responses from those businesses. We hope to see
response rates improve with this program, which should lead to some improvement in the nonresponse
bias.

7. References
Construction Spending (VIP) Survey Website,
https://www.census.gov/construction/c30/vip_csec_9798.html
Survey of Construction (SOC) Website, https://www.census.gov/econ/overview/co0400.html
