Annual Integrated Economic Survey Pilot: Phase II
Preliminary Findings and Recommendations
Melissa A. Cidade, EMD
Heidi St.Onge, ADEP
June 27, 2023
The Census Bureau has reviewed this data product for unauthorized disclosure of confidential information
and has approved the disclosure avoidance practices applied (Approval ID: CBDRB-FY23-ESMD001-013).
1
Say hi and thank Heidi.
Today we are rolling out the findings and recommendations from Phase II of the AIES
Pilot. There’s a lot to cover, so please hold your questions until the end of my prepared
remarks, and then we will have time for further conversation. Let’s jump right in…
AIES Pilot Overview
Phase I: “The 78” Pilot 2022
• Goal: Understand response processes and further refine the instrument
• Qualtrics instrument, 78 companies
Phase II: Response Spreadsheet Pilot 2023
• Goal: Induce independent response
• Response spreadsheet, about 900 companies
Phase III: Dress Rehearsal
• Goal: Troubleshooting and infrastructure building
• Centurion instrument, about 8,000 companies
2
Today we are talking about the findings and recommendations for Phase II of the AIES
Pilot. Phase I was last year and included 78 total companies. Phase III, also known as the
“Dress Rehearsal,” is scheduled for late summer and will include about 8,000 companies.
Today, we are only looking at Phase II of the Pilot, which we conducted in the early
spring of this year.
AIES Pilot Phase II Testing Agenda
From Phase I
• Survey Structure
• Response spreadsheet
• Burden
• Communications materials
Other AIES teams
• Early test data
• Sampling procedures
For EID
• Contact information for establishments
For ESMD
• Readiness for returning to in-person interviewing
3
The Phase II Pilot is specifically designed to answer questions left from Phase I of the
Pilot. But we also used the Pilot to test other emerging aspects of AIES like providing
test data and refining sampling procedures. We also had two additional research
questions embedded in the design: one for EID looking at establishment-level contact
information, and one for ESMD looking at readiness for return to in-person
interviewing. The presentation today will only be covering the questions left over from
Phase I.
Phase I Finding → Phase II Goal
• Most companies responded by the page-by-page method at the company level, and most companies responded by uploadable spreadsheet at the establishment and industry levels. → Get feedback on the new structure – 3-step response process
• Summing to the total works for some – but not all – companies, and that sum needs to be included as a check when asking for subcompany data. Duplication of content caused by multiple response units in one survey. Respondents want content organized by topic, and a direct and clear submission process. Respondents like respond-by-spreadsheet, but it must take a holistic approach to the company. → Test key elements of the spreadsheet design: holistic unit listing, units summing to reduce burden, color coding content
• Reports on the amount of time to complete relative to the current annual surveys are mixed. → Gain additional information about response burden
• We will continue to have unit errors in the integrated survey, but we can mitigate with flexibility and clear communication. → Develop respondent communications
This table has the findings from the Phase I Pilot, and the Phase II goal that comes from
the finding. I’m not going to rehash the findings from Phase I. Rather, we included this
table to demonstrate how the Phase II Pilot research goals were developed coming out
of the Phase I findings. The first goal was to test a survey structure where respondents
provide data at the company level page-by-page, and at the establishment and industry
level by spreadsheet. Unfortunately, we could not test this structure in the pilot, but we
look forward to upcoming opportunities to test this new structure in the future.
Instead, we turn our attention to the remaining three topics for today. The first is
testing the performance of the response spreadsheet. The second is to learn more
about burden. The third is to develop respondent communications.
4
Research Modalities
Response spreadsheet survey (N = 318)
Response Analysis Survey (RAS) (N = 105)
Debriefing interviews (N = 28)
Contact from the field (N = 227 emails, 79
inbound calls)
5
On screen now are the research modalities that we used for Phase II. This includes a
response spreadsheet survey with all of the integrated content, a Response Analysis
Survey of respondents focused on burden, debriefing interviews, and cataloguing all
respondent interactions. These are the same methods we used in Phase I of the Pilot,
too.
You’ll see results from these efforts all through this presentation.
Instrument
Respondent portal
• ID and password
• FAQs and instructions
• Download spreadsheet
• Upload spreadsheet
Bespoke response spreadsheet
• Tabs: Overview, Company, Survey, Add Locations, Products*, Instructions
• Customized listings
6
Because of timing and resource constraints, the instrument for the Pilot was a response
spreadsheet. Respondents logged into a basic portal using an ID and password that we
sent by mail and email. There were FAQs and instructions they could access, but
mostly the portal was for downloading the response spreadsheet and then uploading it
back to us when they were done. Each company had a customized ‘bespoke’ response
spreadsheet that included an overview tab, a tab for reporting at the company level,
one for the establishment and industry level, a tab for adding additional locations, a tab
for products, but only if it is a manufacturing company, and a tab with additional
question by question instructions. These spreadsheets featured customized
establishment and industry listings.
Pilot Phase II Respondents

Response Status as of 04/17/2023
• Uploaded a response spreadsheet: 318 (36%)
• Has not uploaded a response spreadsheet: 572 (64%)

Of uploaded spreadsheets as of 04/17/2023 (N = 318)
                                            N    Percent
Single Unit                                108    34.0%
Multi Unit                                 210    66.0
SU not supported                           108    34.0
MU not supported                            85    26.7
FSAM or EC AM                               29     9.1
Not FSAM or EC AM, but still supported      96    30.1
Manufacturing                               25     7.9
Non-manufacturing                          293    92.1
7
In total, we had 890 units that received the full pilot research protocol. We launched
on February 22, 2023. We are still receiving pilot responses, but the cut-off date for this
presentation is April 17, 2023. As of that date, a respondent had uploaded an Excel file
for 318 units in our study, representing 36 percent of those who remained in the study.
Of those that uploaded a file, 34 percent are single units and 66 percent are multi units.
In terms of support status, 39 percent of uploaded spreadsheets came from those
companies assigned response support, either as an EC or full service account manager
or as other response support. And, 25 of the uploaded spreadsheets came from
companies we flagged as having at least one establishment classified in a six digit NAICS
code in the manufacturing sector.
I just want to pause here and note that while we used a cut-off date of April 17 for this
presentation, we have continued to collect data after that date. In fact, as of August 11
– this past Friday – we had received 503 uploaded spreadsheets, representing a 58
percent response rate. However, for the remainder of this presentation, we will
consider only the 318 response spreadsheets uploaded on or before April 17.
Methodological Limitations

Instrument Limitations
• No response portal
• Limited content checks; easily circumnavigated
• Approximation of production

Analytic Limitations
• Data frozen as of April 17, 2023
• No processing systems in place
• Instrument performance focus

Field Limitations
• Crowded survey landscape – ran concurrent with EC and other surveys
8
Let me pause here and lay out some of the methodological limitations to consider as
we move toward findings and recommendations.
We note some instrument limitations that impact our findings. The first is that the pilot
was fielded without using the typical respondent portal – that meant that functionality
found in that portal, like secure messaging, extension requests, and delegation, was
not available to pilot respondents. We have no way of knowing how that may have
impacted response patterns and behaviors. Similarly, while the response spreadsheet
had some content checks built in, these checks were easily circumnavigated by
respondents. We have evidence of respondents ‘unlocking’ the spreadsheet and
entering data wherever they pleased. This leads to the most important instrument
limitation – the response spreadsheet is an approximation of what the production
instrument could look like – because of that, we have no way of knowing the
performance of the actual production instrument. Instead, the pilot is designed to test
features that might or might not support response; these features can then be included
where appropriate and feasible into the final instrument design.
We also acknowledge that we have some analytical limitations. Note that the data for
this presentation were frozen as of April 17, 2023. Since then, responses have
continued to come in – those later responders are not included in these analyses, and
may represent types of companies that are systematically different than those that met
the research deadline. At the same time, it is important to remember that the Pilot
Phase II instrument was a spreadsheet that respondents downloaded, filled out, and
then uploaded back into a simple portal. As such, the response data for this
presentation have not gone through the usual processing systems that we use to clean
our data in production. In fact, as we found anomalies in submission – like multiple
submissions for a single company, or file naming convention changes in submissions –
we had a team handle these data on a case-by-case basis, independent of any of our
internal systems. We also want to note that the analyses presented today are focused
on instrument performance. This limits the scope of what we will talk about today.
Finally, it is important to note that the AIES Pilot Phase II was run concurrently with the
2022 Economic Census, our flagship collection. We cannot overstate the impact that
the Economic Census had on the survey landscape – respondents mentioned the EC in
all aspects of the data we collected.
Test key elements of the
spreadsheet design
Holistic unit listing
Units summing to reduce burden
Color-coded content
9
Ok, now we will turn our attention to some of the new elements of the response
spreadsheet to see how these changes performed. Specifically, the Pilot used a holistic unit
listing – all establishments in one list regardless of classification. We also included
automatic summations throughout the response spreadsheet to minimize burden.
And, we built in that color-coding to denote applicability and optionality at the unit
level within the spreadsheet.
Holistic Listing Approach

Percentage of Completed Key Pilot Variables* Across All Modules by Phase of Collection
Percentage complete    Phase I (N = 78)    Phase II (N = 318)
None                        20.5%                 3.4%
1 – 24.9                     5.1                  0.3
25 – 49.9                   12.8                  3.1
50 – 74.9                   19.2                 29.5
75 – 99.9                   35.9                 16.0
All                          6.4                 47.6
*Total payroll, total employment, first quarter employment, and total revenue.

• “I think overall, when I was done, I said yeah, this is going to work. It's all in one place.” – Debrief
• “The fewer times you have to gather this information, the better.” – Debrief
10
First up, let’s talk about the holistic listing approach in the pilot. Remember that our
response spreadsheet represents one instrument collecting data at the company,
establishment, and location level regardless of classification, and that this is a major
change from the first phase.
We have some evidence that the holistic unit listing is improving response. On screen
now is a comparison of the proportion of reported data for Phases I and II. To get these
categories, I looked at four key variables – total payroll, total employment, first quarter
employment, and total revenue – reported at the company level and at the
establishment level. These are the four key variables we looked at in Phase I. If a
respondent provided data for all four of these variables at both the company level and
for all listed establishments, that company is said to have reported 100 percent, or all,
of these variables. In Phase 1, 6.4 percent of all responding companies gave us data for
all establishments and the company. For Phase II, when we introduced the holistic unit
listing into one instrument, we saw that figure jump to 47.6 percent of respondents
providing data for all four variables for the company and across all listed
establishments.
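To make the metric concrete, here is a minimal sketch of how a completion rate like this could be tabulated, assuming responses are arranged as one row per reporting unit (the company-level record plus each listed establishment); the column names and layout are hypothetical illustrations, not the Pilot's actual processing code.

```python
import pandas as pd

# Hypothetical tidy layout: one row per reporting unit (the company-level
# record plus each listed establishment) for each responding company.
KEY_VARS = ["total_payroll", "total_employment", "q1_employment", "total_revenue"]

def percent_complete(units: pd.DataFrame) -> float:
    """Share (0-100) of key-variable cells reported across a company's units."""
    cells = units[KEY_VARS]
    return 100 * cells.notna().sum().sum() / cells.size

def completeness_category(pct: float) -> str:
    """Bin a company's completion rate into the categories used on the slide."""
    if pct == 0:
        return "None"
    if pct == 100:
        return "All"
    for upper, label in [(25, "1 - 24.9"), (50, "25 - 49.9"),
                         (75, "50 - 74.9"), (100, "75 - 99.9")]:
        if pct < upper:
            return label

# Toy example: company 1 reports everything; company 2 leaves one cell blank.
responses = pd.DataFrame({
    "company_id":       [1,   1,   1,   2],
    "total_payroll":    [100, 40,  60,  500],
    "total_employment": [10,  4,   6,   None],
    "q1_employment":    [10,  4,   6,   50],
    "total_revenue":    [900, 300, 600, 2000],
})
categories = (responses.groupby("company_id")
                        .apply(percent_complete)
                        .map(completeness_category))
print(categories.value_counts(normalize=True).mul(100).round(1))
```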
Respondents indicated that having everything in one place was a benefit of the design;
in the first quote, the respondent notes that “it’s all in one place.” In the second, the
respondent notes that “the fewer times you have to gather this information, the
better”, again endorsing the streamlining of this instrument.
Finding #1: Holistic listing approach requires
instrument flexibility.
• Units don’t always align:
“[I’m] trying to complete [the] survey by location, but as we have a
single shared corporate office, a lot of our expenses (computer
services, advertising, professional fees, software, data processing,
G&A, etc.) are not allocated on a location level but
recorded as a corporate-wide expense. Survey is asking for
expense per location which is not possible...” – Email
• Lists can be overwhelming:
“There was a ton of duplicates [in our establishment list], 350 lines
for 50 locations. Home office locations, we still filled out because
they’re legit from HR standpoint. Manufacturing questions didn't
apply for those.” - Debrief
11
We note, however, that units don’t always align as expected – in the first quote on
screen, the respondent is pointing out that while we are asking questions by location or
by industry, some of the data we are requesting “are not allocated on a location level
but recorded as a corporate-wide expense,” suggesting that respondents simply cannot
provide the data at the granularity we are requesting.
At the same time, we note that the lists of establishments and industries can be
overwhelming to respondents. Note the second quote where the respondent says that
her response spreadsheet had “350 lines for 50 locations” – suggesting some
duplication and misclassification – and that the ‘manufacturing questions’ don’t apply
to the ‘home office locations’, suggesting that some content was not applicable for
some locations.
This brings us to our first finding: the holistic unit listing supports response, but only if
the instrument is flexible enough to support updates to the establishment and industry
listings. If we want to move forward with this design, we will need to continue to
explore ways of increasing flexibility with unit listings.
[Chart: Sum of Establishments Compared to Company-level Reporting for Four Variables, Pilot Phase I (N = 78) and Phase II (N = 318). Bars show the percentage of companies by value match type – missing, exact match, within 10 percent, or more than 10 percent difference – for total employment, annual payroll, first quarter payroll, and revenue, by phase.]
12
One of the major findings from Phase I was that by asking the same questions at
multiple levels, we were duplicating effort for respondents and increasing burden. For
the Phase II pilot, we built in autosumming capabilities to try to not only lessen
respondent burden, but also increase the agreement in response across these various
units.
On screen now is the difference that methodology has made for each of the four key
variables on the Pilot. For each one, we compared the sum of the listed and added
establishments to the value reported at the company level, and then categorized that
comparison into four categories: exact match means that the sum of establishments
matches the value reported at the company level. Within 10 percent means that the
sum of establishments and the company-level reported value are within plus or minus ten
percent of each other, and more than ten percent difference means that the sum of
establishments and the company report differ by more than ten percent. Missing means
that one or more data points is missing – this could be at the
Looking across all four variables, (click) the proportion of ‘missing’ cases has declined,
suggesting that in the Phase II Pilot respondents reported their data more completely
than in Phase I. Likewise, across the two pilots, (click) the proportion of companies with
exact matches has increased across all four of the key variables – the orange in the bar
chart. Initial review of the data suggests that including the autosum feature has
increased the proportion of data reported AND has increased the agreement in these
data points across variables.
Finding #2: Autosumming supports response
for multi-unit companies.
• Some used it:
“It was helpful, like for payroll by site, I saw that it matched” - Debrief
• Not everyone saw it:
“No, I did not [see the totals] until now. That definitely would be helpful for anyone
who is doing it, you could see, [and ask yourself] does it make sense?” - Debrief
• Exception: Single Units:
“I will be completely honest, the survey tab was pulling answers from the company tab
since we only have one [location]. It was just autosumming. If there was something I
was supposed to do, I didn't do anything with autosum.” - Debrief
13
In fact, the qualitative data support the idea in finding number two: the summing
functionality supports response for our multi-unit companies.
We present this finding with an important caveat: not everyone completely understood
or saw the autosumming features. The first quote – that the autosumming was
“helpful, like for payroll by site, I saw that it matched” – is an example of a respondent
positively reacting to a feature that she is indicating that she used during response. In
the second quote it’s a little different: “No, I did not see the totals until now. That
definitely would be helpful for anyone who is doing it, you could see, and ask yourself
does it make sense?” Here, the respondent is still positively reacting to the feature, but
he admits that he did not use this feature to aid in response because he did not see it
or interact with it. We can say that for those who use it, they like it, and for those that
don’t, it doesn’t hinder their response and they are open to the functionality.
I do want to take a moment to remind you that the Phase II Pilot was the first time we
put AIES content in front of single units, and one thing we wondered was how would
this instrument – designed with medium sized multi unit companies in mind – perform
for our single unit companies. The quote on screen is pretty representative of what we
heard in the field: “I will be completely honest, the survey tab was pulling answers
from the company tab since we only have one location. It was just autosumming. If
there was something I was supposed to do, I didn’t do anything with autosum.” The
respondent acknowledges the interaction between the tabs in the workbook, and the
autosum functionality, and even that these features are not necessarily useful for his
company. These features do not hinder his response.
Providing optional response
177 multi-unit, non-manufacturing companies reported optional establishment-level data for one or more variables.
14
One of the features we introduced in the Pilot was optionality at the unit level. We
heard in Phase I that some respondents wanted to report by establishment, and others
wanted to report by industry. So, (click) we listed these questions as yellow – optional –
at the establishment level and white – required – at the industry level.
(click) We have only just begun to dig into these data, but so far, we can say that 177
multi-unit, non-manufacturing companies reported optional establishment-level data
for one or more variables. These are data that would usually have been collected at the
industry level, but were reported at the more granular establishment level because of
the optional response color coding.
Finding #3: Optionality by unit supports
response but needs additional consideration.
• Color-coding made sense:
“I understood the color coding and did refer back to the color tab once
or twice.” – Debrief
• Optional by unit vs. optional by question:
“For the extra, I didn't need to fill out in terms of [the yellow cells]. So I didn't
touch optional too much.” – Debrief
“Yellow meant "optional." You don’t need the data, but I should include if I can
find it.” - Debrief
15
This brings us to our third finding – providing flexibility in the unit of response is helpful
for respondents, but we need to revisit how to communicate this to them. Note the
first quote, where the respondent says that he “understood the color coding and did
refer back to the color tab once or twice” while completing the survey. Many
respondents told us that the color coding made sense and helped them to provide the
data, and we have evidence that respondents are using the feature.
At the same time, quote two brings up an important nuance that is being lost on
respondents: optional at the unit level vs. optional at the question level. Note in the
first quote under the second heading, “for the extra, I didn’t need to fill out in terms of
[the yellow cells]. So I didn’t touch optional too much.” She’s indicating that the yellow
signaled to her that the question was optional, and so she did not provide any data at all for
those questions. This is echoed in the second quote where the respondent says that
“yellow means optional. You don’t need the data but I should include it if I find it.”
Again, the respondent interpreted the yellow to indicate optionality at the question
level, not noticing the requirement to report at the industry level.
Recommendations: Test key elements of the spreadsheet design

Phase II Goal: Test key elements of the spreadsheet design
Finding → Recommendation
• Holistic unit listing – Holistic listing approach requires instrument flexibility. → Include functionality to clean up establishment lists; consider functionality to orient respondents within the spreadsheet
• Units autosumming – Autosumming supports response for multi-unit companies. → Retain and highlight this functionality as appropriate
• Color coding – Optionality by unit supports response but needs additional consideration. → Continue to explore ways of communicating optionality at the unit level.
16
Here we are at our first set of recommendations. We wanted to test three elements of
the Phase II response spreadsheet.
Looking at the holistic unit listing, we learned that the approach is working for
companies, but that the implementation requires instrument flexibility. We recommend
that, moving into the next round of collection, the instrument include the
ability for respondents to clean up their establishment lists. This will give them the
opportunity to align the survey to the reality of their company. We also recommend
some functionality for respondents to identify which establishment they are entering
data for, whether that be color, highlighting, bolding, or some other visual means of
helping respondents orient themselves to the list.
We also tested the autosumming across units, and found that it is a useful tool for
multi-unit companies that use it, but that not all respondents saw it or understood it.
We recommend retaining autosumming where appropriate, and developing additional
support materials to draw respondents’ attention to the functionality. This could be
FAQs, walk throughs, splash screens, and others, or it could include other ways of
orienting respondents to the units within the survey.
Finally, we looked at color-coding and optionality by unit. We noted that some
respondents included data at the establishment level that would otherwise have been
reported at the industry level, but that others saw the optional unit response as
meaning optional at the question level and did not provide any response for the
question at any unit of collection. We recommend retaining the optional respond by
establishment, but exploring better ways of communicating optionality by unit not
question.
Gain additional information about response
burden
17
Alright, we have explored changes to the instrument design between Phase I and Phase
II. One of the lingering needs coming out of Phase I was a better sense of response burden.
Let’s turn our attention now to what we learned about burden…
Survey Structure: RAS

Overall, which of the following AIES questionnaire sections was most challenging to complete?
                                                         N    Percent
Establishment-level data                                35    40.7%
Company-level data                                      22    25.6
Industry-level data                                     20    23.3
Additional establishments data                           5     5.8
Establishment-level data for products (if applicable)    4     4.6
Total                                                   86   100

Which of the following, if any, were challenges to completing the AIES questionnaire? Select all that apply. (N = 105)
• Had to collect information from more than one database or other source: 61.9%
• Already had other surveys to complete at the same time: 49.5%
• Had to add, allocate, or otherwise manipulate data to fit questions: 46.7%
• Had to wait to rely on others within my company for the requested data: 42.9%
• Too many questions: 38.1%
• Unclear or inadequately defined terms: 26.7%
• Some other challenge: 24.8%
• The online survey was difficult to use: 21.0%
• Questions were too complicated: 10.5%
18
On the Response Analysis Survey, we asked respondents to identify the MOST
challenging section of the questionnaire. We note here that two in five respondents
said the establishment-level data are the most challenging to complete – 40.7 percent.
We also asked respondents to identify challenges in completing the AIES questionnaire.
Of the top four chosen responses, three of them related to accessing or manipulating
the appropriate data to fit the survey request, including collecting information from
more than one database or source; adding, allocating, or otherwise manipulating data
to fit questions; and waiting on others within the company to report the requested
data. This speaks to the varied content within the AIES instrument being housed across
different data infrastructure at our companies, as well as instances where the
specifications of our questions do not match the ways that companies keep their data.
We note that the second most commonly cited challenge to completing AIES is that
respondents “already had other surveys to complete at the same time,” suggesting
once again that the pilot’s concurrence with the Economic Census created a crowded
survey landscape that may have overwhelmed respondents.
Finding #4: Content continues to be a
challenge.
• Too many questions, too many topics:
“Some of this we don't even have. Purchased electricity, for example.
There’s a lot of detail here.” – Debrief
• Ambiguous questions and instructions:
“We have a person tracks and does all the work for cap ex, some of the
questions weren't straightforward and had to do some research.” – Debrief
• Non-numeric data and grids:
“When it asks for foreign ownership percent I cannot put 0. Our company has
no foreign ownership, but the options start with at least 10 percent. How do I
answer this question correctly?” - Email
19
All of this evidence brings us to our fourth finding: Content continues to be a challenge
for the AIES, and this is supported by our debriefing interviews. Some said there are
just too many questions over all; others said that it was the types of questions over a
bunch of different topics that posed a challenge to response. Others suggested that
some questions and instructions are ambiguous and required additional attention. We
also heard that questions with non-numeric responses did not perform well in the grid
format – the third quote is a respondent struggling to answer both a non-numeric
question and a question with a skip pattern in the grid format.
Perceived Burden

Compared to annual surveys you have answered in previous years, how easy or difficult did you find completing the AIES questionnaire?
                                 N    Percent
Extremely difficult             12    12.6%
Somewhat difficult              33    34.7
Neither easy nor difficult      40    42.1
Somewhat easy                    8     8.4
Extremely easy                   2     2.1

Compared to annual surveys you have answered in previous years, how much time did it take to complete the AIES questionnaire?
                                 N    Percent
Less time                        9     9.5%
More time                       48    50.5
About the same amount of time   38    40.0
20
Remember that burden can be broken into two categories, actual and perceived.
Actual burden is the time and resources it takes to complete the survey. Perceived
burden is the respondents’ affect or attitude toward the amount of time and resources
it will take them to complete the survey. There is literature that suggests that BOTH
actual and perceived burden are important factors in the decision to respond. There is
also literature that suggests that respondents are themselves TERRIBLE at reporting
both actual and perceived burden. But, for the purposes of this study, all we really
have is respondent-reported actual and perceived burden; it will have to serve as
the best available measure of burden for the Pilot.
On screen now are response distributions to two questions on the RAS. First, we asked
respondents relative to legacy annual surveys, how easy or difficult completing the pilot
ended up being. Here we see that two in five RAS respondents – 42 percent – said that
the pilot was neither easy nor difficult. Similarly, about the same proportion – 40
percent – said that the pilot took about the same amount of time to complete as the
current annual surveys. However, we also note that 47 percent of respondents called
the pilot “somewhat” or “extremely” difficult relative to the current annual surveys,
and that half of RAS respondents (50.5 percent) said that the Pilot took more time to
complete than the current annuals. This is similar to what we saw in the first phase of
the pilot, and frustratingly inconclusive. We’ll have to turn our attention to actual
burden to see if we can identify patterns more clearly.
Actual Burden

Response Analysis Survey: Approximately how long did it take to complete the AIES questionnaire for this company, including time spent reviewing instructions and gathering the necessary data?
                      N    Percent
4 hours or less      51    58.6%
5 to 10 hours        15    17.2
11 to 30 hours       10    11.5
31 or more hours     11    12.6

AIES Pilot Instrument: Approximately how long did it take to complete this survey?
                      N    Percent
4 hours or less     125    59.5%
5 to 10 hours        40    19.1
11 to 30 hours       29    13.8
31 or more hours     16     7.6

Did you require assistance from other people or departments to collect relevant information or complete answers to questions in any of the AIES questionnaire?
        N    Percent
Yes    65    67.7
No     31    32.3

Measures of Central Tendency by Source of Response
            RAS           Response spreadsheet
Mean        14.2 hours    14.75 hours
Minimum     30 mins       20 mins
Maximum     200 hours     480 hours
N           87            210
21
For the Phase II Pilot, we have two sources of estimates of actual burden. First, we
asked for an estimate of actual burden on the RAS. But, we also included a column in
the Company tab that asked for an estimate of actual burden. What we find is that the
estimates of actual burden are fairly similar across the two collections – almost six in
ten respondents estimate their burden at four hours or less on both the RAS and the
response spreadsheet. On average, respondents reported taking a little more than 14
hours to complete the survey.
Additionally, about 2 in 3 respondents to the RAS indicated that they required
assistance from other people or departments to collect relevant information or
complete answers to questions in any of the AIES questionnaire.
Finding #5: Changing the survey is a source
of burden.
• Steep learning curve:
“Total man hours 250-300…I don't know compared to how many we would
normally spend throughout the year, it's probably more because of how much work
I've put into it. Once it's all up and running it'll save 50 hours per year.”
– Debrief
• Could pay off:
“Since I was in the pilot last year, I knew how to do it…I get the notice
we have another due, pull up the current year's worksheet, pull up last year, go off my
notes question by question. I checks notes, like overall revenue figures, chart of
accounts. Here's the info you pull in, then I change my dates. Then making sure the
questions are similar year to year.” - Debrief
22
All of this information on burden brings us to finding number five: Changing the survey
is a source of burden.
In Phase I, we noted that respondent-reported estimates of burden were mixed – split
down the middle between the same or less, and more. In Phase II, we replicated that
finding – half say it’s worse, half say it’s the same or better.
What is causing this murk?
We suspect it is the added burden introduced by just changing the survey.
Respondents are struggling to estimate how much time it took to pull these data
because they are not just doing the typical response process, they are also learning
how to navigate a new instrument, which is adding to the amount of time and effort
necessary to pull the data. Note this first quote, the respondent talks about how much
work it was to pull the data this year – figuring 250 to 300 “man hours.” But note how
he ends the quote: “Once it’s up and running it’ll save 50 hours per year” responding
to surveys. Here, he is acknowledging that he has to build new infrastructure to report
to the survey.
Because this is the second iteration of the pilot, we can see just how that works – the
second quote is from a company that participated in Phase I, and she notes that “since I
was in the pilot last year I knew how to do it” for this year. Even with the instrument
changes between Phase I and Phase II, she had already started to build out new
response infrastructure and was relying on that update to provide response to the
Phase II pilot.
Recommendations: Gain additional information about response burden

Phase II Goal: Gain additional information about response burden
Finding → Recommendation
• Content continues to be a challenge. → Consider cutting content; consider organizing content by topic; explore other ways of collecting non-numeric responses.
• Changing the survey is a source of burden. → Prime respondents for the change; provide alternate response methods where appropriate.
23
Ok, so, we wanted to know more about burden. We have two findings from the Pilot.
First, the Phase II Pilot provides additional evidence that there is just too much content
on the AIES, and that some content doesn’t perform well in the rows and column layout
we are using. We recommend a two-pronged solution: the AIES should both consider
cutting back on content, either by eliminating questions altogether or by experimenting
with question rotation or random assignment, and the AIES should consider organizing
the questions by content so that blocks of questions that relate to each other are
situated together. We should also explore other ways of collecting non-numeric
responses that underperformed in the grid format.
We also learned that just changing the survey will add burden to respondents, and will
make it difficult to estimate burden until new response processes have been
developed.
Because of this, we have two recommendations. First, prime respondents for the
upcoming change. Let companies know that this change is coming, give them
opportunities to prepare to the extent that they are capable and interested, and offer
support in building this new response infrastructure necessary to complete the AIES.
We also recommend providing alternate response methods where appropriate. This
could include data dumps and exports as response, system-to-system response, or
augmentation or replacement of data with third party and administrative data.
Providing flexible response options and exploring ways to make response as easy as
possible will cut burden.
Develop respondent communications
24
And, we now find ourselves at the third and final goal of the Phase II Pilot – further
development of respondent communications.
Finding #6: Respondents are using available
response support materials.
• Instrument-related:
“Printed instructions were fine, and I really do like the instruction feature, that you
could go to the specific instruction. I don't think we've had that before. Click to return
to where you were is a good feature.” – Debrief
• Response-support staff:
“I had a very positive response from [name] in EWD. She was very responsive, helpful, and knowledgeable.” – Debrief
• General inbox:
“I emailed the general [inbox], and then Melissa replied. I kind of figured I'd email this help line and
then never get a response…when I needed help I just [used the] generic one.” - Debrief
25
Finding 6 is that respondents are using available response support materials, and we
find this across the array of materials we’ve provided.
First, respondents said that they used the instrument-related response support
materials like the FAQs, the cover page to the survey, and the within-instrument help
links. The top quote relates to those links: “I really do like the instruction feature that
you could go to the specific instruction” and the “click to return you to where you were
is a good feature.”
They also noted that response support staff – whether as an EC or FSAM AM or other
support – was helpful. One debriefing participant described her response support as
“responsive, helpful, and knowledgeable.”
I do want to note that throughout the pilot field period, we occasionally received
questions or comments from response supported companies to the general pilot inbox.
The third quote on screen highlights this – the respondent notes that “when I needed
help I just used the generic” email even though she had an assigned staff for support.
Finding #7: Contact strategies need additional refining.

Overall strategy:
• Email with mail backup
• Frequency and type
• Other methods: QR Codes, videos, online tools

Messaging that resonated:
• “It mentions this is how you measure economic performance in the US, and the GDP.”
• “Because it says it is required by law… It sounds important.”

Messaging that did not resonate:
• “If you had to do one of these surveys now you have to do all of them.”
• “Looks like it's going to take a long ass time to complete this.”
26
We asked respondents specifically about our contact strategies, like letters and emails,
and finding number 7 is that our contact strategies need additional refining.
Across the letters and emails, we found that respondents generally prefer email as the
primary contact with paper letter as a back up. We also note that respondents find the
frequency and types of emails and letters that we send out to be manageable. We
asked about additional ways of communicating with respondents, including QR Codes,
videos, and online tools to support response, and respondents had varying levels of
interest in using these other communication tools.
When we asked about the content of our letters and emails, we noted that some
content resonated with respondents, and some did not. The importance of the data
collection, the ways that we use the data, and its centrality to economic indicators is a
message that respondents understand and is compelling to them – the first quote notes
that the survey is “how you measure economic performance in the US and the GDP.”
The mandatory nature of the survey is another resonant message; note the second
quote: “because it says it is required by law…it sounds important.”
Interestingly, the letters that we tested included a list of the in-scope legacy surveys
that were being integrated in the AIES, and this messaging did not resonate with
respondents. One respondent correctly identified that the list communicates that “if
you had to do one of these surveys now you have to do all of them,” but this
messaging discouraged response. Another summed it up succinctly: “Looks like it's
going to take a long ass time to complete this.”
Recommendations: Develop respondent communications

Phase II Goal: Develop respondent communications
Finding → Recommendation
• Respondents are using available response support materials. → Continue to develop response support materials; consider response support training.
• Contact strategies need additional refining. → Update letters to retain resonant messaging and drop discouraging messaging; consider additional research into communications materials.
27
And, we are on our final set of recommendations! Our goal was to develop respondent
communications. We had two findings coming out of this – the first is that respondents
are using available response support materials. Because of this, we recommend the
development of additional response support materials like FAQs, help screens, videos,
and other support materials. We also recommend additional response support training
for staff at the Census Bureau that will be responding to inquiries from the field. Timely
and knowledgeable response from Census Bureau staff supported the response process
and encouraged reporting.
Our second finding is that our contact strategies need additional refining. We noted
that some of the messaging in the letters and emails is not resonating with
respondents. We suggest that letters for future collections emphasize messaging that
motivates and drop messaging that discourages response. We also suggest additional
investigations into communications materials that support response.
Phase II Goal → Finding → Recommendation

Get feedback on the new structure
• Unable to implement for AIES Pilot Phase II. → Test in AIES Dress Rehearsal collection.

Test key elements of the spreadsheet design
• Holistic unit listing – Holistic listing approach requires instrument flexibility. → Include functionality to clean up establishment lists; consider functionality to orient respondents within the spreadsheet
• Units autosumming – Autosumming supports response for multi-unit companies. → Retain and highlight this functionality as appropriate
• Color coding – Optionality by unit supports response but needs additional consideration. → Continue to explore ways of communicating optionality at the unit level.

Gain additional information about response burden
• Content continues to be a challenge. → Consider cutting content; consider organizing content by topic; explore other ways of collecting non-numeric responses.
• Changing the survey is a source of burden. → Prime respondents for the change; provide alternate response methods where appropriate.

Develop respondent communications
• Respondents are using available response support materials. → Continue to develop response support materials; consider response support training.
• Contact strategies need additional refining. → Update letters to retain resonant messaging and drop discouraging messaging; consider additional research into communications materials.
And, here we have it! These are the findings and recommendations of the Pilot. In the
coming months, we have some additional research to continue to refine the survey in
preparation for the 2024 Production launch. We learned so much about what is
working and about where we have opportunities for improvement. I want to end my
prepared remarks with a quick refresher on what is ahead as we move the AIES from
research orientation to production orientation.
28
[Matrix: each recommendation mapped to where it will be addressed next – usability testing, the Dress Rehearsal, and/or future research.]
• Test new survey structure in AIES Dress Rehearsal collection.
• Include functionality to clean up establishment lists
• Consider functionality to orient respondents within the spreadsheet
• Retain and highlight autosumming functionality as appropriate
• Continue to explore ways of communicating optionality at the unit level.
• Consider cutting content
• Consider organizing content by topic
• Explore other ways of collecting non-numeric responses.
• Prime respondents for the change
• Provide alternate response methods where appropriate
• Continue to develop response support materials.
• Consider response support training.
• Update letters to retain resonant messaging and drop discouraging messaging.
• Consider additional research into communications materials.
29
In fact, we will address many of these recommendations in the coming months. We will
test the new survey structure in the Dress Rehearsal. We will test functionalities
including an online spreadsheet design in both usability testing and the Dress
Rehearsal, and into Production-based research. We will look at our communications
materials throughout the Dress Rehearsal and into the first year. We will need to
continue to grapple with the breadth and scope of data collected in this one survey,
and those conversations will continue into the first years of production of AIES.
Key Takeaways:
• Headed in the right direction!
  • Increased response rate
  • Improvements in reporting
  • Understand sources of burden
  • Piloting is a proven method
• Additional areas of investigation
  • Utilize Centurion and other systems as they come onboard
  • Test survey structure
  • Communications review
  • Test and refine help materials
30
OK! Wow, we’ve covered a lot of ground today! Your head may be spinning, so I
wanted to end our time today with two key takeaways from the Pilot Phase II.
First: While there are spots that need additional investigation, generally, I want you to
walk away from this briefing today understanding that all of our data suggests that we
are headed in the right direction. We saw improvements in the quantity and quality of
reported data in Phase II – not only are we getting more data with this design, we’re
getting better data with this design. We also have a better understanding of sources of
burden – those we can control, like content and design, and those we cannot, like the
added burden of simply changing the survey. We also have another instance of using
Piloting as a proven method of investigation – independent of any of our Census
infrastructure, and in just a few months' time, we were able to get mountains of data
back on our design changes from more than 300 companies, and that is pretty
impressive!
All that said, we do want to recognize that there are a few considerations we should
focus on in the remainder of 2023 in preparation for the 2024 launch. Our next rounds
of testing will be using newly redesigned Centurion and other Census Bureau
infrastructure for collection. This means that not only will the respondents be learning
how to navigate the new instrument, but we will also be refining our processing and
reporting processes. We will be testing – for the first time – the new survey structure
that came from our Phase I findings, both through the Dress Rehearsal instrument and
usability testing. We are planning continued interviewing and refinement of our
communications strategies to encourage response to the AIES in production. And, we
will use feedback from the field from both usability testing and the Dress Rehearsal to
continue to refine our help and other response support materials to make response as
easy as possible.
We have learned so much during this research, and we are so close to production, so it
is vitally important that we stay focused through the next six months to rehearse and
refine our data collection procedures for AIES. I’m looking forward to our next round of
research in part because this round continued to be so fruitful and so promising.
Thank you!
• Melissa Cidade
• [email protected]
• Heidi St.Onge
• [email protected]
31
That concludes our prepared remarks, thank you so much for your time and attention.
If you have more specific questions, or if you want to talk about any of the pilot findings
further, please do not hesitate to reach out to either of us!