
2018 End-to-End Census Test Internet Self-Response

Operational Assessment
Study Plan




Internet Self-Response Integrated Project Team



Draft Pending Final Census Bureau Executive Review and Clearance.





October 11, 2018

Version 5.2












  1. Introduction

The 2018 End-to-End Census Test is an important opportunity for the Census Bureau to ensure an accurate count of the nation’s increasingly diverse and rapidly growing population. It is the first opportunity to apply much of what has been learned from census tests conducted throughout the decade in preparation for the nation’s once-a-decade population and housing census. The Address Canvassing portion of the 2018 End-to-End Census Test will be held in three locations: Pierce County, Washington; Providence County, Rhode Island; and the Bluefield-Beckley-Oak Hill, West Virginia, area. The remaining operations, including the self-response phase, will take place in Providence County, Rhode Island.

The 2018 End-to-End Census Test will be a dress rehearsal for most of the 2020 Census operations, procedures, systems, and field infrastructure to ensure there is proper integration and conformance with functional and nonfunctional requirements. The test also will produce prototypes of geographic and data products. Note that 2018 End-to-End Census Test results cannot be generalized to the entire United States.

This study plan documents how the Internet Self-Response (ISR) operation will be assessed, as guided by the questions to be answered. The objective of the 2018 End-to-End Census Test ISR operation is to develop communication and contact strategies that encourage use of the internet as the primary response mode through a sequence of invitation and reminder mailings. These mailings are referred to as the stratified self-response mail strategy and use historical response rates, internet access, and demographics to tailor the strategy and make responding as easy as possible.

In addition, the ISR operation works to increase high-quality responses by increasing the opportunity and flexibility with which a person can respond. The primary way this is done is by providing an internet application that respondents can use at any time, from nearly any location, to respond online. The application can be used across most devices and browsers and can be displayed in multiple languages. Respondents may also respond through the application without a unique identification code, referred to as a User ID.

Lastly, the ISR operation works to increase response to the census by working with other operations to provide the opportunity to respond using other collection modes. Namely, the ISR operation provides the application used by the Census Questionnaire Assistance (CQA) operation to collect responses from those who call and respond over the telephone, and it works with the Forms Printing and Distribution (FPD) operation to develop the stratified self-response mail strategy.



2. Background

In 2020, the U.S. Census Bureau is committed to using the internet as a primary response option, as well as increasing census awareness and promoting self-response in a cost-effective manner.

To encourage the majority of self-respondents to complete their census questionnaires online, the Census Bureau will use contact strategies for mailing materials. These contact strategies refer to all attempts by the Census Bureau to make direct contact with individual households by mail. Types of contact strategies include invitation letters, postcards, and questionnaires mailed to the households.

The Census Bureau will use an approach called "Internet First" (formerly known as "Internet Push"), in which the first mailing includes an invitation to respond to the census online. In areas with low internet coverage or connectivity, or other characteristics that make it less likely that respondents will complete the census questionnaire online, the Census Bureau will employ an "Internet Choice" contact strategy. In this approach, the first mailing includes both an invitation to complete the census online and a paper questionnaire. The Census Bureau anticipates that about 20 percent of the households in TEA 1¹ will receive the Internet Choice treatment, and the remainder will be Internet First. While all nonresponding households in the Internet First areas will eventually receive a paper questionnaire, households in Internet Choice areas will receive a paper questionnaire in the first mailing, and again in the fourth mailing if they have not yet responded.

Throughout the past decade, field tests have been conducted to evaluate a number of different contact strategies, including variations on the format, order, and timing of the mailings.

2010 Census Quality Survey (CQS)

Following the 2010 Census, the Census Bureau conducted the 2010 Census Quality Survey (CQS). The 2010 CQS was a census reinterview evaluation, using nearly identical content as the 2010 Census, with both a mailback and an Internet response component. The research was designed to be a first step in the 2020 Census Internet testing cycle. A sample of 2010 Census mail respondents was selected and assigned to one of the three CQS panels, each with a different contact strategy approach – Internet First, Internet Choice and Mail Only.

The final participation rates for the CQS are shown in Table 1. The participation rate is the number of nonblank responses received (by mail or online) divided by the number of households that received the survey materials. The rate excludes households from the calculation if no response was received and the survey mail was returned as undeliverable as addressed (UAA). As such, the participation rate is a better gauge of survey "participation" than the response rate (the number of responses received divided by the total mailout size), since it attempts to control for household vacancies.

It should be noted that due to an error in the specifications sent to the National Processing Center (NPC) prior to printing the mailings, the reminder postcards for the Mail Only panel and the Internet/Mail Choice panel were switched. As a result, a small number of Internet returns were received from the Mail Only panel, and the percentage of Internet returns for the Internet/Mail Choice panel was lower than anticipated. Those Internet respondents were removed from the data analyses.


The Mail Only panel had the highest overall participation rate (56.0 percent). Some Internet responses were received from this panel (2.4 percent) due to the reminder postcard issue, but these cases are removed from further analysis.


The Internet/Mail Choice panel was lower, with about 55.1 percent participation. The lower rate supports previous studies that have concluded that giving respondents a choice has a small negative effect on overall response. In addition, the Choice panel respondents overwhelmingly opted to respond by mail compared to Internet (50.5 percent and 4.6 percent, respectively).


The Internet First panel had the lowest participation rate (46.5 percent). However, nearly a quarter of the Internet First sample (and more than half of the respondents), 24.8 percent, responded online. The replacement mailing provided a substantial boost to the participation rate for the Internet First panel, as those households who preferred to respond by mail were now given the option (21.7 percent response by mail).


Table 1. 2010 CQS Final Participation Rates by Panel and CQS Response Modes

| Panel | Overall: Total | Overall: Internet | Overall: Mail | Initial Mailing: Internet | Initial Mailing: Mail | Reminder Postcard: Internet | Replacement Mailing: Internet | Replacement Mailing: Mail |
| Internet First | 46.5 (0.16) | 24.8 (0.13) | 21.7 (0.13) | 18.9 (0.12) | NA | 4.9 (0.07) | 1.0 (0.03) | 21.7 (0.13) |
| Internet/Mail Choice | 55.1 (0.13) | 4.6 (0.05) | 50.5 (0.13) | 3.7 (0.05) | 43.2 (0.13) | <0.1 (0.00) | 0.9 (0.02) | 7.3 (0.07) |
| Mail Only | 56.0 (0.35) | 2.4 (0.11) | 53.6 (0.35) | 0.1* (0.02) | 45.9 (0.35) | 2.3* (0.10) | NA | 7.7 (0.19) |

*Internet returns were erroneously received for the Mail Only panel due to a reminder postcard issue. NA=Not applicable. Source: 2010 Census Quality Survey Final Report, estimates are weighted with standard error in parentheses.

2012 National Census Test (NCT)

The 2012 NCT assessed the relative self-response rates and Internet self-response rates across various contact strategies in the presence of the Internet First methodology. In addition to a control panel, five experimental contact strategy panels were tested, all in the presence of the Internet First methodology. The base contact strategy was modeled after the core approach used in the 2010 Census (advance letter, initial survey request, reminder postcard, and a final survey request sent only to nonrespondents). Across multiple treatments, the strategy of sending a second reminder prior to mailing a paper questionnaire resulted in significant gains in both overall self-response and Internet response, and it ultimately led to the recommendation to use this panel for self-response in future testing.

The self-response rates by panel and mode are shown in Table 2. The self-response rate is the number of responses received (by mail, telephone, or Internet) divided by the number of households that received the survey materials. The rate excludes households from the denominator for which no response was received and the first mailing was returned as undeliverable as addressed (UAA). Panel 3, which included a second reminder postcard prior to the mail questionnaire, produced the highest Internet self-response rate, at 42.3 percent. The overall self-response rate for this panel was also among the highest (64.8 percent). The Internet self-response rates for the remaining panels ranged from 37.2 percent to 38.1 percent. In each of the six panels, more than half of all responses were by Internet.

Table 2. 2012 NCT Self-Response Rates by Panel and Response Mode

| Panel | Internet | Mail | TQA | Total |
| 1. Advance letter | 38.1 (0.68) | 17.2 (0.53) | 5.1 (0.33) | 60.3 (0.66) |
| 2. Absence of advance letter | 37.2 (0.62) | 16.5 (0.48) | 4.3 (0.25) | 58.0 (0.62) |
| 3. 2nd reminder prior to questionnaire | 42.3 (0.70) | 13.6 (0.46) | 8.9 (0.40) | 64.8 (0.65) |
| 4. Accelerated questionnaire followed by 2nd reminder | 38.1 (0.61) | 20.3 (0.51) | 5.3 (0.28) | 63.7 (0.60) |
| 5. Telephone number at initial contact, accelerated questionnaire, and 2nd reminder | 37.4 (0.64) | 17.6 (0.49) | 9.4 (0.40) | 64.5 (0.65) |
| 6. Accelerated questionnaire, content tailored to nonrespondents, and 2nd reminder | 37.6 (0.64) | 22.2 (0.59) | 5.2 (0.32) | 65.0 (0.63) |

Source: 2012 National Census Test Contact Strategy Results; Optimizing Self-Response. Note: Estimates are weighted with standard errors in parentheses.

The 2012 NCT also tested versions of a combined Hispanic origin and race question in the Internet instrument. The open-ended text boxes in the Hispanic origin and race question produced a dynamic drop-down list of suggested options based on the initial text string entered in the box. This test also provided the opportunity to collect additional research on paradata related to respondent navigation, break-off rates, use of help screens, answer changes, access failures, edit message triggers and completion times.

2014 Census Test

For Internet First strategy testing, the 2014 Census Test used the most successful contact strategy from the 2012 NCT as the control panel, since this panel had the highest overall weighted response rate of the six panels tested. The other Internet First panels for this test were modified versions of this panel. Table 3 summarizes the eight panels that were experimentally tested in the 2014 Census Test. The goal of each strategy was to push households to respond online using the Internet survey site. This included using preregistration to receive email or text messages in lieu of mail materials as a contact method for response ("Notify Me"), Internet response without a unique User ID, and email invitations and reminders.

Table 3. 2014 Census Test Contact Strategy Panels

| Panel | Prenotice | #1 (June 23) | #2 (July 1) | #3* (July 8) | #4* (July 15) | #5* (July 22) |
| 1. Notify Me (Preregistration) | Postcard (June 5) | Email / Text | Email / Text | Email / Text | Mail Q'nnaire | |
| 2. Internet Push Without ID | | Letter (no ID) | Postcard (no ID) | Postcard (no ID) | Mail Q'nnaire | |
| 3. Internet Push (Control) | | Letter | Postcard | Postcard | Mail Q'nnaire | |
| 4. Internet Push with Email as 1st Reminder | | Letter | Email | Postcard | Mail Q'nnaire | |
| 5. Internet Push with AVI as 3rd Reminder | | Letter | Postcard | Postcard | Mail Q'nnaire | AVI |
| 6. Cold Contact Email Invite and 1st Reminder | | Email | Email | Postcard | Mail Q'nnaire | |
| 7. Letter Prenotice, Email Invite, and 1st Reminder | Letter (June 17) | Email | Email | Postcard | Mail Q'nnaire | |
| 8. AVI Prenotice, Email Invite, and 1st Reminder | AVI (June 17) | Email | Email | Postcard | Mail Q'nnaire | |

Source: 2014 Census Test Results for Optimizing Self-Response.

* Targeted only to nonrespondents. AVI = Automated Voice Invitations.


Panels 6 through 8 were sent the initial invitation and first reminder by email in lieu of postal mail. None of these three panels had a higher response rate than the panels that received the invitation and first reminder by postal mail. This finding suggested that email is not an effective invitation strategy when used as a replacement for physical mail pieces.

2015 National Content Test

The 2015 National Content Test (NCT) focused on content testing, testing different contact strategies aimed at optimizing self-response (OSR), and testing different approaches for offering language support in mail materials. The 2015 NCT continued to test modifications to the timing, order, format and number of contacts for the Internet First strategy.

A total of nine contact strategy panels were fielded. All but one of these panels used an Internet First approach; the remaining panel used an Internet Choice approach, in which a questionnaire is sent in the initial mailing along with information on how to complete the survey online. Table 4 displays the self-response rates by panel and response mode for the Stateside sample. The overall self-response rate was used to gauge the success of the contact strategy panels. See Section 4.1 for further information on the overall self-response rate.

The Stateside sample was made up of three OSR strata (Low, Medium, and High), representing low, medium, and high response areas. Housing units in the Low OSR Stratum were randomly assigned to one of the nine contact strategy panels, while housing units in the Medium and High OSR Strata were randomly assigned to the contact strategy panels with the exception of Panels 4 and 5. Panel 6 had a significantly higher overall response rate than all other contact strategies. Panel 8, which did not offer the option to respond by mail, had a significantly higher Internet response rate but a significantly lower overall response rate than all other variations of the Internet First panel, with the exception of Panel 4. Panels 4 and 5, which were assigned only to housing units in the Low OSR Stratum, yielded overall, Internet, and TQA response rates that were significantly lower than all other panels, and mail response rates that were significantly higher than all other panels.

Table 4. 2015 NCT Self-Response Rates by Panel and Response Mode for Stateside Sample

| Panel | Internet | Mail | TQA | Total |
| 1. Internet First (Control) | 37.5 (0.19) | 9.5 (0.11) | 6.5 (0.09) | 53.6 (0.18) |
| 2. Internet First with Early Postcard | 37.1 (0.16) | 9.8 (0.11) | 6.5 (0.09) | 53.4 (0.18) |
| 3. Internet First with Early Questionnaire | 33.7 (0.17) | 14.3 (0.12) | 5.1 (0.08) | 53.1 (0.17) |
| 4. Internet First with Even Earlier Questionnaire (Low OSR Stratum Only) | 16.8 (0.23) | 17.5 (0.23) | 3.2 (0.10) | 37.5 (0.28) |
| 5. Internet Choice (Low OSR Stratum Only) | 10.8 (0.17) | 29.8 (0.28) | 2.1 (0.09) | 42.6 (0.29) |
| 6. Internet First with Postcard as 3rd Reminder | 38.1 (0.18) | 10.4 (0.10) | 6.8 (0.09) | 55.2 (0.18) |
| 7. Internet First Postcard | 36.1 (0.17) | 9.9 (0.11) | 6.1 (0.09) | 52.1 (0.18) |
| 8. Internet First with Early Postcard and 2nd Letter instead of Mail Questionnaire | 41.0 (0.18) | N/A | 7.4 (0.10) | 48.5 (0.17) |
| 9. Internet First with Postcard and Email as 1st Reminder | 37.8 (0.18) | 9.8 (0.10) | 6.3 (0.09) | 53.9 (0.19) |

Source: 2015 National Content Test Optimizing Self-Response Report. Note: Estimates are weighted with standard errors in parentheses.


2016 Census Test

The 2016 Census Test was the first opportunity to test the use of contact and response materials in languages other than English and Spanish. Table 5 shows the response rates by panel and response mode. The overall self-response rate was used to gauge the success of the contact strategy panels. Five contact strategy panels were fielded; all but Panel 5, which used an Internet Choice approach, used an Internet First approach. Overall, the 2016 Census Test had a response rate of 45.9 percent, and the Internet response rate was 30.3 percent. Panel 4, which included a language insert, had the highest response rate and the highest Internet response rate, 47.8 percent and 34.7 percent, respectively. The control panel, Panel 1, had the lowest response rate at 43.8 percent.

Table 5. 2016 CT Self-Response Rates by Panel and Response Mode

| Panel | Internet | Mail | CQA | Total |
| 1. Internet First | 31.9 | 9.5 | 2.4 | 43.8 |
| 2. Internet First with Reminder Letter | 32.8 | 9.4 | 2.4 | 44.6 |
| 3. Internet First with Multilingual Brochure | 32.5 | 11.5 | 2.6 | 46.6 |
| 4. Internet First with Language Insert | 34.7 | 10.5 | 2.6 | 47.8 |
| 5. Internet Choice | 16.9 | 27.9 | 1.0 | 45.8 |
| Overall | 30.3 | 13.4 | 2.3 | 45.9 |

Source: U.S. Census Bureau, 2016 Census Test.


For the 2016 Census Test, the Census Bureau added the Google reCAPTCHA tool to the Register screen. Google reCAPTCHA is a security measure used to deter spam, bots, and other cyberattack attempts. In addition, it helps protect internet respondents from spam and password decryption. Respondents complete a simple test, known as the CAPTCHA, that tells humans apart from bots and automated processes.

The Internet instrument displayed abbreviations for some response choices for race/ethnicity questions. The Census Bureau’s Center for Survey Measurement reported that text-to-speech technologies read these abbreviations phonetically as “African Am,” rather than the intended phrase, “African American.” To comply with Section 508, the Census Bureau modified the display of these response options.

2017 Census Test

The 2017 Census Test was the first test to use the Enterprise Census and Survey Enabling (ECaSE) platform, which allowed for an integrated system of systems. Additionally, the 2017 Census Test was a nationwide self-response test of 80,000 housing units that, for the first time, exercised all of the public-facing self-response systems together in the field and in a cloud environment. The test also assessed the feasibility of collecting tribal enrollment information.

The self-response contact strategy consisted of only two panels, Internet First and Internet Choice. Each panel included English Only and Bilingual (English/Spanish) mailings. Table 6 shows the preliminary weighted self-response rates by panel and response mode.

Table 6. 2017 CT Self-Response Rates by Panel and Response Mode

| Panel | Internet | Mail | CQA | Total |
| 1. Internet First | 37.4 (0.33) | 13.0 (0.24) | 2.8 (0.11) | 53.2 (0.34) |
| 2. Internet Choice | 9.0 (0.21) | 28.9 (0.34) | 0.6 (0.06) | 38.5 (0.36) |
| Overall | 31.7 (0.27) | 16.2 (0.20) | 2.4 (0.09) | 50.3 (0.28) |

Source: 2017 Census Test Results. Note: Estimates are weighted with standard errors in parentheses.

The 2017 Census Test also provided the ability for non-English speakers to respond. The Internet instrument was available in English and Spanish, and Census Questionnaire Assistance (CQA) was available in English, Spanish, Chinese (Mandarin and Cantonese), Korean, Vietnamese, Tagalog, Arabic, and French. A language insert included in all mailing panels explained how to reach a CQA agent in each of these languages.

2018 End-to-End Census Test

The 2018 End-to-End Census Test will be a dress rehearsal for most of the 2020 Census operations, procedures, systems, and field infrastructure to ensure there is proper integration and conformance with requirements.

Like past census tests, the 2018 End-to-End Census Test will include the Internet First and Internet Choice mailing strategies to optimize the rate at which the public self-responds to the decennial census. However, the 2018 End-to-End Census Test introduces changes to the contact strategy.

The Internet First panel will implement a staggered approach with three mailing cohorts. The purpose of these cohorts is to prevent the mailing materials from being delivered to all TEA 1 Internet First housing units on the same day, which will help distribute call volume to the CQA call centers. In addition, the contact strategy dates for Internet First and Internet Choice have shifted from mail-out dates to in-home delivery dates.²

The 2018 Self-Response Contact Strategy is outlined in Table 7.









Table 7. 2018 Self-Response Contact Strategy

| Panel | Cohort | Mailing 1: Letter (I.F.) or Letter + Q'n (I.C.) | Mailing 2: Letter | Mailing 3*: Postcard | Mailing 4*: Q'n + Letter | Mailing 5*: Postcard |
| Internet First | 1 | Friday 3/16/2018 | Tuesday 3/20/2018 | Friday 3/30/2018 | Thursday 4/12/2018 | Monday 4/23/2018 |
| Internet First | 2 | Tuesday 3/20/2018 | Friday 3/23/2018 | Tuesday 4/3/2018 | Monday 4/16/2018 | Thursday 4/26/2018 |
| Internet First | 3 | Friday 3/23/2018 | Tuesday 3/27/2018 | Friday 4/6/2018 | Thursday 4/19/2018 | Monday 4/30/2018 |
| Internet Choice | N/A | Friday 3/16/2018 | Tuesday 3/20/2018 | Friday 3/30/2018 | Thursday 4/12/2018 | Monday 4/23/2018 |

* = targeted only to nonrespondents




3. Scope of Assessment Content and Questions-To-Be-Answered

The 2018 End-to-End Census Test Internet Self-Response Operational Assessment will answer the following questions:


Response Rates Assessment

  1. What were the response rates by:

    1. Overall

    2. By Mode (Internet, CQA, or Mail)

    3. By Device Type

    4. By Date

    5. By In-home Delivery date

    6. By Internet First (overall and within each cohort)

    7. By Internet Choice (internet and mail)

    8. By Language of Mailing (English or Bilingual)

  2. What are the Internet item nonresponse rates by question?

    1. By Device Type

    2. By Internet

    3. By Mail

  3. What are the break-off rates for each screen?

    1. By Device Type?

    2. By related/unrelated households?


Instrument Performance

The following research questions will assist us in evaluating the instrument's availability and performance.

  1. Were there any outages?³ How long did an outage last? What was the cause?

    1. How many respondents were unable to access the application during each outage?

    2. Were there any respondents who had their session terminated (before they finished providing their responses) because of an outage? If so, how many respondents who experienced an outage responded online anyway, and how many never responded online?

  2. What was the average length of time for each screen to load?

    1. By day and time of day.

  3. How many concurrent respondents were logged into the ISR application? By day and time of day.

    1. What was the largest load experienced by the instrument? When did that occur?



Respondent Experience

The following research questions will assist us in evaluating time spent within the ISR application, cases submitted with or without a User ID, reCAPTCHA activity, and the order in which households completed their survey.


  1. How long did respondents spend in the ISR application? By day and time of day.

    1. Including those respondents who experienced outages?

    2. By household size, time of day, average screen loading time, and device type.

  2. What was the average length of time spent on each screen independent of screen loading time? By device type?

  3. What percentage of responses were submitted with a User ID? Without a User ID (Non-ID)? By demographics? By device type?

  4. How many reCAPTCHA attempts did the respondent take to get through to the instrument? By Internet First, Internet Choice, and language (English and Spanish)? Did the reCAPTCHA attempts have any correlation to demographic characteristics?

  5. How many respondents broke off during the Google reCAPTCHA validation?

  6. How often were persons added at the dashboard? How often were persons deleted at the dashboard?⁴

  7. For households with more than one person, how often did respondents complete demographic information for each rostered person in roster order?

  8. How many respondents started in Spanish? How many respondents switched to Spanish during the survey? How often did respondents switch between English and Spanish by screen?



Data Quality

The following research questions will assist us in evaluating the possible effects of respondent burden due to triggered edit messages, break-offs and completing questions within the demographic section of the survey.


  1. Which screens triggered the most/least edit messages?⁵ Did respondents change their responses based on the edit message?

  2. Are these edits messages correlated to any demographic characteristics?

  3. Did edit messages cause break-offs?

  4. How often did break-offs occur after completing one person's demographic data but before beginning the demographic questions for the next person?

  5. How many respondents completed the demographic section, but did not answer the overcount question?

  6. How many respondents completed the survey, but never “submitted” their survey?

  7. How many respondents logged into the ISR application multiple times using the same User ID? Of those who restarted, how many ultimately submitted their survey?



Additional Assessment

The following research questions will assist us in evaluating internet submissions, login activity, help text views and which browsers, operating systems and devices were used to respond.


  1. What was the number of insufficient partials, sufficient partials, submits, and self-reported vacants? What was the average time of an insufficient partial, a sufficient partial, a submit, and a self-reported vacant?⁶

  2. What was the number of logins by date? By in-home delivery date? Which in-home delivery date received the most/least logins? What time of day did these logins occur?

  3. What devices were used to respond?⁷ How often did the respondent switch device types?

  4. What operating systems were used to respond? What browsers were used to respond?

  5. How many times was the Help text viewed on each screen? How many times was Help text accessed before/after completing various screens (DOB, Popcount, etc.)? By screen time?

  6. How many persons had a name containing only one character?

  7. How often were the demographic checks invoked? By User ID? By Non-ID?

  8. What was the language toggle rate by question?



4. Methodology

The following sections present the equations, the table shells, and the methodology that the authors will use to answer the questions presented in the previous section of this study plan.

4.1. Questions-To-Be-Answered Equations

Response Rate Assessment

The following paragraphs describe the equations needed to conduct the Response Rate Assessment.

The overall self-response rate is the primary measure to evaluate respondent cooperation and reflects the sample housing units that respond to the survey by one of the three self-response modes: (1) responding online to the internet survey site, (2) providing information to a phone interviewer via CQA, or (3) completing and returning the mail questionnaire. ISR will utilize the following equations to answer the response rate assessment questions identified in Section 3.

By definition, the self-response rate is the number of responses received by any self-response mode divided by the number of sampled housing units.

Overall self-response rate = (Unduplicated sufficient responses (Internet, CQA, or Mail) / Total sample size) × 100 percent

Households providing more than one self-response are counted only once in the response rate calculation. For an ID case, an internet response is sufficient if a name is provided and the number of people residing in the housing unit on Census Day is greater than one. For a Non-ID case, an internet response is sufficient if a name and a valid address are provided and the number of people residing in the housing unit on Census Day is greater than one.
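As a minimal illustration of these calculations, the sketch below (Python with pandas) computes the overall and by-mode self-response rates from a flat response file. The file layout and column names (case_id, mode, sufficient) are assumptions made for the example, not the actual layout of the DRF or UTS extracts.

```python
import pandas as pd

# Hypothetical flat file of returns: one row per return, with the housing-unit case ID,
# the self-response mode, and a flag indicating whether the return is sufficient.
responses = pd.read_csv("responses.csv")      # assumed columns: case_id, mode, sufficient
total_sample_size = 100_000                   # placeholder for the number of sampled housing units

# Keep sufficient returns only, then unduplicate so each housing unit is counted once.
sufficient = responses[responses["sufficient"] == 1]
unduplicated = sufficient.drop_duplicates(subset="case_id")

overall_rate = len(unduplicated) / total_sample_size * 100
print(f"Overall self-response rate: {overall_rate:.1f} percent")

# By-mode rates share the same denominator; the numerator is the count for each mode.
for mode in ("Internet", "CQA", "Mail"):
    n = (unduplicated["mode"] == mode).sum()
    print(f"{mode} response rate: {n / total_sample_size * 100:.1f} percent")
```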

The self-response rate by mode is similar to the overall self-response rate, but it focuses on each individual response method rather than combining them.

Internet response rate = (Unduplicated sufficient internet responses / Total sample size) × 100 percent

CQA response rate = (Unduplicated sufficient CQA responses / Total sample size) × 100 percent

Mail response rate = (Unduplicated sufficient mail responses / Total sample size) × 100 percent



In addition, the following equations will be used to determine the response rates for:

Device type response rate = (Unduplicated sufficient responses by device type / Total sample size) × 100 percent

Daily internet response rate⁸ = (Unduplicated sufficient internet responses by date / Total sample size) × 100 percent

In-home delivery date response rate = (Unduplicated sufficient responses by in-home delivery date / Total sample size) × 100 percent

Internet First panel response rate = (Unduplicated sufficient responses in the Internet First panel / Total sample size) × 100 percent

Internet First panel response rate by cohort = (Unduplicated sufficient responses in the Internet First panel by cohort (1, 2, or 3) / Total sample size) × 100 percent

Internet Choice panel response rate = (Unduplicated sufficient responses in the Internet Choice panel / Total sample size) × 100 percent

Language of mailings response rate = (Unduplicated sufficient responses by language of mailing (English or Bilingual) / Total sample size) × 100 percent



Item nonresponse is a source of missing data. It occurs when a respondent completes most of the questionnaire but does not answer one or more individual questions.

Internet item nonresponse rate by device = (Number of missing responses by device / Total records) × 100 percent

Internet item nonresponse rate by internet = (Number of missing responses by internet / Total records) × 100 percent

Internet item nonresponse rate by mail = (Number of missing responses by mail / Total records) × 100 percent

Internet item nonresponse rate by question = (Number of missing responses by question / Total records) × 100 percent



The screen breakoff rate tells us whether respondents are exiting the instrument on a specific screen. The equations below measure how often respondents broke off each time they reached a given screen.

Screen breakoff rate = (Number of responses for which the specified screen was the last screen visited and the event code is 3.011 / Total number of times the specified screen was seen) × 100 percent

Screen breakoff rate by device = (Number of responses for which the specified screen was the last screen visited and the event code is 3.011, by device type / Total number of times the specified screen was seen by device type) × 100 percent
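A minimal sketch of the screen breakoff rate calculation follows, assuming the paradata have been reshaped into a long file with one row per screen view (columns case_id, screen, timestamp) and a separate event file flagging breakoffs with event code 3.011; these file layouts and column names are assumptions for illustration.

```python
import pandas as pd

screen_views = pd.read_csv("screen_views.csv", parse_dates=["timestamp"])  # case_id, screen, timestamp (assumed layout)
events = pd.read_csv("events.csv", dtype={"event_code": str})              # case_id, event_code (assumed layout)

# Denominator: total number of times each screen was seen.
times_seen = screen_views.groupby("screen").size()

# Numerator: cases whose last visited screen was this screen and that broke off (event code 3.011).
last_screen = (screen_views.sort_values("timestamp")
                           .groupby("case_id")["screen"]
                           .last())
breakoff_cases = set(events.loc[events["event_code"] == "3.011", "case_id"])
breakoffs = last_screen[last_screen.index.isin(breakoff_cases)].value_counts()

screen_breakoff_rate = (breakoffs.reindex(times_seen.index, fill_value=0) / times_seen * 100).round(1)
print(screen_breakoff_rate.sort_values(ascending=False))
```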

4.2. Questions-To-Be-Answered Table Shells

Respondent Experience Assessment

To answer the respondent experience questions identified in Section 3, ISR will utilize the following table shells.



| Average Time Spent by Internet Instrument Section | 3/20 | 3/21 | 3/22 | etc. | CUMULATIVE AVG TOTAL |
| Total AVG time spent | | | | | |
| Internet ID | | | | | |
| - logging in | | | | | |
| - address verification | | | | | |
| - rostering | | | | | |
| - demo | | | | | |
| Internet Non-ID | | | | | |
| - logging in | | | | | |
| - address verification | | | | | |
| - rostering | | | | | |
| - demo | | | | | |
| Phone ID | | | | | |
| - logging in | | | | | |
| - address verification | | | | | |
| - rostering | | | | | |
| - demo | | | | | |
| Phone Non-ID | | | | | |
| - logging in | | | | | |
| - address verification | | | | | |
| - rostering | | | | | |
| - demo | | | | | |

Report Source: UTS



| Internet Response by Progress Status | 3/9 | 3/10 | 3/11 | etc. | CUMULATIVE TOTAL |
| Total | | | | | |
| Internet ID | | | | | |
| - pending address verification | | | | | |
| - pending roster completion | | | | | |
| - pending demographic completion | | | | | |
| - completed | | | | | |
| Internet Non-ID | | | | | |
| - pending address verification | | | | | |
| - pending roster completion | | | | | |
| - pending demographic completion | | | | | |
| - completed | | | | | |
| Phone ID | | | | | |
| - pending address verification | | | | | |
| - pending roster completion | | | | | |
| - pending demographic completion | | | | | |
| - completed | | | | | |
| Phone Non-ID | | | | | |
| - pending address verification | | | | | |
| - pending roster completion | | | | | |
| - pending demographic completion | | | | | |
| - completed | | | | | |

Report Source: UTS



Data Quality Assessment

To answer the data quality questions identified in Section 3, ISR will utilize the following table shells.



| ISR Response by Session Type | 3/20 | 3/21 | 3/22 | etc. | CUMULATIVE TOTAL |
| Total | | | | | |
| Internet ID | | | | | |
| - Completed | | | | | |
| - Break Off | | | | | |
| Internet Non-ID | | | | | |
| - Completed | | | | | |
| - Break Off | | | | | |
| Phone ID | | | | | |
| - Completed | | | | | |
| - Break Off | | | | | |
| Phone Non-ID | | | | | |
| - Completed | | | | | |
| - Break Off | | | | | |

Report Source: UTS



| Breakoffs by ISR Instrument Section | 3/20 | 3/21 | 3/22 | etc. | CUMULATIVE TOTAL |
| Total Breakoffs | | | | | |
| ID | | | | | |
| - logging in | | | | | |
| - address verification | | | | | |
| - rostering | | | | | |
| - demo | | | | | |
| Non-ID | | | | | |
| - logging in | | | | | |
| - address verification | | | | | |
| - rostering | | | | | |
| - demo | | | | | |
| CQA | | | | | |
| - logging in | | | | | |
| - address verification | | | | | |
| - rostering | | | | | |
| - demo | | | | | |
| CQA Non-ID | | | | | |
| - logging in | | | | | |
| - address verification | | | | | |
| - rostering | | | | | |
| - demo | | | | | |

Report Source: UTS



Additional Assessment

To answer the additional questions identified in Section 3, ISR will utilize the following table shells.


| Self-Response Activity | 3/20 | 3/21 | 3/22 | etc. | TOTAL |
| Total Mail Packages Sent | 0 | | | | 0 |
| Internet First | 0 | | | | 0 |
| - mailing 1 | | | | | 0 |
| - mailing 2 | | | | | 0 |
| - mailing 3 | | | | | 0 |
| - mailing 4 | | | | | 0 |
| - mailing 5 | | | | | 0 |
| Internet Choice | 0 | | | | 0 |
| - mailing 1 | | | | | 0 |
| - mailing 2 | | | | | 0 |
| - mailing 3 | | | | | 0 |
| - mailing 4 | | | | | 0 |
| - mailing 5 | | | | | 0 |
| CQA calls | | | | | 0 |
| Responses | 0 | | | | 0 |
| - ISR ID | | | | | 0 |
| - ISR Non-ID | | | | | 0 |
| - CQA ID | | | | | 0 |
| - CQA Non-ID | | | | | 0 |
| - Paper Qs received | | | | | 0 |

Report Source: UTS



| ISR Application Usage | 3/9 | 3/10 | 3/11 | etc. | CUMULATIVE TOTAL |
| Respondents that Accessed the Instrument | | | | | |
| - successfully completed captcha | | | | | |
| - successful login | | | | | |
| -- ID | | | | | |
| -- Non-ID | | | | | |
| - verified or provided address | | | | | |
| -- ID | | | | | |
| -- Non-ID | | | | | |
| - provided roster info | | | | | |
| -- ID | | | | | |
| -- Non-ID | | | | | |
| - provided demographic info | | | | | |
| -- ID | | | | | |
| -- Non-ID | | | | | |
| - submitted | | | | | |
| -- ID | | | | | |
| -- Non-ID | | | | | |

Report Source: UTS



4.3. Questions-To-Be-Answered Methodology

Instrument Performance Assessment

To answer the instrument performance questions identified in Section 3, ISR will utilize the following methodology.


  1. Were there any outages? How long did an outage last? What was the cause? How many respondents were unable to access the application during each outage? Were there any respondents who had their session terminated (before they finished providing their responses) because of an outage? If so, how many respondents who experienced an outage responded online anyway, and how many never responded online?

Information on outages is to be determined. These data will be provided by ECaSE-ISR (via Akamai), F5, or Splunk logs.


  2. What was the average length of time for each screen to load? By day and time of day?



To determine the average load time for each screen, take the difference between the load time of each screen and the load time of the previous screen, then calculate the average of those differences. The paradata variable for load time is PD_XXXX_LOAD_TIME, where XXXX = screen name. The results should also be broken out by day and time of day.
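A minimal sketch of that computation follows, assuming the PD_XXXX_LOAD_TIME timestamps have been reshaped into a long table with one row per screen load (columns case_id, screen, load_time); the reshaping step, file name, and column names are assumptions for illustration.

```python
import pandas as pd

# One row per screen load, built from the PD_XXXX_LOAD_TIME paradata variables (layout assumed).
loads = pd.read_csv("screen_loads.csv", parse_dates=["load_time"])  # case_id, screen, load_time

# Within each case, order the loads and take the gap from the previous screen's load time.
loads = loads.sort_values(["case_id", "load_time"])
loads["gap_seconds"] = loads.groupby("case_id")["load_time"].diff().dt.total_seconds()

# Average gap per screen, and broken out by day and hour of day.
avg_by_screen = loads.groupby("screen")["gap_seconds"].mean().round(1)
avg_by_day_hour = (loads.assign(day=loads["load_time"].dt.date, hour=loads["load_time"].dt.hour)
                        .groupby(["screen", "day", "hour"])["gap_seconds"].mean().round(1))
print(avg_by_screen)
```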



  3. How many concurrent respondents were logged into the ISR application? What was the largest load experienced by the instrument? When did that occur?

Information on concurrent respondents and load is to be determined. These data will be provided by either the Tomcat servers or AppDynamics.



Respondent Experience Assessment

To answer the respondent experience questions identified in Section 3, ISR will utilize the following methodology.


  1. How long did respondents spend in the ISR application? Including those respondents who experienced outages? By household size, time of day, average screen loading time, and device type. What was the average length of time spent on each screen in the ISR application independent of screen loading time? By device type?



To determine the length of time respondents spent within the ISR application, subtract the landing page paradata timestamp PD_LANDING_START_TIME from the confirmation screen's timestamp PD_CONFIRMATION_LOAD_TIME.



  2. What was the average length of time spent on each screen independent of screen loading time? By device type?



To determine the average length of time respondents spent on each screen, take the difference between a specific screen's PD_XXXX_CLOSE_BROWSER_TIME and PD_XXXX_LOAD_TIME, where XXXX = screen name. Once the time differences have been calculated, take the average for that particular screen. For additional information regarding household size, refer to the response variable H_SIZE_CALCULATED_INT; for device type, refer to the paradata variable PD_LANDING_DEVICE_TYPE_TEXT.
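The sketch below illustrates both calculations, assuming a case-level (wide) paradata extract whose columns carry the timestamp variables named above; the file name, the POPCOUNT example screen, and the merged layout are assumptions.

```python
import pandas as pd

time_cols = ["PD_LANDING_START_TIME", "PD_CONFIRMATION_LOAD_TIME",
             "PD_POPCOUNT_LOAD_TIME", "PD_POPCOUNT_CLOSE_BROWSER_TIME"]
cases = pd.read_csv("paradata_wide.csv", parse_dates=time_cols)  # one row per case (layout assumed)

# Total time in the application: confirmation load time minus landing start time.
cases["session_minutes"] = (cases["PD_CONFIRMATION_LOAD_TIME"] -
                            cases["PD_LANDING_START_TIME"]).dt.total_seconds() / 60

# Time on one screen (POPCOUNT used as an example): browser-close time minus load time.
cases["popcount_seconds"] = (cases["PD_POPCOUNT_CLOSE_BROWSER_TIME"] -
                             cases["PD_POPCOUNT_LOAD_TIME"]).dt.total_seconds()

print(round(cases["session_minutes"].mean(), 1))
print(cases.groupby("PD_LANDING_DEVICE_TYPE_TEXT")["popcount_seconds"].mean().round(1))
```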



  3. What percentage of responses were submitted with a User ID? Without a User ID (Non-ID)? By demographics? By device type?



To answer these questions, extract all cases with event code 1.010 (submitted survey). Cases that were submitted with a User ID will contain the response variable REPUNIT_SPONSOR_CASE_ID. For cases that were submitted without a User ID, extract the response variable REPUNIT_NONID_IND = 1. For additional information by demographics, refer to the response variables that contain P_XXXX, where XXXX = AGE, SEX, SEXRELEDIT, HISP, RACE, REL, LOC, etc. For device type, refer to the paradata variable PD_LANDING_DEVICE_TYPE_TEXT.
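A minimal sketch of that extraction follows, assuming a merged response/paradata extract with one row per case and an event file keyed by a case identifier; the file names and the case_id key are assumptions.

```python
import pandas as pd

cases = pd.read_csv("response_paradata.csv")                    # one row per case (layout assumed)
events = pd.read_csv("events.csv", dtype={"event_code": str})   # case_id, event_code (layout assumed)

# Restrict to submitted surveys (event code 1.010).
submitted_ids = set(events.loc[events["event_code"] == "1.010", "case_id"])
submitted = cases[cases["case_id"].isin(submitted_ids)]

# Non-ID submissions carry REPUNIT_NONID_IND = 1; the rest were submitted with a User ID
# (REPUNIT_SPONSOR_CASE_ID populated).
non_id = submitted["REPUNIT_NONID_IND"].eq(1)
print(f"Submitted with a User ID: {(~non_id).mean() * 100:.1f} percent")
print(f"Submitted without a User ID (Non-ID): {non_id.mean() * 100:.1f} percent")

# Break out the Non-ID share by device type captured at the landing page.
print(non_id.groupby(submitted["PD_LANDING_DEVICE_TYPE_TEXT"]).mean().round(3))
```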



  4. How many reCAPTCHA attempts did the respondent take to get through to the instrument? Did the reCAPTCHA attempts have any correlation to demographic characteristics? How many respondents broke off during the Google reCAPTCHA validation?



Google will provide this information. Methodology for this assessment will be determined later.



  5. How often were persons added at the dashboard? How often were persons deleted at the dashboard?



Cases where persons were added at the dashboard will contain the response variable P_ROSTREV_ADD_IND = 1; also extract the paradata variable PD_DEMO_DASHBOARD_ADD_PERSON_TIME. Cases where persons were deleted at the dashboard will contain the response variable P_ROSTREV_REMOVE_IND = 1; also extract the paradata variable PD_DEMO_DASHBOARD_DELETE_TIME.
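A minimal sketch of those counts follows, assuming a person-level extract with one row per rostered person carrying the indicator and timestamp variables named above; the file name and layout are assumptions.

```python
import pandas as pd

persons = pd.read_csv("person_records.csv")   # one row per rostered person (layout assumed)

added   = persons["P_ROSTREV_ADD_IND"].eq(1).sum()
deleted = persons["P_ROSTREV_REMOVE_IND"].eq(1).sum()
print(f"Persons added at the dashboard:   {added}")
print(f"Persons deleted at the dashboard: {deleted}")

# The corresponding timestamps can be pulled for timing analysis.
add_times    = persons.loc[persons["P_ROSTREV_ADD_IND"].eq(1), "PD_DEMO_DASHBOARD_ADD_PERSON_TIME"]
delete_times = persons.loc[persons["P_ROSTREV_REMOVE_IND"].eq(1), "PD_DEMO_DASHBOARD_DELETE_TIME"]
```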



  6. For households with more than one person, how often did respondents complete demographic information for each rostered person in roster order?



When a respondent inputs or selects information in a given field, timestamp data are captured within the paradata. To determine the order in which each rostered person's information was completed, review the timestamp variables for each rostered person on all screens within the demographic section.



  7. How many respondents started in Spanish? How many respondents switched to Spanish during the survey? How often did respondents switch between English and Spanish by screen?



To determine how many respondents started their questionnaire in Spanish, total the number of cases where the paradata variable PD_LANDING_LANG_ENTRY_IND = 2. To determine how many respondents switched from English to Spanish during their survey, extract the total cases where PD_LANDING_LANG_ENTRY_IND = 1 and any screen produced the paradata variable PD_XXXX_LANG_EXIT_IND = 2 (where XXXX = screen name and 2 = Spanish).



If a respondent switched between English and Spanish within a screen, both PD_XXXX_LANG_EN_TIME and PD_XXXX_LANG_SP_TIME will have been produced; extract the cases where both variables exist.
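The sketch below illustrates these counts, assuming a case-level paradata extract in which the language entry/exit indicators and toggle timestamps appear as columns; the file name and the POPCOUNT example screen are assumptions.

```python
import pandas as pd

cases = pd.read_csv("paradata_wide.csv")   # one row per case (layout assumed)

# Started in Spanish: landing-page language entry indicator of 2 (2 = Spanish).
started_spanish = cases["PD_LANDING_LANG_ENTRY_IND"].eq(2).sum()

# Switched to Spanish mid-survey: entered in English (1) and exited at least one screen in Spanish (2).
exit_cols = [c for c in cases.columns if c.endswith("_LANG_EXIT_IND")]
switched_to_spanish = (cases["PD_LANDING_LANG_ENTRY_IND"].eq(1) &
                       cases[exit_cols].eq(2).any(axis=1)).sum()

# Toggled within a screen (POPCOUNT as an example): both toggle timestamps were produced.
toggled_popcount = (cases["PD_POPCOUNT_LANG_EN_TIME"].notna() &
                    cases["PD_POPCOUNT_LANG_SP_TIME"].notna()).sum()

print(started_spanish, switched_to_spanish, toggled_popcount)
```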





Data Quality

To answer the data quality questions identified in Section 3, ISR will utilize the following methodology.


  1. Which screens triggered the most/least edit messages? Did respondents change their responses based on the edit message?



Each screen contains an edit message paradata variable, which is a timestamp variable. The format of this variable is PD_XXXX_EDIT_TIME, where XXXX equals the screen name. To determine which screens triggered the most edit messages, extract each screen's edit paradata variable and total the number of records.
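A minimal sketch of the tally follows, assuming a case-level paradata extract in which every PD_XXXX_EDIT_TIME variable is a column that is non-missing when the edit message fired; the file name and layout are assumptions.

```python
import pandas as pd

cases = pd.read_csv("paradata_wide.csv")   # one row per case (layout assumed)

# Every screen's edit-message timestamp follows the pattern PD_<SCREEN>_EDIT_TIME.
edit_cols = [c for c in cases.columns if c.startswith("PD_") and c.endswith("_EDIT_TIME")]

# A non-missing timestamp means the edit message fired for that case on that screen.
edit_counts = cases[edit_cols].notna().sum().sort_values(ascending=False)
print(edit_counts.head(10))   # screens that triggered the most edit messages
print(edit_counts.tail(10))   # screens that triggered the fewest
```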



  2. Are these edit messages correlated with any demographic characteristics?



Age, race, sex, and relationship are some of the demographic characteristics ISR collects. To examine the correlation, extract the PD_XXXX_EDIT_TIME variable together with the desired demographic characteristic or group.



  3. Did edit messages cause break-offs?



To determine whether edit messages caused break-offs, extract cases where PD_XXXX_EDIT_TIME was produced and the event code is 3.011.





  4. How many respondents completed the demographic section, but did not answer the overcount question?



If a respondent completed the demographic section but did not answer the overcount questions, the paradata variables beginning with "PD_OC_" will not exist in the database for that case.


  5. How many respondents completed the survey, but never "submitted" their survey?



To determine how many respondents completed the survey but never "submitted" it, extract the total cases where the paradata variable PD_SUBMIT_LOAD_TIME was produced and PD_SUBMIT_YES_TIME was not produced.



An analyst may also want to check event codes. For instance, if the respondent broke off during the Submit screen, check for event code 3.011. If the survey was never submitted, event code 1.010 would not be produced.
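A minimal sketch combining both checks follows, assuming a case-level paradata extract and an event file keyed by a case identifier; the file names and the case_id key are assumptions.

```python
import pandas as pd

cases = pd.read_csv("paradata_wide.csv")                        # one row per case (layout assumed)
events = pd.read_csv("events.csv", dtype={"event_code": str})   # case_id, event_code (layout assumed)

# Reached the Submit screen but never confirmed: PD_SUBMIT_LOAD_TIME present, PD_SUBMIT_YES_TIME absent.
reached_not_confirmed = cases["PD_SUBMIT_LOAD_TIME"].notna() & cases["PD_SUBMIT_YES_TIME"].isna()

# Cross-check against the event codes: no submit event (1.010) recorded for the case.
submitted_ids = set(events.loc[events["event_code"] == "1.010", "case_id"])
completed_never_submitted = reached_not_confirmed & ~cases["case_id"].isin(submitted_ids)
print(completed_never_submitted.sum())
```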



  6. How many respondents logged into the ISR application multiple times using the same User ID? How often did they submit after restarting?



The User ID field is captured in the response data as the response variable REPUNIT_SPONSOR_CASE_ID. To determine how many respondents logged into the application multiple times with the same User ID, extract the total cases where REPUNIT_SPONSOR_CASE_ID is produced more than once. To determine how many of these respondents ultimately submitted their survey, extract the cases with event code 1.010.
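A minimal sketch of those counts follows, assuming a session-level file with one row per login and an event file carrying the same User ID; the file names and layouts are assumptions.

```python
import pandas as pd

sessions = pd.read_csv("sessions.csv")                          # one row per login (layout assumed)
events = pd.read_csv("events.csv", dtype={"event_code": str})   # includes REPUNIT_SPONSOR_CASE_ID (assumed)

# User IDs that appear in more than one login session.
login_counts = sessions["REPUNIT_SPONSOR_CASE_ID"].value_counts()
multi_login_ids = set(login_counts[login_counts > 1].index)
print(f"User IDs with multiple logins: {len(multi_login_ids)}")

# Of those, how many ultimately submitted (event code 1.010).
submitted_ids = set(events.loc[events["event_code"] == "1.010", "REPUNIT_SPONSOR_CASE_ID"])
print(f"...that ultimately submitted: {len(multi_login_ids & submitted_ids)}")
```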



Additional Questions

To answer additional questions identified in Section 3, ISR will utilize the following methodology.


  1. What was the number of insufficient partials, sufficient partials, submits, and self-reported vacants? What was the average time of an insufficient, a sufficient, a submit, and a self-reported vacant?



For insufficient partials, total the number of cases where the event code is 7.102. For sufficient partials, total the number of cases where the event code is 7.101. For submits, total the number of cases where the event code is 1.010. For self-reported vacants, total the number of cases where the event code is 5.047.



To determine the average time for insufficient partials, sufficient partials, submits, and self-reported vacants, first extract each case's paradata variable PD_LANDING_START_TIME. For insufficient partials, extract each case that has the response variable H_SIZE_STATED_CNT null or blank and the paradata variable PD_POPCOUNT_H_SIZE_STATED_INT_TIME. For sufficient partials, extract each case where the response variable H_SIZE_STATED_CNT ≥ 1 and the paradata variable PD_POPCOUNT_H_SIZE_STATED_INT_TIME. For submits, extract each case with event code 1.010 and the paradata variables PD_LANDING_START_TIME and PD_CONFIRMATION_LOAD_TIME. Lastly, for self-reported vacants, extract cases with event code 5.047, the paradata variable PD_VACANCY_LOAD_TIME, and the response variable SOLICIT_LINKED_ID.
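A minimal sketch of the counts by outcome follows, assuming an event file with one row per event and columns case_id and event_code; the file name and layout are assumptions, and the average-time calculation would follow the same pattern using the paradata timestamps listed above.

```python
import pandas as pd

events = pd.read_csv("events.csv", dtype={"event_code": str})   # case_id, event_code (layout assumed)

# Outcome categories keyed by the event codes named in the text.
outcome_codes = {"7.102": "insufficient partial",
                 "7.101": "sufficient partial",
                 "1.010": "submit",
                 "5.047": "self-reported vacant"}

counts = (events[events["event_code"].isin(outcome_codes)]
          .drop_duplicates(subset=["case_id", "event_code"])   # count each case once per outcome
          ["event_code"].map(outcome_codes)
          .value_counts())
print(counts)
```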



  2. What was the number of logins by date? By in-home delivery date? Which in-home delivery date received the most/least logins? What time of day did these logins occur?



To determine logins by date, total the number of cases where the event code is 3.010 and group the paradata variable PD_LOGIN_LOAD_TIME by the desired date (daily or in-home delivery date). The results will indicate which date received the most/least logins and the time of day.



  3. What devices were used to respond? How often did the respondent switch device types?



To determine which device was used to respond, refer to the paradata variable PD_LANDING_DEVICE_TYPE_TEXT. To determine how often respondents switched device types, extract the total cases where REPUNIT_SPONSOR_CASE_ID is produced more than once and does not have an event code of 1.010, and group by the device type paradata variable PD_LANDING_DEVICE_TYPE_TEXT.



  4. What operating systems were used to respond? What browsers were used to respond?



To determine which operating system and browser were used to respond, refer to the paradata variable PD_LANDING_OS_TEXT. This variable is a text string that lists the operating system and browser that were used.



  5. How many times was the Help text viewed on each screen? How many times was Help text accessed before/after completing various screens (DOB, Popcount, etc.)? By screen time?



To determine how often the Help text was viewed, total the number of cases where PD_XXXX_HELP_TIME was produced (where XXXX = screen name). To determine whether Help text was viewed before or after answering questions on screens such as DOB and POPCOUNT, extract the screen's timestamp paradata variables and compare them to the PD_XXXX_HELP_TIME variable.



  6. How many persons had a name containing only one character?



To determine how many persons had a name containing only one character, extract all response data where P_FIRST_NAME contains a one-character string.
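A minimal sketch of that filter follows, assuming a person-level response extract with a P_FIRST_NAME column; the file name and layout are assumptions.

```python
import pandas as pd

persons = pd.read_csv("person_records.csv", dtype={"P_FIRST_NAME": str})  # one row per person (layout assumed)

# Names of exactly one character, after trimming whitespace and ignoring missing values.
one_char = persons["P_FIRST_NAME"].fillna("").str.strip().str.len().eq(1)
print(f"Persons with a one-character first name: {one_char.sum()}")
```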



  7. How often were the demographic checks invoked? By User ID? By Non-ID?



To determine how often edit messages were produced on each screen within the demographic section, extract the paradata variables PD_XXXX_EDIT_TIME, where XXXX = screen name. Cases that were submitted with a User ID will contain the response variable REPUNIT_SPONSOR_CASE_ID. For cases that were submitted without a User ID, extract the response variable REPUNIT_NONID_IND = 1.



5. Data Requirements

| Data Source | Description | Date of Availability |
| The Paradata File | A file that contains paradata for all events. | TBD |
| Event Data | A file that contains event codes. | TBD |
| Decennial Response File (DRF) | Contains every response to the census from all sources. The primary selection algorithm is applied to this file to unduplicate people between multiple returns for a housing unit and to determine the housing unit record and the people to include at the housing unit. The DRF is then combined with the Decennial Master Address File to create the census unedited file (CUF). | TBD |
| Universal Tracking System (UTS) reports | High-level reports used by managers to evaluate the operation. | TBD |
| Google Admin Console | Contains data on Google reCAPTCHA. | TBD |
| Akamai | Cloud solution to handle traffic spikes. | TBD |



6. Assumptions

The Census Bureau put in place the following overarching assumptions related to Internet Self-Response to guide all research and testing. These assumptions may be revised to prepare for 2020 Census design decisions:

  • The 2018 End-to-End Census Test contact strategy will begin on March 16, 2018, and end on April 30, 2018.

  • ISR will receive the following data files:

    • Census Unedited File (CUF)

    • Census Edited File (CEF)

    • Decennial Response File (DRF)

  • ISR will receive the performance management reports from UTS.



7. Risks/Limitations

The following sections present the risks and limitations that pertain to the 2018 End-to-End Census Test ISR Operational Assessment.

7.1. Risks

  • ISR has limited time and resources to devote to analyzing data and writing a report for this operational assessment.


  • If 2018 End-to-End Census Test results are not delivered on schedule, analysis of the results could be delayed, which could affect the ability to make changes in time for the 2020 Census.


7.2. Limitations

  • The 2018 End-to-End Census Test will be a dress rehearsal for the ISR operation, as well as for most of the 2020 Census operations. The ISR operation will be conducted only in Providence County, Rhode Island. Although this site aligns with national demographic characteristics, the test will not take place in the national advertising environment planned for the 2020 Census, which could limit the usefulness of any load estimates based on the results of this report.



8. Division Responsibilities

The following divisions will contribute to the completion of the 2018 End-to-End Census Test Internet Self-Response Operational Assessment:

DCMD:

  • Coordinate review of analysis report.

  • Ensure data are available to all involved parties.

DSSD:

  • Receive response data and paradata from Census Data Lake (CDL) for analysis.

  • Generate numbers for analysis report.

  • Analyze paradata.

  • Write the analysis report.



9. Milestone Schedule

| Activity | Finish date |
| Self-Response data collection begins | March 16, 2018 |
| USPS Delivers Initial Mailout Packages | March 16, 2018 |
| USPS Delivers Second Contact Panel Mailout Packages (First Reminders) | March 20, 2018 |
| USPS Delivers Third Mailout Contact | March 30, 2018 |
| USPS Delivers Fourth Mailout Contact | April 12, 2018 |
| USPS Delivers Fifth Mailout Contact | April 23, 2018 |
| Census Day | April 1, 2018 |
| Self-Response data collection ends | August 7, 2018 |
| Begin data analysis | TBD |
| Preliminary results available | TBD |
| Draft assessment presented to DROM | TBD |
| Assessment report finalized and released | TBD |



10. Review/Approval Table

| Role | Approval Date |
| Author's Division Chief (or designee) | |
| Decennial Census Management Division Assistant Division Chief for Internet Self-Response | |
| Decennial Research Objectives and Methods (DROM) Working Group | |
| Decennial Census Communications Office (DCCO) | |




11. Document Revision and Version Control History

| Version/Editor | Date | Revision Description | IPT Chair Approval |
| v. 1.01 / Erin Love | 9/5/2017 | First Draft | |
| v. 1.2 / Randall Neugebauer | 6/28/2018 | Revisions from a DROM working group review | |
| v. 1.3 / Randall Neugebauer, Miranda Chung | TBD | Revisions from Maryann Chaplin's review | |
| v. 1.4 / Jason Machowski, Miranda Chung | TBD | Revisions from Quality Process Reviewer | |




12. Glossary of Acronyms

| Acronym/Abbreviation | Term |
| CQA | Census Questionnaire Assistance |
| CQS | Census Quality Survey |
| DCMD | Decennial Census Management Division |
| DRF | Decennial Response File |
| DROM | Decennial Research Objectives and Methods Group |
| ECaSE-OCS | Enterprise Census and Survey Enabling – Operational Control System |
| FPD | Forms Printing and Distribution Operation |
| ISR | Internet Self-Response |
| MAF | Master Address File |
| NCT | National Census Test |
| RPO | Response Processing Operation |



13. References


Bates, N., Vines, M., Virgile, M., Walejko, G., Hagedorn, S., McCaffrey, K., Otmany, J.F., (2017), “2020 Census Research and Testing: 2015 Census Test of Digital Advertising and Other Communications in the Savannah DMA,” U.S. Census Bureau. Decennial Statistical Studies Division 2020 Census Program Internal Memorandum Series: 2017.14.i. May 17, 2017.


Bentley, M., (2016), “2020 Research and Testing: 2014 Census Test Results for Optimizing Self-Response,” U.S. Census Bureau. Decennial Statistical Studies Division 2020 Census Program Internal Memorandum Series: 2016.10.i. June 19, 2016.


Bentley, M., Hill, J., Reiser, C., Stokes, S., Meier, A., (2011), “2010 Census Quality Survey,” U.S. Census Bureau. Decennial Statistical Studies Division 2010 Census Planning Memorandum Series: No. 165 January 9, 2012.


Boone, T. (2017), “2017 Census Test Preliminary Findings,” U.S. Census Bureau. Decennial Census Management Division


Coombs, J., Lestina, F., Phelan, M., (2017), “2016 Census Test Optimizing Self-Response, Language Services, Race and Ethnicity, and Relationship Experiment Analysis Report,” U.S. Census Bureau. Decennial Statistical Studies Division 2020 Census Program Internal Memorandum Series: 2017.22.i. September 8, 2017.


Decennial Statistical Studies Division, (2014), “2020 Research and Testing: 2012 National Census Test Contact Strategy Results; Optimizing Self Response,” U.S. Census Bureau. DSSD 2020 Decennial Census R&T Memorandum Series: #E-04. November 6, 2014.


Decennial Census Programs (2016), “2020 Census Business Solution Architecture,” U.S. Census Bureau. 2020 Census Program Internal Memorandum Series: 2016.06. May 25, 2016.


Phelan, J., (2016), “2020 Research and Testing: 2015 National Content Test Optimizing Self-Response Report,” U.S. Census Bureau. Decennial Statistical Studies Division 2020 Census Program Internal Memorandum Series: 2016.57.i. November 22, 2016.


U.S. Census Bureau (2016), “2020 Census: Outcomes of the 2016 Site Test,” November 16, 2016



1 The most common enumeration method by percentage of households is self-response (TEA 1), where materials will be delivered to each address through the mail, and enumeration data is expected to be returned or submitted by a respondent. Residents receive mail from Census and can respond via internet, telephone, or mail.

2 Prior to the 2018 End-to-End Census Test, the self-response contact strategies contained the dates on which the United States Postal Service mailed the materials to respondents. For 2018, the self-response contact strategy contains in-home delivery dates. In-home dates align better with expected workloads for ISR, CQA, and systems that react to respondent actions.

3 Outages are defined as any period of time that the ISR questionnaire is unavailable to the respondent.

4 The dashboard is the starting point for respondents prior to accessing the demographic questions.

5 Edits are included on every screen in the ISR instrument. There are two types of edit messages in the ISR instrument: soft edits and hard edits. Hard edits are included on screens where the information is required, for example the Non-ID address screens and the name screen. Soft edits are included on all other screens. Some screens have only one edit message, while others have up to three. Which edit was triggered can be differentiated using paradata.

6 Insufficient partial: a respondent breaks off before providing a popcount. Sufficient partial: a respondent breaks off after providing a popcount. Submits: a respondent navigates through the entire survey and “submits” their response. This results in the respondent receiving confirmation of their questionnaire submission. Self-reported vacants: a respondent indicated that no one was living at the housing unit associated with the ID that was entered on the login screen. The respondent is also asked to provide a “vacancy” reason.

7 The following devices can be used to respond online: desktops, phones, and tablets.

8 Internet responses are those responses submitted by internet or phone (CQA).
