2016 Census Test

Goals, Objectives, Success Criteria (GOSC) and Research Questions

OMB: 0607-0989

Draft as of December 1, 2015

Operation                        Approving Official
Geographic Operations            Evan Moffett
Content and Forms Design         Jessica Graber
Language Services                Jessica Graber
Communications                   Tasha Boone
Internet Self-Response           Jessica Graber
Non-ID Processing                Evan Moffett
Nonresponse Followup             Maryann Chapin
  (includes NRFU Production, QC, and Administrative Records)
Field Infrastructure             Alexa Jones-Puthoff
IT Infrastructure                Andrea Brinson and Pete Boudriault
Approving all of the above       Deirdre Bishop

(Approving Official signatures and dates approved are not yet recorded in this draft.)

Capture cost for every option.

Test Focus

Integrate Self-Response and Nonresponse Followup operations, include components of reengineered quality control objectives for Nonresponse Followup, finalize adaptive design and use of administrative records and third-party data, and test non-Roman-character languages (Chinese and Korean) in all modes.

Census Day

April 1, 2016

Scope and Limitations

The 2016 Census Test results will be based on housing units selected from particular local areas and cannot be generalized to the entire nation. The results do not predict national trends or rate estimates expected in the 2020 Census.

Operations


Support operations for the Census Test have no test objectives or research questions.

Geographic Programs - (support operation)

Content and Forms Design

Language Services

Paper Processing - (support operation)

Communications - (support operation)

Internet Self-Response

Non-ID Processing

Nonresponse Followup

Response Processing - (support operation)

Service Center - (support operation)

Field Infrastructure - (support operation)

IT Infrastructure - (support operation)

Forms Printing & Distribution - (support operation)

Decennial Logistics Management – (support operation)

Site Selection

Assumptions

Include 200,000 to 250,000 housing units from each selected area, for a total of no more than 500,000 housing units. (Approximately 60,000 NRFU cases per site are needed to test staffing ratios, and 6,000 cases per site to test NRFU reinterview.) The site will comprise a contiguous area within each location. More than one site is preferred to ensure a variety of situations for NRFU optimization. Block groups will be the basis of selection. (4,000 cases for response validation and a 24,500-case Coverage Reinterview workload.)

Site Selection Criteria

Urban setting (for field)

Language Diversity (to represent limited English proficiency)

Hard to Count Population

Site Selection Decision

The Census Bureau will conduct a 2016 Census Test in selected areas of Harris County, Texas and Los Angeles County, California. Each site includes approximately 225,000 housing units.


Operations with Test Objectives and Research Questions

Each operation below is documented in terms of Goals, Objectives, Success Criteria (which should be quantifiable), and Research Questions (identifying which are implementation versus research questions).

Content and Forms Design (Jessica Graber and Jenny Kim)

Test methods to create an accurate household roster.

  • Determine whether the presentation of the residence rule or the collection of the population count improves within-household coverage.

  • Test alternative versions of the undercount coverage questions in an effort to balance respondent burden with data quality.

  • Test alternative versions of the overcount coverage questions, in an effort to balance respondent burden with data quality.

  • The coverage reinterview shows that a particular roster collection method creates the most accurate roster.

  • The coverage reinterview shows that a particular format of undercount probes creates the most accurate roster.

  • The coverage reinterview shows that a particular format of overcount probes creates the most accurate roster.

  • Does collecting a household population count improve within-household coverage?

  • Does a combined undercount question negatively affect within-household coverage?

  • Does varying the presentation of the overcount probes based on household size negatively affect within-household coverage?

  • Does the use of household-based overcount questions that combine the probes by type negatively affect within-household coverage?

Test updates to the non-relative relationship response categories

  • Test alternative versions of relationship response categories, based on previous cognitive testing results, to minimize respondent confusion on the types of non-relative relationships to the householder.

  • There is no data quality loss due to deletion of certain non-relative response categories.

  • Does the deletion of "roomer or boarder" and/or "housemate/roommate" have a negative impact on respondent reporting of non-relative relationships?

Evaluate the distribution of the Black population for two variations of the race and ethnicity question: Internet only

  • Black or African Am. (control)

  • Black or African Am. (test)

  • Evaluate whether the estimate of the Black population in the test version is equal to, less than, or greater than the control version.

  • Data for the distribution of the Black population are received.

  • Do the estimates of the major OMB group of Black or African American differ between the control and the test versions?

  • Do the estimates for the detailed checkboxes (African American, Jamaican, Haitian, Nigerian, Ethiopian, Somali, and a combined tally of the write-ins) differ between the control and the test versions?

  • Does the distribution of the other major race and ethnicity groups (White, Hispanic, Asian, American Indian or Alaska Native, Middle Eastern or North African, Native Hawaiian or Other Pacific Islander, and Some Other Race) differ between the control and the test versions?

Language Services (Jessica Graber and Jenny Kim)

Test contact strategies in non-English/non-Spanish languages (Chinese, Korean).

  • Test alternative contact strategies to encourage non-English/non-Spanish speaking respondents to self respond.

  • Non-English/non-Spanish contact materials encourage Limited English Proficiency (LEP) households to respond.

  • Does providing mail materials in non-English/non-Spanish languages (Chinese, Korean) increase self response from LEP households, thereby reducing the NRFU and TQA workloads?

  • Which variations of mail materials in non-English/non-Spanish languages (Chinese, Korean) increase self response from LEP households, thereby reducing the NRFU and TQA workloads?

Assess utilization of non-English/non-Spanish (Chinese, Korean) data collection instruments: Internet, paper, NRFU.




  • Assess response via non-English/non-Spanish data collection instruments: Internet, paper, NRFU.

  • Data collected in non-English/non-Spanish language via internet, paper and NRFU are captured and processed.


  • Will respondents use non-English/non-Spanish self-response options?

  • Will enumerators use non-English/non-Spanish options on the NRFU instrument?

Internet Self-Response (Jessica Graber and Jane Ingold)

Test the impact of alternate language in the mail materials on response.

  • Determine if the content of the mail materials (letters and postcards) impacts the response rate.


  • Rates of response are higher for those receiving the “nicer” materials. (“Nicer” materials use softer language and state the benefits of responding rather than the penalty if you do not respond.)

  • Does the content of the mail materials impact survey response?

Test the impact of including a foreign language brochure in the mailing package


  • Determine whether the addition of a foreign language brochure has an impact on response rates.


  • Rates of response are significantly higher for respondents that receive a brochure.


  • Does the addition of a brochure increase response rates overall?

  • Does the brochure have more of an impact when provided in place of the letter alone, or in place of the letter but with the survey URL provided on the envelope?

Test the impact of including a Frequently Asked Questions insert in the mailing package

  • Determine whether an FAQ insert has an impact on response rates.

  • Rates of response are significantly higher for respondents that receive the insert.

  • Does the addition of an FAQ insert increase response rates overall?

Non-ID Processing (Evan Moffett and Frank McPhillips)

Provide an option for respondents to participate in the 2016 Census Test without a Census ID.

  • Implement Non-ID Processing during the 2016 Census Test.

  • Consistent with results from the 2013-2015 Census Tests, > 80% of Non-ID responses match to a valid address record in the Census universe during real-time processing (assuming address characteristics in the 2016 Census Test sites are similar to those in previous tests).

  • Implementation

Determine the utility of the CARRA Response Validation methodology for response data that results from Non-ID Processing.

  • Implement a response validation workflow for the 2016 Census Test.

  • Validation of a sample of Non-ID respondents’ data via a Reinterview operation.

  • To what extent does the CARRA Response Validation methodology accurately predict the presence or absence of fraud for response data that results from Non-ID Processing?

Implement methods to increase the match and geocode rate of Non-ID cases.

  • Implement a non-automated method to increase the match and geocode rate of Non-ID cases (e.g., Manual Non-ID Processing).

  • While one-day turnaround will not be required for cases delivered to NPC each day during the first few weeks of self-response (i.e., peak time), there will be an attempt to complete all outstanding manual processing before the first NRFU cut, and to keep up with daily turnaround once NRFU starts. This will simulate a 2020 environment in which Non-ID processing is reducing the NRFU workload as quickly as possible.

  • Manual processing catches up with the backlog of cases not resolved by automated means (real time or batch) by the time the initial cut is taken to establish the NRFU universe. This will give us a measure of how much Non-ID can reduce the NRFU workload before the operation even starts.

  • What additional matches can be derived during manual Non-ID processing that reduce the NRFU workload (e.g., how many, their geographic distribution, address characteristics, etc.)?

  • What additional/updated geocodes can be derived during manual processing?

Implement a methodology to validate in-office those cases that, in 2010, were sent to Field Verification.

  • The Office-Based Address Verification (OBAV) operation shall verify the existence and census block location of all eligible addresses from Non-ID processing. Similar to the manual Non-ID processing operation, while one-day turnaround is not required during peak self-response, all outstanding cases must complete the office-based check prior to the start of NRFU. The idea is to simulate a 2020 situation in which OBAV is attempting to reduce the field verification workload (a type of NRFU assignment).

  • 75% of eligible Non-ID cases are verified using the OBAV methods.

  • How many of the Non-ID cases eligible for address verification could be verified in an office-based operation as opposed to fieldwork?

Nonresponse Followup (Maryann Chapin)


Reengineered Field Operations -- Maryann Chapin



Continue refining reengineered field operations.


Improve efficiency and effectiveness of staff and workload management.

  • Incorporate streamlined contact procedures for multi-units


  • We successfully contact multi-unit addresses and minimize respondent burden.


  • How can we streamline the contact procedures for multi-units in order to incorporate the optimization strategy? (Technical Implementation)


Continue refining reengineered field operations.


Improve efficiency and effectiveness of staff and workload management.

  • Incorporate procedures and questionnaire enhancements for situations other than a face-to-face contact with a household member or proxy.

  • COMPASS question paths guide the user through special situations with minimal training required. No more than one day of in-class training.

  • What procedures and questionnaire enhancements need to be in place for situations other than a face-to-face contact with a household member or proxy? These include situations such as apartment labeling problems, non-housing-unit situations, refusals, and in-movers and out-movers. (Technical Implementation)

Continue refining reengineered field operations.


Improve efficiency and effectiveness of staff and workload management.

  • Further investigate the staffing ratios for Local Supervisor of Operations (LSO) to Field Management of Operations (FMO) and enumerators to LSOs through the testing of two different staffing ratio scenarios:

  • Scenario 1: 15 LSOs per FMO; 20 enumerators per LSO.

  • Scenario 2: 15 LSOs per FMO; 30 enumerators per LSO.

  • Staffing ratios used in the test are validated as feasible.

  • Under the two different staffing ratios, what are the cost/quality tradeoffs associated with each staffing scenario?


Note: Evaluation criteria are under discussion but may include attrition rates, enumerator productivity, LSO cost per enumerator, workload, number of incoming/outgoing calls, event completion, response accuracy, and response time. A brief sketch of the spans of control implied by each scenario follows this note.
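
For orientation only, the following minimal Python sketch computes the enumerators per FMO and the approximate NRFU caseload per enumerator implied by each staffing scenario. The 60,000 NRFU cases per site comes from the Site Selection assumptions; the per-enumerator caseload is illustrative arithmetic, not a planned staffing target.

    # Illustrative arithmetic only: spans of control implied by the two staffing
    # ratio scenarios, using the ~60,000 NRFU cases per site from the Site
    # Selection assumptions. Not a statement of actual planned staffing.

    NRFU_CASES_PER_SITE = 60_000

    scenarios = {
        "Scenario 1": {"lso_per_fmo": 15, "enumerators_per_lso": 20},
        "Scenario 2": {"lso_per_fmo": 15, "enumerators_per_lso": 30},
    }

    for name, ratio in scenarios.items():
        enumerators_per_fmo = ratio["lso_per_fmo"] * ratio["enumerators_per_lso"]
        # If a single FMO's span covered the whole site, each enumerator would
        # carry roughly this many NRFU cases over the course of the operation.
        cases_per_enumerator = NRFU_CASES_PER_SITE / enumerators_per_fmo
        print(f"{name}: {enumerators_per_fmo} enumerators per FMO, "
              f"~{cases_per_enumerator:.0f} NRFU cases per enumerator")

Under these assumptions, Scenario 1 implies roughly 300 enumerators per FMO (about 200 cases each) and Scenario 2 roughly 450 enumerators per FMO (about 133 cases each).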

Continue refining reengineered field operations.


Improve efficiency and effectiveness of staff and workload management.

  • Incorporate lessons learned from the 2015 Census Test results, including such things as:

    1. Addition of validation rules on certain fields, such as e-mail address, ZIP code, etc.

    2. Updating help screens and descriptions.

    3. Changing the way certain behind-the-scenes variables are set in the course of the interview, such as UNIT STAT, the status assigned to a housing unit.

  • Lessons learned from the 2015 Census Test are incorporated.

  • Can lessons learned from 2015 Census Test be incorporated? (Technical Implementation).




Continue refining reengineered field operations.


Improve efficiency and effectiveness of staff and workload management.

  • Update contact situations as needed (for example, incorporation of appointments)

  • Updated contact situations are incorporated as needed.

  • Can updated contact situations be integrated? (Technical Implementation).

Continue refining reengineered field operations.


Improve efficiency and effectiveness of staff and workload management.

  • Incorporate recruiting reporting and functionality into MOJO.

  • Recruiting functionality is incorporated into MOJO.

  • Can recruiting functionality be incorporated into MOJO?

Continue refining reengineered field operations.


Improve efficiency and effectiveness of staff and workload management.

  • Incorporate further reporting, roll-ups to higher levels (e.g., AOSC, RCC, HQ), and dashboard capabilities into MOJO.

  • Reporting and dashboard capabilities are incorporated into MOJO.

  • Can reporting and dashboard capabilities be incorporated into MOJO?

Continue refining reengineered field operations.


Improve efficiency and effectiveness of staff and workload management.

  • Further refine and expand alerting capabilities (production and QC).

  • Alerting capabilities are expanded and refined.

  • Can alerting capabilities be expanded and refined?

Validate assumptions associated with key NRFU cost parameters.

  • Capture additional data points in the 2016 Census Test that enable an understanding of NRFU cost parameters such as late self-response rates (prior to and after the start of NRFU), NRFU workload completion rates by contact attempt, enumerator efficiency, and staffing ratios.


  • Data resulting from the 2016 Census Test produce data points for key NRFU cost parameters.


Building on 2015 Census Test experiences, evaluate the impacts on cost and quality of a NRFU contact strategy that allows for a maximum of six contact attempts (proxy eligible on the third attempt), in combination with added enumeration application (COMPASS) capabilities and enhancements (such as handling noninterviews and proxies) that could impact enumerator productivity and efficiency. (A sketch of the contact-attempt logic follows this block.)

  • Collect data associated with enumerator productivity, efficiency, and case outcomes to enable an assessment of a six-contact-attempt strategy in combination with enhanced enumeration application capabilities.

  • Collect data on case outcomes to evaluate the unresolved rate (cases that reach maximum contact attempts without a successful respondent or proxy provided enumeration) using a six contact attempt strategy.

  • The 2016 Census Test data analysis of enumerator efficiency and productivity parameters – when applied to the 2020 Census cost model – enables an understanding of the projected NRFU costs against cost avoidance targets.

  • A reduction in the unresolved rate (compared, in general, to the 2015 Census Test unresolved rate where the number of contact attempts varied by block group).

  • In the 2016 Census Test sites, what impact does an across the board maximum of six contact attempts have on reducing the unresolved rate?

  • In the 2016 Census Test sites, what effects do enhancements to the enumeration application and an across the board maximum of six contact attempts (proxy eligible on the third attempt) have on enumerator efficiency and productivity?
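
As a companion to these questions, the following minimal Python sketch illustrates the case-outcome logic implied by the six-contact-attempt strategy: a household interview resolves a case, a proxy interview counts only from the third attempt onward, and a case that exhausts six attempts without an acceptable interview is unresolved. The names and structure are illustrative assumptions, not the COMPASS implementation.

    # Minimal sketch of the six-contact-attempt NRFU strategy described above.
    # Illustrative only; not the actual COMPASS case-management logic.

    MAX_ATTEMPTS = 6
    PROXY_ELIGIBLE_ATTEMPT = 3

    def resolve_case(attempt_outcomes):
        """attempt_outcomes: one outcome per contact attempt, e.g.
        'household_interview', 'proxy_interview', or 'no_contact'."""
        for attempt, outcome in enumerate(attempt_outcomes[:MAX_ATTEMPTS], start=1):
            if outcome == "household_interview":
                return "resolved_household"
            # A proxy response only counts once proxies become eligible.
            if outcome == "proxy_interview" and attempt >= PROXY_ELIGIBLE_ATTEMPT:
                return "resolved_proxy"
        # Maximum attempts reached without an acceptable interview.
        return "unresolved"

    # Example: no contact twice, then a proxy interview on the third attempt.
    print(resolve_case(["no_contact", "no_contact", "proxy_interview"]))  # resolved_proxy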


Administrative Records – Tom Mule



Test sending additional mail contacts to administrative record vacant and administrative record occupied cases.






  • For administrative record occupied and vacant cases, test switching modes to continue contacting the unit by mail to try to obtain a self-response before using administrative records.




  • Test if we can obtain more self-responses for administrative record cases by NRFU visits.



  • How many self-responses were received after conducting additional mailings after the start of NRFU?

  • For how many cases were we able to use self-responses instead of administrative record occupied or vacant information?

  • For self-responses after NRFU started, how do the counts and characteristics compare to the administrative record information already available?

Learn more about USPS Undeliverable as Addressed (UAA) processing.

  • See if we can observe USPS Undeliverable as Addressed delivery and processing: observe what postal carriers do and what workers at the processing facility do. We would have to approach USPS about arranging these observations, and we would also need an MOU with USPS.

  • The Census Bureau is confident in depending on the delivery and processing by USPS.

  • Here we are looking for qualitative results about the determination of UAAs based on delivering census questionnaires. Depending on the observations or focus groups, this can be a qualitative summary used to help with future planning.

Test using Supplemental Nutrition Assistance Program (SNAP) data in administrative record processing.


(Note: The Fitness For Use Team is charged with acquiring state-level administrative records files and assessing the files from a coverage and quality perspective. As a result, this is not a major objective of the 2016 Census Test. However, if the data are available, we will include them in the test.)

  • Test using SNAP data if an agreement is in place and the data are available.

  • This is the first attempt to use SNAP data in a production setting. We can learn about implementation and results to help determine how to use SNAP data in future tests and in 2020.

  • How did SNAP data contribute to the building of households to be utilized during the 2016 test?

  • What SNAP data were available for the 2016 test area? How close to Census Day were the records?


Reduce the NRFU workload by using predictive models to identify vacant addresses and remove them from the NRFU workload prior to any contact attempts, and understand the differences between the vacant predictions and what is found in the field. (An illustrative sketch of such a removal rule follows this block.)

  • Based on USPS Undeliverable as Addressed detailed reason codes, corroborated with the lack of comparable address information from other administrative records and third-party data sources, remove addresses predicted to be vacant from the NRFU workload.

  • Based on seeding of the 2016 Census Test NRFU workload with addresses identified as vacant, but retained in the workload, quantify the distribution of fieldwork status outcomes (occupied, vacant, delete) for the cases identified as administrative records vacant.

  • Removal of vacant addresses from the overall NRFU workload.

  • Collection of data and completed analysis to quantify the differences between the vacant predictions and what was found in the field, associated with the identification and removal of vacant addresses from the NRFU workload.

  • In the 2016 Census Test sites, at what rate can the NRFU workload be reduced through the removal of vacant addresses?

  • In the 2016 Census Test, at what rate do we identify an address as vacant based on administrative records and third-party data but receive a response?
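
The following minimal Python sketch illustrates one form such a removal rule could take, assuming (hypothetically) that an address is treated as administrative records vacant only when a USPS UAA detailed reason code indicates vacancy and no administrative record or third-party source shows comparable address activity. The code values and field names are placeholders, not the production model.

    # Minimal sketch of an administrative records "vacant" removal rule for the
    # NRFU workload, as described above. Reason-code values and field names are
    # placeholders for illustration, not the actual production model.

    VACANT_UAA_CODES = {"VACANT"}  # assumed UAA reason codes indicating vacancy

    def predicted_vacant(case):
        """case: dict with 'uaa_code' (USPS UAA detailed reason code or None) and
        'has_admin_activity' (True if any administrative record or third-party
        source shows comparable activity at the address)."""
        return case["uaa_code"] in VACANT_UAA_CODES and not case["has_admin_activity"]

    def split_workload(nrfu_cases):
        """Partition the NRFU workload into cases removed as predicted vacant
        and cases retained for fieldwork."""
        removed = [c for c in nrfu_cases if predicted_vacant(c)]
        retained = [c for c in nrfu_cases if not predicted_vacant(c)]
        return removed, retained

    # Example: one case removed as predicted vacant, one retained.
    cases = [
        {"id": 1, "uaa_code": "VACANT", "has_admin_activity": False},
        {"id": 2, "uaa_code": None, "has_admin_activity": True},
    ]
    removed, retained = split_workload(cases)
    print(len(removed), len(retained))  # 1 1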

Reduce the NRFU workload by using predictive models to identify occupied units.

  • Based on seeding of the NRFU workload with addresses identified as administrative records occupied, but subject to the six contact attempt strategy, produce a measure of the difference between the administrative record population prediction and what was collected from the respondent.

  • Based on administrative records and third-party data, used in accordance with data use agreements, remove addresses predicted to be occupied from the NRFU workload. For the 2016 Census Test, implement three identifications/removals: at the beginning, middle, and end of the NRFU fieldwork period.

  • Collection of data and completed analysis with identification and removal of occupied housing units from the NRFU workload.

  • Implementation of a modified predictive modeling approach to identify administrative record occupied cases where the administrative records population count and the population count resulting from the census enumeration are more likely to agree.

  • In the 2016 Census Test sites, at what rate can the NRFU workload be reduced through the removal of occupied addresses?

  • In the 2016 Census Test sites, at what rate did we identify an address as occupied based on administrative records and third-party data for which we have a respondent provided enumeration where the population count differs from the administrative records population count?


Reengineered Quality Control -- RJ Marquette



Collect data for improvements to the QA program in order to reduce field/telephone costs in future tests and the 2020 Census.

  • Use of GPS and interview paradata for QC (e.g., enumerator location, length of interview, time of interview, etc.). (An illustrative paradata-flagging sketch follows this block.)

  • Centralized case resolution.

  • We are able to demonstrate that GPS and interview paradata are good indicators of cases that do or do not need to be reinterviewed. We will use this information to improve the approach for the 2017 test and beyond.

  • NPC is able to complete discrepant case resolution, instead of having it done in the LCOs as in 2010.

  • Can we use GPS and interview paradata to improve our reinterview sampling and falsification/error detection?

  • Can we save money by moving the discrepant case resolution from the LCOs to the NPC with no reduction in quality?
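
As one illustration of how GPS and interview paradata could inform reinterview selection, the minimal Python sketch below flags a case when the recorded GPS position is far from the case address or the interview was implausibly short. The thresholds and field names are assumptions made for this example, not the actual QC sampling or falsification-detection criteria.

    # Illustrative sketch of using GPS and interview paradata to flag NRFU cases
    # for possible reinterview. Thresholds and field names are assumptions made
    # for this example, not the actual QC criteria.

    import math

    MAX_DISTANCE_METERS = 500      # assumed: GPS fix should be near the case address
    MIN_INTERVIEW_SECONDS = 120    # assumed: very short interviews warrant review

    def haversine_meters(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat/lon points, in meters."""
        r = 6_371_000
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def flag_for_reinterview(case):
        """case: dict with the case address and interview GPS coordinates plus
        the interview duration in seconds."""
        distance = haversine_meters(case["addr_lat"], case["addr_lon"],
                                    case["gps_lat"], case["gps_lon"])
        too_far = distance > MAX_DISTANCE_METERS
        too_short = case["duration_seconds"] < MIN_INTERVIEW_SECONDS
        return too_far or too_short

    # Example: an interview recorded roughly 2 km from the case address is flagged.
    case = {"addr_lat": 29.7604, "addr_lon": -95.3698,
            "gps_lat": 29.7780, "gps_lon": -95.3698, "duration_seconds": 300}
    print(flag_for_reinterview(case))  # True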

Provide a QA program to detect and deter falsification in the 2016 Census Test. (Ties into objectives/success criteria/questions #3)


  • Reinterview functionality within COMPASS and Integration of QC and MOJO (QC design and management).

  • NRFU interviewers collect interview and reinterview data without a negative impact to (a) the NRFU item allocation rates for Race, Hispanic Origin, and Age, and (b) the falsification rates.

  • Can we use NRFU interviewers to conduct the reinterview (with the stipulation that someone cannot reinterview their own work)?

Field Infrastructure (Alexa Jones-Puthoff)

Test training methods and infrastructure to train FLD Decennial staff to effectively conduct NRFU.


  • Evaluate the effectiveness of the enumerator, LSO, and FMO training programs.

  • Implement/Evaluate simulated respondent interview assessment tool.

  • Implement/Evaluate a learning management solution.


  • Enumerators and LSOs demonstrate acquisition of the knowledge and skills needed to perform at the desired level.

  • Number of initial field observations is reduced. 

  • What performance standards are expected for enumerators, LSOs, and FMOs?

  • Training is still delivered. How effective is the new training? Does a simulated respondent interview assessment tool provide a reasonable substitute for initial field observations?

  • What cost savings might be realized by utilizing a simulated respondent interview assessment tool?

Gain experience utilizing multiple third-party vendors in the pre-employment process.

  • Implement third-party fingerprinting.

  • Third-party fingerprinting meets all functional and non-functional requirements.

  • Can third-party fingerprinting integrate effectively with Census operations and systems?







Support Operations

Geographic Programs (Evan Moffett and Carrie Butifoker)

  • The MAF/TIGER system will be updated with the results of the 2016 Census Test.

Paper Processing (Andrea Brinson and Mark Matsko)

  • Forms, Printing, and Distribution

  • Data Capture and Integration

Integrated Partnerships and Communications (Tasha Boone)

  • Partnership Support - Conduct partnership surges in hard-to-count tracts.

Response Processing (Andrea Brinson and Chuck Fowler)


Decennial Service Center (Andrea Brinson, Mark Markovic, Renae Wallace, Rusty Richards)

  • Enumerator Help Desk -- A centralized process to accept, track, and resolve problems and issues from field staff.

IT Infrastructure - CEDCaP (Pete Boudriault, Doug Curtner, Justin McLaughlin)

  • The internet self-response and real-time Non-ID processing systems will be hosted in a commercial, FedRAMP-certified cloud - Gain experience moving to new IT infrastructure (cloud computing).

  • Stand up a program-level data repository for administrative records - Gain experience moving to new IT infrastructure (administrative records).

  • Alternative NRFU Laptop Support - Assess laptop alternatives currently available in the marketplace that could be explored by the agency for use by LSOs.

  • Provide a FedRAMP-certified cloud - The internet self-response and real-time Non-ID processing systems will be hosted in a commercial, FedRAMP-certified cloud.

  • Implement device as a service - Gain experience moving to new IT infrastructure (services).

  • Utilize an enterprise development, integration, and test environment (EDITE) - Gain experience using an enterprise shared service.

  • The self-response data capture systems will support the language options for the Census Test self-response - Gain experience moving to new IT infrastructure.

Cost and Quality (Andreana Able)

  • Identify and collect relevant quality and cost metrics to facilitate refinements of the 2020 Census design - Assess the cost-effectiveness and quality implications of Self-Response and NRFU activities, as well as the interaction between the two.


