
SUPPORTING STATEMENT – PART A

U.S. Department of Commerce

U.S. Census Bureau

2010 CENSUS PROGRAM FOR EVALUATIONS AND EXPERIMENTS

OMB Control Number 0607-XXXX



1. Necessity of the Information Collection

The U.S. Census Bureau began development of the 2010 Census Program for Evaluations and Experiments (CPEX) in August 2006. Projects under the umbrella of the 2010 CPEX will guide future census design, as well as benefit other ongoing programs conducted by the Census Bureau, such as the American Community Survey. Furthermore, Title 13, United States Code, Section 141, directs the Secretary of Commerce to conduct a decennial census of the population, and Section 193 authorizes the Secretary to conduct surveys to gather supplementary information relating to the census.


As with previous decennial censuses dating back to 1950, the Census Bureau conducts a formal program of evaluations that assess the census and of experiments that examine methodologies, techniques, and strategies with the potential to improve the next decennial census. For experimental studies, the actual decennial census is required because it provides the best (or only) conditions for learning the true effects of new ideas within the context of national advertising, outreach partnerships, and other activities that occur only during an actual census.


The 2010 CPEX includes four experiments and more than 20 evaluations. This supporting statement covers all four experiments and four of the evaluations. Of the remaining evaluations in the 2010 CPEX, some do not involve data collections and do not require OMB approval; others are still in development and will be submitted for OMB clearance at a later time. The four experimental studies are as follows: the 2010 Alternative Questionnaire Experiment (AQE), the 2010 Nonresponse Followup (NRFU) Contact Strategy Experiment, the 2010 Deadline Messaging (DM)/Compressed Schedule (CS) Experiment, and the 2010 Confidentiality/Privacy Notification (C/PN) Experiment. The four evaluations are as follows: the 2010 AQE Reinterview Evaluation, the 2010 Content Reinterview Evaluation, the 2010 Alternative Group Quarters (GQ) Questionnaire Evaluation, and the 2010 Interactive Voice Response (IVR) Customer Satisfaction Survey Evaluation. The Census Bureau identified the need to include the IVR Customer Satisfaction Survey Evaluation in this generic clearance package after the publication of the pre-submission notice in the Federal Register on September 24, 2008. The Census Bureau and the Office of Management and Budget (OMB) have discussed and agreed on the inclusion of the IVR Customer Satisfaction Survey Evaluation in this package.


This request is for a generic clearance, which seeks pre-approval for a block of burden hours. The scope of the activities is defined in this document. The details presented for each data collection are as complete and accurate as possible at this time; where plans for a data collection are still under development, the descriptions indicate so. All sample sizes presented in this document are preliminary, but they represent the sizes currently under consideration by the Census Bureau. In some cases, sample size ranges are presented. Challenges associated with sample selection are addressed below and in Part B: Collections of Information Employing Statistical Methods. This generic clearance request will be followed by individual clearance (IC) requests for each of the activities.


A. Alternative Questionnaire Experiment (AQE). The AQE has several objectives. The overall goal of the study is to continue efforts toward a user-friendly mailout questionnaire that respondents can complete accurately. Specifically, the Census Bureau would like to determine ways to reduce item nonresponse to the race and Hispanic origin questions. This experiment will test various designs for how the Census Bureau asks respondents to provide their data on the paper form. The experiment contains a total of 19 panels (17 experimental treatment panels and two control panels).


A large focus of the 2010 AQE is on methods to improve the completeness and accuracy of respondent reporting of race and Hispanic origin data. A total of 15 panels are devoted to race and Hispanic origin research. Seven panels will be devoted to potential refinements of the current separate-question approach to collecting race and Hispanic origin data. In general, these treatments include providing different examples for various response categories in each question; modifying the checkbox response for the “Black, African Am., or Negro” category; and permitting respondents to report multiple origins. Four panels will test the performance of a combined race and Hispanic origin question (as noted, these data are currently collected through two separate questions: one on race and one on Hispanic origin). Finally, four panels test questionnaire changes aimed at limiting the use of the term “race” in the race question, as well as questionnaire designs intended to better convey that the Asian and Pacific Islander checkbox response categories are part of the broader OMB race categories.


Other topics covered by the AQE include an examination of true residence status by collecting additional information on the initial census return pertaining to household coverage (one panel) and an examination of the combined effects of all questionnaire changes made since Census 2000 (one panel). The Census Bureau accomplishes the latter by sending the Census 2000 form to a sample of respondents.


The total targeted sample size for the 2010 AQE is 560,000 housing units. The targeted sample size for each panel dealing with race and Hispanic origin is 30,000. A targeted sample size of 30,000 is planned for the panel that examines the collection of additional address information on the mail form to effectively identify census coverage errors for followup. The targeted sample size for the panel examining the combined effects of all questionnaire changes made since Census 2000 is 20,000. For the control panels, the targeted sample size is 30,000 each.
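As a quick cross-check, the sketch below (illustrative Python, using only the panel counts and targets stated in this section) confirms that the panel targets sum to the 560,000 total.

    # Illustrative cross-check of the AQE sample targets stated above.
    race_hispanic_panels = 15 * 30_000   # race and Hispanic origin treatment panels
    coverage_panel       = 1 * 30_000    # additional-address/coverage panel
    content_panel        = 1 * 20_000    # Census 2000 vs. 2010 content panel
    control_panels       = 2 * 30_000    # two control panels

    total = race_hispanic_panels + coverage_panel + content_panel + control_panels
    assert total == 560_000              # matches the stated AQE total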


The households in the first control panel will receive the standard 2010 Census form. The primary motivation for designating a separate control panel (rather than using responses from all housing units in the mailout/mailback universe receiving the standard 2010 Census form) stems from the replacement questionnaire mailing strategy being employed for the 2010 Census. The mailing of replacement questionnaires in the 2010 Census will vary across the country. For some areas, the Census Bureau will implement a blanket mailing of replacement questionnaires (that is, all housing units in a given area receive a replacement questionnaire). For other areas, the Census Bureau will use a targeted mailing of replacement questionnaires (that is, some housing units receive a replacement questionnaire while others do not). Finally, there will be some areas in which none of the housing units receive a replacement questionnaire. Cases in this separate control panel (along with cases in each experimental treatment panel) will receive a targeted mailing of replacement questionnaires. Specifically, the Census Bureau will send replacement questionnaires to those in the control (and treatment) panels who have not returned the initial questionnaire. A consistent implementation of initial and replacement mailing strategies is necessary for valid comparisons between treatment and control panels.


The households in the other control panel will have the same initial and replacement mailing strategies; however, their 2010 Census form will not contain the overcount detection item. This panel serves as a bridge between the standard 2010 Census form and the selected race and Hispanic origin panels that also exclude that item (these experimental panels have the overcount detection item removed to accommodate the increased space needs of the experimental race and Hispanic origin items).


The overcount detection question asks, for each household member, whether the person sometimes lives or stays someplace else. This question allows the Coverage Followup (CFU) operation to follow up with households to determine whether a possible overcount occurred. Removing the item keeps the research objectives intact while not increasing CFU workloads. This option removes about 5,000 households from eligibility for the 2010 CFU operation, resulting in the possibility of about 1,600 individuals being double-counted in the census. These 1,600 individuals, however, will be spread randomly across the country and represent about 0.005 percent of the total U.S. population.


The performance of the experimental treatments will be measured in two ways. First, the Census Bureau will compare each treatment panel to the 2010 Census control panel (that is, the standard 2010 Census form, which excludes experimental factors) on mail response rates, item nonresponse rates, and response distributions. Second, the Census Bureau will analyze findings from the AQE Reinterview Evaluation described later in this document.
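For concreteness, a minimal sketch of the two headline comparison statistics follows; the function names and inputs are illustrative, not drawn from any Census Bureau analysis system.

    # Illustrative definitions of the comparison statistics described above.

    def mail_response_rate(returned_forms, mailed_forms):
        # Share of housing units in the panel that mailed back a form.
        return returned_forms / mailed_forms

    def item_nonresponse_rate(blank_or_unusable_answers, total_returned_forms):
        # Share of returned forms with a missing or unusable answer to one
        # item (e.g., the race question); computed per item and compared
        # across treatment and control panels.
        return blank_or_unusable_answers / total_returned_forms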


B. 2010 Nonresponse Followup (NRFU) Contact Strategy Experiment.

The primary objective of this experiment is to understand the effects of changing the number of NRFU contacts in a census environment. This study has the potential to provide large cost savings in future censuses through a possible reduction in NRFU contacts. The goal is to determine whether the Census Bureau can maintain data quality by reducing the number of NRFU contacts.


In recent decennial censuses (including the 2010 Census), enumeration procedures specify that field staff should make up to six contact attempts (three by personal visit, three by telephone) to collect data from households that failed to return their census forms. After six attempts, enumerators can collect data from a proxy respondent (such as a neighbor or a landlord).


The 2010 NRFU Contact Strategy Experiment comprises two treatment panels. For the first treatment panel, enumerators will use a NRFU questionnaire that allows for a maximum of five contacts. The questionnaire for the second treatment panel allows for a maximum of four contacts. Results from the treatment panels will be compared against the control, which consists of all standard 2010 NRFU questionnaires (that is, the enumerator questionnaire allowing for a maximum of six contacts, administered to all non-experimental housing units in the NRFU operation).
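The contact logic being varied can be summarized schematically as follows. This is a simplified illustration of the rules described above, not the Bureau's field procedures; the household.responds call is a hypothetical stand-in.

    # Simplified illustration of the NRFU contact strategy under test.
    # max_contacts is 6 for the control and 5 or 4 for the treatment panels.

    def attempt_enumeration(household, max_contacts):
        for attempt in range(1, max_contacts + 1):
            # Procedures mix personal visits and telephone attempts; here we
            # only track whether any attempt succeeds.
            if household.responds(attempt):      # hypothetical interface
                return "household interview"
        # After the final allowed attempt, data may be collected from a
        # proxy respondent, such as a neighbor or landlord.
        return "proxy interview"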


The Census Bureau is considering various designs for this experiment because of the numerous and complex logistical challenges of implementing it in a decennial environment. Sample sizes for each panel range from 40,000 to 100,000, yielding a total sample size of 120,000 to 300,000.


C. 2010 Deadline Messaging (DM)/Compressed Schedule (CS) Experiment. The goal of the 2010 DM/CS Experiment is to assess two strategies aimed at increasing mail response and/or speed of response. The first strategy is to include a message on various mailing pieces indicating a due date by which the respondent should return the form. The second strategy is to implement a delayed mailing schedule, so that the respondent receives a census form closer to Census Day, April 1 (also referred to as the reference date). The experiment will determine whether the deadline message and compressed schedule strategies increase mail response rates and speed of response in a decennial census environment.


The 2010 DM/CS Experiment comprises seven experimental treatment panels plus a control panel. The control panel uses the standard set of 2010 mailing materials and the standard 2010 Census form sent on the standard mailing schedule. As with the control panels described for the AQE, the primary motivation for designating a separate control panel (rather than using responses from all housing units in the mailout/mailback universe receiving the standard 2010 Census form) stems from the replacement questionnaire mailing strategy being employed for the 2010 Census, which varies across the country (see the discussion under the AQE above). Cases in this separate control panel, along with cases in each experimental treatment panel, will receive a targeted mailing of replacement questionnaires; specifically, the Census Bureau will send replacement questionnaires to those in the control and treatment panels who have not returned the initial questionnaire. A consistent implementation of initial and replacement mailing strategies is necessary for valid comparisons between treatment and control panels.


The compressed schedule treatment panel uses the standard 2010 Census materials, but the materials are sent on a compressed schedule (that is, the advance letter, initial questionnaire package, and reminder postcard are all slightly delayed, to reach the respondent closer to Census Day). The remaining six treatments each test one of three messages; each message is tested using both the standard mailing schedule and the compressed mailing schedule. The targeted sample size for each panel is 20,000, resulting in a total sample size of 160,000.


The three messages that will be tested are as follows:

  1. Deadline Messaging 1 tests a “mild” deadline message. The mild message simply indicates the date by which the respondent should mail back the form.

  2. Deadline Messaging 2 tests a deadline message with “progressive urgency.” The stricter and progressively more urgent message emphasizes the “deadline” date and also provides a reminder that census response is required by law.

  3. Deadline Messaging 3 tests a “NRFU motivation” deadline message. This message reminds respondents that interviewers will come to their home if they do not mail back the form by the indicated due date.


The Census Bureau is determining the feasibility of testing a fourth message in this experiment. The additional message indicates that returning a census form on time will save resources that would otherwise be used to follow up with the respondent via a personal visit interview. As with the three messages presented above, this additional message would be tested with both the standard mailing schedule and the compressed mailing schedule, bringing the total number of experimental panels to nine plus a control panel. The sample size for each additional panel is 20,000, which would increase the overall sample size by 40,000, to 200,000. If the Census Bureau decides to test this fourth message, the experimental design and sample size will reflect this addition in the forthcoming detailed OMB package pertaining to the 2010 DM/CS Experiment data collection. The panel arithmetic for both design options is illustrated below.
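The following sketch (illustrative Python, using the figures stated above) cross-checks the panel counts and sample sizes for both options.

    # Illustrative cross-check of the DM/CS panel counts and sample sizes.
    panel_size = 20_000

    # Base design: compressed-schedule panel + 3 messages x 2 schedules + control.
    base_panels = 1 + 3 * 2 + 1
    assert base_panels * panel_size == 160_000

    # Optional design: a fourth message adds two more panels (both schedules).
    expanded_panels = base_panels + 2
    assert expanded_panels * panel_size == 200_000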


D. 2010 Confidentiality/Privacy Notification (C/PN) Experiment. The 2010 C/PN Experiment will test alternative presentation and placement of privacy and confidentiality messages in the cover letters for the initial and replacement questionnaires. The Census Bureau developed these experimental treatments with the intent of including a message that respondents can more easily understand. At the same time, the content of the message is expanded to meet an overall Census Bureau goal: increasing the transparency of its work for the public. The research goal is to examine differences in response rates and data quality among the treatment panels and the control panel. The inclusion of each message is not expected to decrease mail response rates or data quality.


The control panel will receive the standard set of 2010 mailing materials and the standard 2010 Census form. The cover letters contain the following statement: “Your answers will only be used for statistical purposes, and no other purpose.” As described for the AQE and DM/CS control panels, the motivation for designating a separate control panel stems from the replacement questionnaire mailing strategy being employed for the 2010 Census, which varies across the country. Cases in this control panel, along with cases in each experimental treatment panel, will receive a targeted mailing of replacement questionnaires, sent only to those who have not returned the initial questionnaire. A consistent implementation of initial and replacement mailing strategies is necessary for valid comparisons between treatment and control panels.


A sample of households will receive alternative wording on the cover letters that arrive with their 2010 Census questionnaires, in one of two treatments:

  1. Privacy 1: Tests adding a message to the back of each cover letter explaining that other government agencies may give the Census Bureau additional data about a respondent’s household to improve census results.

  2. Privacy 2: Tests the following revised statement on the front of the cover letter: “Your answers will only be used to produce statistics” in addition to adding the message described in Privacy 1 to the back of the cover letters.


The targeted sample size is 20,000 for each panel, for a total of 40,000. The control panel for the 2010 C/PN Experiment is shared with the control panel for the 2010 DM/CS Experiment; therefore, its sample size is not shown here, to avoid counting it twice.


E. 2010 AQE Reinterview Evaluation and 2010 Content Reinterview Evaluation.

In addition to the 15 AQE panels devoted to race and Hispanic origin research, the Census Bureau will collect data via a followup reinterview, focused on the race and Hispanic origin questions, from a subset of AQE respondents. This reinterview will ask probing questions beyond the various race and Hispanic origin treatments tested in the AQE. For the experimental AQE race and Hispanic origin questions, the data obtained in the probing reinterview will be critical to measuring response bias (a systematic pattern in the difference between respondent answers and the correct response). The AQE Reinterview Evaluation will be administered to two subsamples: the first will be asked probing questions on the design of the separate race and Hispanic origin questions, while the second will be asked probing questions on the design of a combined race and Hispanic origin question.


The Census Bureau will also conduct a Content Reinterview Evaluation to measure simple response variance. The reinterview consists of asking respondents the same set of questions that appear on the Decennial Census Questionnaire (D-1) to determine whether the questions are worded properly to produce consistent responses.


The Census Bureau will combine the 2010 AQE Reinterview Evaluation and the 2010 Content Reinterview Evaluation into one statistical design for purposes of operational efficiency. The overall sample under consideration for the two evaluations is 90,000. The sample size for the 2010 AQE Reinterview Evaluation is approximately 60,000; the sample size for the 2010 Content Reinterview Evaluation is approximately 11,000, though the Census Bureau may increase it to approximately 30,000.


F. 2010 Alternative Group Quarters (GQ) Questionnaire Evaluation.

The purpose of this research is to test whether it is possible to collect enough information on the Individual Census Report, or ICR (used in GQ enumeration), to determine true residence status and avoid an additional followup. By collecting an alternative address for all GQ respondents, this test examines additional ways of correcting duplicates and other erroneous enumerations in the census without a costly followup operation. The address, previously asked only of GQ respondents who are allowed to claim a “usual home elsewhere,” would be asked of everyone. Results will be compared against the control, which consists of the standard 2010 Census GQ ICRs.


This test comprises one treatment with a targeted sample size of 60,000. For this test, researchers will select whole GQs rather than a subsample of residents from many GQs; for example, an entire college dormitory would be in the sample, as opposed to only those dormitory residents who stay in even-numbered rooms. A minimal sketch of this whole-GQ selection appears below.
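Selecting whole GQs is a cluster-sample design. The sketch below is a minimal illustration under simplifying assumptions (equal-probability selection and hypothetical inputs); it is not the Bureau's sampling specification.

    import random

    # Minimal illustration of whole-GQ (cluster) selection: entire facilities
    # enter the sample until the expected person count reaches the target.
    def select_whole_gqs(gq_frame, target_persons=60_000, seed=2010):
        rng = random.Random(seed)
        shuffled = sorted(gq_frame, key=lambda gq: rng.random())
        sample, persons = [], 0
        for gq in shuffled:                 # gq = (gq_id, expected_population)
            if persons >= target_persons:
                break
            sample.append(gq)
            persons += gq[1]                # whole facility, every resident
        return sample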


G. 2010 Interactive Voice Response (IVR) Customer Satisfaction Survey Evaluation. The goal of the 2010 IVR Customer Satisfaction Survey Evaluation is to collect voluntary feedback from respondents who use the Telephone Questionnaire Assistance (TQA) help lines, in order to assess the quality of service provided and to guide the development of future census questionnaire assistance help lines.


The 2010 IVR evaluation comprises two treatments: a five-question survey with an estimated sample size of 425,000, used for callers who end their call in the self-help IVR application; and a seven-question survey with an estimated sample size of 240,000, used for callers who are transferred from the self-help IVR application to a live customer service representative. The total estimated sample size is 665,000; with an estimated response rate of 7.6 percent, the estimated number of respondents is 5,016.


2. Needs and Uses

All of the experiments and evaluations are designed primarily for use by the Census Bureau to identify improvements for the next decennial census. Census Bureau managers and planners will use the results from these studies to focus early 2020 Census planning, testing, and research.


The Census Bureau plans to make the research reports based on data from the collections described in this document available to the general public. Information quality is an integral part of the pre-dissemination review of information disseminated by the Census Bureau (fully described in the Census Bureau’s Information Quality Guidelines). Information quality is also integral to the information collections conducted by the Census Bureau and is incorporated into the clearance process required by the Paperwork Reduction Act.


A. AQE

Since 1970, the Census Bureau has implemented an experimental program to evaluate different questionnaire design strategies. Several different questionnaire designs were tested as part of the Census 2000 AQE, including the 1990-style race and Hispanic origin questions to allow comparison to the multiple-response version included in Census 2000.


In 2003 and 2005, the Census Bureau conducted national mail-out tests of the use of examples and other instruction or wording changes to the separate race and Hispanic origin questions. The primary objectives of those tests were to improve the accuracy of race reporting and improve the reporting of detailed Hispanic origins within the Hispanic origin question.


The 2000 AQE, the 2003 National Census Test (NCT), and the 2005 NCT results were all instrumental in guiding the selection of the 2010 Census Hispanic origin and race questions. There are several aspects of these tests that we are exploring and trying to improve upon through the 2010 AQE. These include efforts to clarify the meaning of our response categories, simplify the categories used to collect race and ethnic data, and increase respondent identification with the OMB race and ethnic categories.

A broad goal of the Census Bureau is to reduce item nonresponse to the race and Hispanic origin questions and to increase reporting within the five race categories defined by the Office of Management and Budget (OMB). Responses that are missing or cannot be categorized according to the OMB standards must be imputed for all uses other than the release of products from the census. A secondary goal is to improve the reporting of detailed Asian groups, Pacific Islander groups, American Indian and Alaska Native tribes, and specific Hispanic origins in the race and Hispanic origin questions.


In general, we plan to test modified examples, a Hispanic origin question that accepts multiple responses, and various combined race and Hispanic origin question strategies.


The combined race and Hispanic origin item was tested for decennial application in the 1996 Race and Ethnic Targeted Test. The motivation for testing this strategy was improved respondent self-identification. That survey was designed, in part, to evaluate the effect of asking race and Hispanic origin together in one question with an added “multiracial or biracial” category, as well as asking a combined question with instructions to “mark one or more boxes.” The combined question was asked as a two-part item, which included a request for detailed ancestry or ethnic write-in responses. The combined question performed well in terms of lower item nonresponse and the percent reporting being of Hispanic origin, but it did not provide levels of detail on type of Hispanic origin comparable to the separate questions. Census 2000 data show that ‘Some other race’ was the third largest race group, even though the intent was for it to be a residual category. With the projected continued growth of the Hispanic population, it is possible that ‘Some other race’ will become the second largest race group in the future.


It is clear that, although the race classification system works well for many respondents, there are others, particularly those of Hispanic origin, who do not identify with our categories. Instead, they see Hispanic origin and race as part of the same overall concept. Because we have evidence that a growing, significant proportion of the population does not identify with any of the OMB race categories, we felt compelled to explore alternative approaches to collecting race and ethnic data.


After considering a number of designs, we are cognitively testing four versions of a combined race and Hispanic origin question. Our goal is that one of the combined question treatments will result in lower item nonresponse, maintain or improve reporting of detailed information by Hispanic and non-Hispanic respondents, and significantly reduce ‘Some other race’ reporting.


Although the race and Hispanic origin item treatments are the main focus of the 2010 AQE, a content comparison with Census 2000 and a within-household coverage treatment are also included. The panels are as follows:


1. Modified Race Examples Panel


One goal for the race and Hispanic origin portion of the AQE is to clarify the meaning of response categories through the use of examples. Over the years, the Census Advisory Committees, particularly the Race and Ethnic Advisory Committees, have requested that we test different examples in the race and Hispanic origin questions. These requests stem from concerns that segments of the population do not identify with certain OMB categories and that a modified set of examples is needed to clarify where they should report, according to OMB standards.


In considering how to modify the examples in the race question, we weighed three approaches: (1) list groups by population size; (2) list groups by growth since 2000; (3) list groups to represent the geographic areas included in the 1997 OMB race and ethnic standards. We decided to focus on the third approach.


In this panel, examples have been added to the ‘White’ and ‘Black’ response categories. The addition of these examples may reduce the number of ‘White’ and ‘Black’ ethnicities reported in the race write-in lines. ‘White’ examples have been added to reflect the geographic areas referenced in the OMB definition of ‘White.’ Afro-Caribbean and African examples have been added at the specific request of the African American Census Advisory Committee.


The ‘Bangladeshi’ example has been added as an ‘Other Asian’ example in an attempt to better balance the representation of the geographic areas mentioned in the 1997 OMB definition of ‘Asian.’ The majority of the examples represent Southeast Asia, so the addition of ‘Bangladeshi’ increases the Indian subcontinent representation.


The ‘Marshallese’ example was added as an ‘Other Pacific Islander’ example at the specific request of the Native Hawaiian and Other Pacific Islander Census Advisory Committee. The addition of ‘Marshallese’ brings balance to the list of examples because all three Pacific Islander cultural groups are now represented – Polynesian (Tongan), Melanesian (Fijian), and Micronesian (Marshallese).


2. Modified Hispanic Origin Examples Panel


The Hispanic Advisory Committee specifically requested revisiting the Hispanic origin examples.


In considering how to modify the examples in the Hispanic origin question, we weighed the same three approaches: (1) list groups by population size; (2) list groups by growth since 2000; (3) list groups to represent the geographic areas included in the 1997 OMB race and ethnic standards. We decided to focus on a combination of the first and third approaches. Consequently, the ‘Argentinean’ and ‘Nicaraguan’ examples used in the 2010 Census Hispanic origin question were dropped for the AQE panel; while these groups represent Central and South America, their populations in the U.S. are small compared to other groups. ‘Ecuadorian,’ ‘Guatemalan,’ and ‘Honduran’ were added as examples; these groups also represent Central and South America, but their populations in the U.S. are relatively large and growing.


3. Multiple Hispanic Origin Responses Panel


Testing a Hispanic origin question that accepts multiple responses in the 2010 AQE will be our first opportunity to follow up on a research recommendation made in the 1997 OMB standards.


The purpose of this experimental panel is to test an instruction that permits the reporting of multiple origins (e.g., ‘Mexican’ and ‘Cuban’) and mixed origins (e.g., ‘Not Hispanic’ and ‘Puerto Rican’). This issue is commonly raised because multiple-race reporting is permitted while multiple- or mixed-origin reporting is not. To keep the Hispanic origin question instruction comparable to the race question instruction, identical language was used. Additionally, for the write-in area, respondents are instructed to ‘print one or more origins.’


4. “Detailed” Combined Race and Hispanic Origin Question Panel


This panel combines the two questions, has write-in areas for each OMB group and ‘other,’ and retains all of the checkbox groups on the 2010 Census form. This version brings equity to all OMB race/ethnic groups by providing write-in areas for each major response category, a change for which many groups are currently lobbying the Census Bureau and Congress. The same examples discussed earlier for ‘White,’ ‘Black,’ and American Indian and Alaska Native (‘AIAN’) are included in this panel. A simple instruction directs respondents to mark one or more boxes and to write in a specific race or origin; ‘race’ and ‘origin’ represent both OMB concepts. The race category ‘Some other race’ has been modified to read ‘Some other race or origin,’ which is appropriate for a combined approach. Additionally, ‘Other Hispanic’ is used to be on par with the categories ‘Other Asian’ and ‘Other Pacific Islander.’


5. “Streamlined” Combined Race and Hispanic Origin Question Panel


This version of the combined question also brings equity to all OMB race/ethnic groups by providing write-in areas for each major response category. This approach removes all national origin checkboxes, which simplifies and streamlines the combined question. All groups that appear as national origin checkboxes on the 2010 Census form have been added as examples; the expectation is that this will offset any decrease in the reporting of these groups. A simple instruction directs respondents to mark one or more boxes and to write in a specific race or origin; ‘race’ and ‘origin’ represent both OMB concepts. The race category ‘Some other race’ has been modified to read ‘Some other race or origin,’ which is appropriate for a combined approach.



6. “Very Streamlined” Combined Race and Hispanic Origin Question Panel


This panel includes only checkboxes for the OMB race and ethnic groups and ‘Some other race or origin.’ This approach removes all national origin checkboxes, which simplifies and streamlines the question. Additionally, a two-part approach is used: Part A records the OMB group(s) with which the individual identifies, with an additional category for ‘Some other race or origin,’ and Part B records the individual’s detailed race or ethnic group. This panel thus also brings equity to all OMB race/ethnic groups by providing one shared area for all detailed race and ethnic responses.


We do not include examples next to the OMB race and ethnic group categories. In this panel, we do not want to associate race groups with specific countries. International migration is diversifying many countries in the world: there are people who would say that their origin is in Europe but who are not ‘White’ (e.g., Africans living in England), and people who would say that their origin is in the Pacific Islands but who are not Pacific Islander (e.g., Chinese living in the Marshall Islands). Listing examples that make assumptions about a person’s race or ethnicity based on national origin can therefore be presumptuous. Instead, a list of examples is added to Part B. Examples were selected to represent each OMB race or ethnic group and are listed alphabetically, an equitable way to present the groups. At the same time, we acknowledge that some aggregate terms (e.g., ‘Asian’ and ‘Hispanic or Latino’) have traditionally been clarified by presenting national origin groups.


A special note is added above the question instruction, which directs respondents to complete both Part A and Part B. We expect that this visual cue, along with the ‘A’ and ‘B’ markers, will connect the marking of a race/ethnic checkbox with the request for critical detailed information. This panel also instructs respondents to mark one or more boxes for Part A. Part B asks respondents to write in a specific race or origin or tribe.


The race category ‘Some other race’ has been modified to read ‘Some other race or origin,’ which is appropriate for a combined approach. While this approach may be reminiscent of the three-question approach tested in the 2005 National Census Test, there are important differences. This approach is not expected to confuse respondents by seemingly asking for the same information three times (e.g., a person reporting ‘Mexican’ for the Hispanic origin question, the race question, and the ancestry question). Further, this approach does not use the term ‘ancestry’ to capture detailed race and ethnic groups, a term respondents can find difficult to differentiate from the race and Hispanic origin questions.


7. “Alternative Control” Combined Race and Hispanic Origin Question Panel


Our combined question experimental panels represent a major departure from the 2010 Census form, which is our control. We felt that a panel was needed to serve as a type of ‘bridge’ between the control and the combined question approach; our expectation is that this ‘alternative control’ will be useful in reducing confounding effects when analyzing results. This version removes the ‘false separation’ between the Hispanic origin and race questions, combining the two concepts into one question while leaving most other aspects the same as the 2010 Census control. The race category ‘Some other race’ has been modified to read ‘Some other race or origin,’ which is appropriate for a combined approach. Additionally, ‘Other Hispanic’ is used to be on par with the race categories ‘Other Asian’ and ‘Other Pacific Islander.’


8. Race and Hispanic Origin Treatment Interaction Panels


We included four panels that combine the separate-question treatments to complete a full factorial design. Specifically, we are testing the following treatment interaction panels:


  • Modified race examples with modified Hispanic origin examples

  • Modified race examples with multiple Hispanic origin responses

  • Modified Hispanic origin examples with multiple Hispanic origin responses

  • All three separate item treatments together


9. Clarification of the Asian and Pacific Islander Categories with Alphabetization of the “Other Asian” Examples Panel


The presence of the Asian and the Pacific Islander national origin and ethnic checkbox response categories in the race question often confuses respondents. This panel was designed to clarify that the Asian and the Pacific Islander checkbox response categories are part of two broader OMB race groups and are treated as national origins and ethnic groups. Spanners were added over the “Other Asian” and “Other Pacific Islander” checkbox response categories. The term “race” was removed from the instructions to reinforce that the “Other Asian” and “Other Pacific Islander” groups are national origins and ethnic groups. Additionally, the “Other Asian” examples were alphabetized, providing an opportunity to test this approach to presenting examples.


10. Clarification of the Asian and Pacific Islander Categories Panel


The intent of this panel is to obtain a clean test of including spanners for the Asian and Pacific Islander checkbox response categories.


11. Limiting the Use of the Term “Race” Panel


The term “race” was removed from the instructions to print “Other Asian” and “Other Pacific Islander” groups, as well as from the race question stem. This design clearly shows that these groups are treated as national origins and not separate races.


The term “race” is retained for the “Some other race” checkbox per previously enacted legislation (P.L. 109-108, Title II, 2005).


12. Removal of the Term “Negro” Panel


This panel includes all the features discussed in Panel 9 and removes the term “Negro” from the “Black, African Am., or Negro” checkbox response category. The term “race” was also deleted from the race question stem; the question simply asks “Is this person …” and instructs respondents to mark one or more boxes.

13. 2000 Versus 2010 Content Panel


As in the 2000 AQE, we will compare questionnaire content from two censuses. Specifically, we will compare the 2010 Census race and Hispanic origin results to those obtained from a questionnaire that replicates the Census 2000 questionnaire wording, categories, order, and other essential design features. By comparing the 2000 and 2010 questionnaire results in the same time frame, we will eliminate the impact of real changes to the population and can more clearly assess the combined effects of the questionnaire design changes.


14. Overcount/Coverage Panel


The National Academy of Sciences suggested collecting “any residence elsewhere” information: allowing respondents to specify a street address for another location where they live or stay, then asking followup questions on the questionnaire that attempt to resolve any residence issues. The current 2010 questionnaire includes an overcount coverage question for Persons 1 through 6. A housing unit containing a person flagged with an eligible overcount category is contacted by telephone in Coverage Followup to determine where the other place is and how often the person(s) lived there.


One of the overcount questions tested in the 2005 National Census Test provided a corresponding city and state field if a person had any overcount category selected. This was a starting point for helping determine “any residence elsewhere.” However, to resolve residence issues, we may need to collect more address information and ask followup questions on where the person lived or stayed most of the time or on April 1.


This alternative approach will be tested in the 2010 AQE. The intent is to save money by getting coverage right during the initial enumeration for as many households as possible: the form collects an address for the other place and information on whether the person lives or stays at the other place most of the time. Because this additional information is collected up front, the Census Bureau can then use business rules to resolve the residence issue and avoid sending the case to the Coverage Followup operation. A sketch of how such a rule might operate follows.
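As an illustration only (the criteria below are hypothetical, not the Census Bureau's actual business rules), an up-front residence-resolution rule might look like this:

    # Hypothetical business rule illustrating how up-front address
    # information could resolve residence status without a CFU call.
    def resolve_residence(person):
        if not person.sometimes_lives_elsewhere:
            return "count here"                    # no coverage issue
        if person.other_address is None:
            return "send to Coverage Followup"     # not enough information
        if person.lives_at_other_address_most_of_time:
            return "count at other address"        # likely duplicate here
        return "count here"                        # other place is occasional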


B. Nonresponse Followup (NRFU) Contact Strategy Experiment

For several decades, the decennial census enumerator form has provided space for up to six contact attempts with households in the record of contact section of the census questionnaire. After Census 2000, the OMB was interested in testing whether the Census Bureau could feasibly use fewer than six contact attempts to collect data during NRFU.


Researchers used the Census 2000 Master Trace Sample Database to study this issue and found that the rate of successfully conducting interviews decreased with each consecutive contact attempt up to the fifth contact. At the sixth contact attempt, the success rate increased, most likely as a result of last-chance efforts to interview proxy respondents. However, without a proper experimental design, we could not definitively estimate the effect of fewer than six contact attempts on response and data quality.


It is important to conduct this long-awaited study in 2010 to understand the effects of reducing the number of NRFU contacts in a census environment. For example, the potential impact of advertising on respondent cooperation would not be reflected if this study were conducted as part of a mid-decade test.


C. Deadline Messaging (DM)/Compressed Schedule (CS) Experiment

In the 2003 National Census Test, a due date on census mailing materials was tested in one experimental panel; it showed no effect on mail response rates, though it did increase the speed of response.


In a 2006 experiment, a deadline date was combined with a compressed schedule in which mail pieces were sent one week later than in the control panel. This combination showed significantly higher response rates, on the order of two percentage points.


The Deadline Messaging/Compressed Schedule Experiment was, in part, inspired by the results of the 2006 short form experiment, and we hope to replicate those promising findings in the decennial environment. This experiment will also allow us to separate the main effects of the compressed schedule from those of the deadline messages, which the design of the 2006 test could not do.


D. Confidentiality/Privacy Notification Experiment

The goal of the Confidentiality/Privacy Notification Experiment is to provide quantitative information that will aid in the development of statements for potential use in the 2020 census questionnaire mailing packages. We plan to measure the effect on housing unit response rates of an administrative records use message and an alternative statistical purpose message on the initial questionnaire cover letter.


We are focused on developing statements that do not lower mail response, are less confusing to respondents, and yet inform respondents about the potential for data linkage. It is expected that the results of this experiment will inform future policy decisions regarding data linkage statements.


Research has consistently indicated that the privacy and confidentiality messages in the decennial mailing packages are not well understood by respondents in the context of census data. In Census 2000, we conducted a mailout experiment with an administrative records use message, and the results were mixed.


In 2006, the Census Bureau conducted research on whether the privacy and confidentiality messages in the 2010 Census mailing package could be improved, focusing specifically on language indicating that census data could be linked with data from other agencies. The 2006 Privacy and Confidentiality Working Group examined Census 2000 research results on this topic, developed proposed messages for testing, conducted 50 cognitive interviews, and recommended various revisions to the existing letter package. Many of those recommendations were adopted for inclusion in the 2010 questionnaire cover letter. However, the proposed language informing respondents about possible linkage of their answers to data from other agencies was rejected because it had not been field tested and could negatively affect mail response.


We plan to test this data linkage phrasing in the 2010 Confidentiality/Privacy Notification Experiment.

In addition, we plan to test alternative ‘census data use’ language, which we refer to as the ‘statistical purposes’ treatment. Note that the alternative wording is tested only on the cover letter and does not alter the questionnaire content.


E. 2010 AQE Reinterview Evaluation and 2010 Content Reinterview Evaluation

Response error in the decennial census has traditionally been measured through a content reinterview survey (CRS). The purpose of a CRS is to evaluate the consistency of responses to the census questionnaire. Assessing response error to questionnaire items aids both census planners and data users. Measuring response error for specific items helps census planners improve the quality of the item through testing.


The Census Bureau first conducted a CRS for the 1950 Census and has conducted one for each census since. The CRS attempts to measure both simple response variance and response bias. Response variance measures the variation in respondents’ answers to a question when the question is asked repeatedly. Response bias measures a systematic pattern in the difference between respondents’ answers and the correct response. A detailed set of probing questions was included for specific items in an attempt to measure response bias. In Census 2000, only response variance was analyzed. In 2010, both response variance (for 100-percent items) and response bias (for race and Hispanic origin) will be analyzed.
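For reference, the measures traditionally reported from a reinterview can be sketched as follows for a yes/no item. This is a minimal illustration of the standard reinterview statistics (gross difference rate, index of inconsistency, net difference rate), not the Bureau's production methodology; inputs are paired 0/1 responses for the same respondents.

    # Minimal sketch of standard reinterview statistics for a yes/no item.
    def reinterview_statistics(original, reinterview):
        n = len(original)
        p1 = sum(original) / n                    # proportion "yes" originally
        p2 = sum(reinterview) / n                 # proportion "yes" on reinterview
        # Gross difference rate: share of cases whose two responses disagree.
        gdr = sum(o != r for o, r in zip(original, reinterview)) / n
        # GDR/2 estimates simple response variance for a dichotomous item.
        simple_response_variance = gdr / 2
        # Index of inconsistency rescales the GDR by the item's variability.
        index_of_inconsistency = gdr / (p1 * (1 - p2) + p2 * (1 - p1))
        # Net difference rate gauges response bias if the probing reinterview
        # is treated as the preferred measure.
        net_difference_rate = p1 - p2
        return simple_response_variance, index_of_inconsistency, net_difference_rate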


F. 2010 Alternative Group Quarters (GQ) Questionnaire Evaluation

The GQ Individual Census Report (ICR) experimental form will collect an additional address for all respondents, regardless of whether they live or stay somewhere else. The address on the regular (non-experimental) ICR is asked only of respondents in GQs that, according to the Census Bureau’s residence rules, are allowed to claim a “usual home elsewhere.” Research has shown that there is some duplication between persons in GQs not allowed to claim a “usual home elsewhere” and persons in housing units. The purpose of this experiment is to test whether it is possible to collect enough information on the ICR to determine true residence status and avoid an additional followup. The additional address information will be matched back to the MAF/TIGER database (MTdb) and then compared to the housing unit enumerations. If successful, collecting this additional address information on the 2020 Census ICR will improve the identification and removal of duplicates in the census and help to reduce costly followup.


G. 2010 Interactive Voice Response (IVR) Customer Satisfaction Survey Evaluation

As during the 2000 Census, the 2010 Census inbound Telephone Questionnaire Assistance (TQA) operation will handle all incoming calls from the public as follows: provide the public with convenient access to general 2010 Census information; provide self-help in completing the census questionnaire; field requests for questionnaires and language guides; and, via call center agents, collect data from callers. The 2010 Census will implement a large-scale TQA operation to support calls in English, Spanish, Chinese, Korean, Vietnamese, and Russian.


The public will access the TQA system by calling one of the six toll-free numbers for the languages mentioned above or a Telephone Device for the Deaf (TDD) toll-free number printed on the census forms. Depending on the language assistance the caller needs, different options are available. English-speaking callers will be routed to the English Interactive Voice Response (IVR) system, and Spanish-speaking callers will be routed to the Spanish IVR. Other callers will be routed directly to a customer service representative (agent) who speaks the appropriate language.


When agents receive a call, they will use a web-based instrument, referred to as the TQA application, to (1) answer callers’ questions, (2) take requests for census forms, or (3) conduct interviews when appropriate. At the conclusion of the call, selected callers will be routed back to the IVR application and offered the opportunity to complete a customer satisfaction survey.


The purpose of the Customer Satisfaction Survey is to gather information from the public on how well the IVR application and the web-based TQA application addressed their questions and other census-related assistance needs.


3. Use of Information Technology

The 2010 Interactive Voice Response (IVR) Customer Satisfaction Survey Evaluation will utilize automated or electronic data collection.


The 2010 AQE Reinterview Evaluation and the 2010 Content Reinterview Evaluation will use an automated Web-based Computer-Assisted Telephone Interviewing (Web-CATI) system, through which telephone interviewers will collect data.


In addition, the forms used in each experiment will undergo the same data capture and data integration processes as the production 2010 Census questionnaires. The 2010 Census data capture and integration processes utilize several major systems developed by Lockheed Martin and the Census Bureau.


4. Efforts to Identify Duplication

To the best of our knowledge, these experiments and evaluations are all efforts unique to the Census Bureau; therefore, duplicate information is not being collected by any other agency.


5. Minimizing Burden On Small Business or Other Small Entities

The collection of information targets households and should have no effect on small businesses or small entities.


6. Consequences of Less Frequent Collection

Data collections in this request will support 2020 decennial census planning and research. If these data collections were not to occur, the Census Bureau would lack quantitative evidence to improve upon the current decennial census design. For experimental studies, the actual decennial census is required because it provides the best conditions for learning the true effects of new ideas within the context of national advertising, outreach partnerships, and other activities that occur only during the census. For each data collection in this request, the Census Bureau will collect the information once from the respondents.


7. Special Circumstances

There are no special circumstances.


8. Consultations Outside the Agency

The notice for public comment, entitled “Generic Clearance for the 2010 Census Program for Evaluations and Experiments,” was published in the Federal Register on September 24, 2008 (Vol. 73, No. 186, pp. 55032-55034).


A comment was received from Peter Wagner of the Prison Policy Initiative regarding the testing of an alternative Individual Census Report. Mr. Wagner suggested that the Census Bureau increase the sample size of the 2010 Alternative Group Quarters Questionnaire Evaluation to include a greater diversity of group quarters in the study. In addition, Mr. Wagner urged the Census Bureau to include a diverse set of correctional facility types in the evaluation. Since the Federal Register notice was published, the Census Bureau has revised the sample design for this evaluation, increasing the sample size from 2,500 to 60,000. However, the original evaluation objectives focus on group quarters such as college dormitories and juvenile institutions; therefore, correctional facilities remain out of scope for this study.


The Census Bureau received another comment during the 60-day period generally opposing data collections outlined in the Federal Register notice. This comment lacked specific suggestions for altering the Census Bureau’s data collection plans.


In addition to the pre-submission notice, other consultations are itemized below. The 2010 Census Program for Evaluations and Experiments was presented to the Committee on National Statistics Methods Panel of the National Academy of Sciences on December 7, 2007. Recommendations stemming from this presentation were incorporated into the AQE, 2010 NRFU, 2010 DM, and 2010 C/PN experiment proposals, as well as the proposals for the 2010 AQE Reinterview, 2010 Content Reinterview, and 2010 GQ Questionnaire evaluations. Also, Census Advisory Committees such as the Race and Ethnic Advisory Committee (REAC) and the Census Advisory Committee of Professional Associations provided feedback throughout the decade that shaped the 2010 CPEX. Specifically, comments from the REAC helped to shape the design of the 2010 AQE.


9. Paying Respondents

Respondents will not be paid or provided with gifts for any of these collections.


10. Assurance of Confidentiality

In accordance with Title 13, United States Code, respondents will be informed of the confidentiality of their answers. The experiments and evaluations comply with the requirements of the Privacy Act of 1974 and the Paperwork Reduction Act. All persons involved with the collection of the experimental data, including contractor interviewers, will be either Census Bureau employees or persons appointed to “special sworn status.” All census data, including data collected by contractors, will be kept in a secure environment. All contractors are required to document a data security plan.


11. Justification for Sensitive Questions

For all four experiments and four evaluations, the Census Bureau perceives no questions as being sensitive.


12. Estimate of Burden Hours

Because experimental forms are official census responses and substitute for the standard census form that would otherwise be sent to the households involved in the experiments, most respondent burden hour estimates are already accounted for in the 2010 Census. The burden hour estimate of 11,130 accounts only for burden additional to what is already covered by the 2010 Census (OMB control number 0607-0919).


Experiment/Evaluation | Total # of Respondents | Estimated Response Time | Estimated Burden Hours
AQE | 560,000 | 10 minutes | 0 (93,333 already accounted for)
NRFU | 200,000 | 10 minutes | 0 (33,333 already accounted for)
DM/CS | 200,000 | 10 minutes | 0 (33,333 already accounted for)
C/PN | 40,000 | 10 minutes | 0 (6,667 already accounted for)
AQE Reinterview | 60,000 | 7 minutes | 7,000
Content Reinterview | 30,000 | 7 minutes | 3,500
GQ | 60,000 | 5 min., 30 sec. | 500 (5,000 already accounted for)
IVR | 5,016 | Varies (see below) | 130
  IVR only | 2,006 | 75 seconds | 42
  IVR & TQA | 3,010 | 105 seconds | 88
Total | 1,155,016 | ***** | 11,130 (188,020 already accounted for)
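The 11,130-hour total can be reproduced from the table's respondent counts and response times. The sketch below uses the figures above; it infers from the GQ row that 30 seconds of the 5 minutes 30 seconds is new burden (500 of the 5,500 total hours).

    # Illustrative cross-check of the additional burden hours in the table.
    rows = [
        (60_000, 7 / 60),      # AQE Reinterview: 7 minutes
        (30_000, 7 / 60),      # Content Reinterview: 7 minutes
        (60_000, 0.5 / 60),    # GQ: 30 seconds of new burden per respondent
        (2_006, 75 / 3600),    # IVR only: 75 seconds
        (3_010, 105 / 3600),   # IVR & TQA: 105 seconds
    ]
    total_hours = sum(n * t for n, t in rows)
    print(round(total_hours))  # 11130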


13. Estimate of Cost Burden

There is no cost to the respondents selected for these experiments and evaluations other than the time necessary to complete the items.


14. Costs to Federal Government

The detailed requests for clearance of each of the activities will include an estimate of cost to the government.


15. Reason for Change in Burden

This collection is being submitted as new.


16. Project Schedule

The project schedules are still being developed and depend on receiving OMB approval of this request. They will be provided on a flow basis.


17. Request to Not Display Expiration Date

None.


18. Exceptions to Certification

None.

