
Federal Statistical System Public Opinion Survey

OMB: 0607-0969


A. Justification

  1. Necessity of the Information Collection

From December 2009 through April 2010, the United States Census Bureau contracted with the Gallup Organization to conduct a nightly poll of public attitudes toward the 2010 Census, public awareness of Census promotional efforts, and respondents' intent to mail back their Census forms. The nationally representative sample of 200 respondents per night was rolled up into 7-day moving estimates that provided nearly immediate feedback on public reaction to national events with the potential to influence perceptions of the 2010 Census, and on the reach of our communications campaign messaging. The Census Bureau used this feedback to make communications campaign decisions during the 2010 Census that contributed to achieving a mail-back participation rate of 74 percent, despite increased vacancy rates due to the economic downturn, increased public skepticism about the role of the Federal Government, and a general decline over the decade in survey response rates that affected both public and private sector surveys (Datta et al., 2011; Miller and Walejko, 2010).
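For illustration, a 7-day moving estimate of the kind described above can be computed as a rolling average of the nightly estimates. The sketch below is hypothetical: the dates, values, and variable names are illustrative assumptions, not data from the 2010 tracking poll, and the real nightly figures would themselves be weighted survey estimates rather than raw proportions.

```python
import pandas as pd

# Hypothetical nightly estimates (illustrative values only): the share of the
# ~200 nightly respondents reporting intent to mail back their census form.
nightly = pd.Series(
    [0.68, 0.70, 0.71, 0.69, 0.72, 0.73, 0.74, 0.75, 0.73, 0.74],
    index=pd.date_range("2010-03-01", periods=10, freq="D"),
    name="intent_to_mail_back",
)

# 7-day moving estimate: the average of the seven most recent nightly values,
# available once a full week of nightly data has accumulated.
moving_estimate = nightly.rolling(window=7).mean().dropna()
print(moving_estimate)
```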

In addition, an Organization for Economic Co-operation and Development (OECD) working group developed a survey for measuring trust in official statistics that was cognitively tested in six member countries (Brackfield, 2011). The goal of this development was to produce a model survey questionnaire that could be made available internationally and used comparably in different countries. Many national statistical offices recognize the critical role of public trust and robust communication in ensuring high quality data, particularly in an era of constrained resources. This international effort recognized that objective, quantifiable information about public attitudes, rather than anecdote or no evidence at all, is needed to inform decision making.

Moving forward, the Census Bureau is seeking ways to reverse the decline in response rates for its ongoing surveys to avoid both increasing operational costs and potential declines in data quality. We hypothesize that members of the public would voluntarily cooperate more extensively with federal surveys if they trusted the federal statistical system and the resulting federal statistics. Therefore, we propose to collect information about public awareness of and attitudes towards federal statistics and the federal statistical system. The information collected will assist the Census Bureau in understanding attitudes, beliefs, and concerns the public may have regarding its trust (confidence) in federal statistics and in the collection of statistical information by the federal government from the public, as well as attitudes toward and knowledge of the statistical uses of administrative records. The data will also provide insights into how current events influence public perception of federal statistics.

The legal authority under which this information is being collected is Title 13 U.S.C. Chapter 5.

  2. Needs and Uses

These public opinion data will enable the Census Bureau to better understand public perceptions, which will provide guidance for communicating with the public and for planning future data collections in a way that reflects a sound understanding of public perceptions and concerns. Because all federal statistical agencies face these issues of declining response rates and increasing costs in a time of constrained budgets, the Census Bureau will share the results of these surveys with other federal statistical agencies to maximize the utility of this information collection and, ultimately, the quality and efficiency of federal statistics. Specifically, the member agencies of the Interagency Council on Statistical Policy (ICSP) have expressed an interest in this effort. A subgroup of ICSP member agencies has been particularly helpful in developing this proposal: the National Agricultural Statistics Service, the National Center for Health Statistics, the Economic Research Service, the Statistics of Income Division (IRS), and the Statistical and Science Policy Office, Office of Management and Budget. We refer to this working group as the Federal Statistical System (FSS) Team. The ICSP agencies will use results from this data collection to inform public communication, future public opinion research, and future planning of the use of administrative records for statistical purposes.

From February 2012 through September 2013, the Census Bureau will add 25 questions nightly onto an ongoing data collection by the Gallup Organization. Approximately 19 of the 25 questions will be core questions and approximately 6 will be available for rotation. Core questions will focus on awareness of and attitudes towards federal statistics and federal statistical agencies; 1-2 core questions will address attitudes towards the statistical use of administrative records. Core questions will be used to explore relationships among the concepts, develop a time series, and measure any “shocks” to the system. Shocks could include any current events that may affect awareness of or attitudes towards the topics being measured, such as data breaches (public or private sector), elections, or any unanticipated news event that may alter public perception. By having a continual data collection, we will be able to look for changes in public perception after any of these types of events occur, or look for underlying causes when we see a change in the time series. Attachment A shows the initial set of questions that will be fielded.
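As a rough illustration of the before-and-after comparisons described above, the sketch below contrasts the mean of a core item in the week before and the week after a hypothetical event date. The series, the event date, and the simple two-sample test are illustrative assumptions only; they are not the planned analysis method, which would need to account for survey weights and day-to-day sampling variability.

```python
import pandas as pd
from scipy import stats

# Hypothetical daily means of a core trust item on a 1-5 scale (illustrative only).
daily = pd.Series(
    [3.4, 3.5, 3.5, 3.4, 3.6, 3.5, 3.4,   # week before the event
     3.1, 3.0, 3.2, 3.1, 3.0, 3.1, 3.2],  # week after the event
    index=pd.date_range("2012-05-01", periods=14, freq="D"),
)
event_date = pd.Timestamp("2012-05-08")  # hypothetical "shock" (e.g., a data breach)

before = daily[daily.index < event_date]
after = daily[daily.index >= event_date]

# Simple two-sample comparison of the pre- and post-event means.
t_stat, p_value = stats.ttest_ind(before, after)
print(f"pre-event mean {before.mean():.2f}, "
      f"post-event mean {after.mean():.2f}, p = {p_value:.3f}")
```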

Up to 20 times during the data collection, roughly monthly, up to 6 questions may be rotated into the survey. Rotating questions will be used for several distinct purposes:

  • First, rotations will be planned to explore public opinion of different aspects of statistical uses of administrative records. Topics will include public perception of the quality of such records, public perception of the privacy and confidentiality implications of such use, and differentiation between types of administrative records and types of statistical uses. These rotations will include using different frames around questions, varying the types of records and methods of use mentioned in the question, willingness-to-pay/stated-preference questions, and so on. These rotations would use up to 6 question slots in the nightly interview and would be fielded for a pre-specified period of time. These questions will be submitted to OMB by way of an update to this submission (specified in more detail below). Some illustrative examples are provided in Attachment B.

  • Second, rotating questions will be used to explore awareness of other statistics, in a format identical to that of the first set of items on the survey (see Attachment A). For example, we may ask additional questions to explore awareness of specific types of statistics, such as health statistics or agricultural statistics. These questions will also be submitted to OMB by way of an update to this submission (specified in more detail below).

  • Third, rotating questions will be used around known, planned events to gauge awareness of those events and opinions about the relationship (if any) between those events and the federal statistical system. Examples of planned events are the presidential election, the release of particular statistics, and any pre-planned public awareness activities. These types of questions would add up to 3 questions to the nightly interview and would be fielded for a limited amount of time surrounding the particular event. These questions will also be submitted to OMB by way of an update to this submission (specified in more detail below). In general, they would ask about awareness of the event and about any perceived relationship between the event and the federal statistical system.

  • Finally, we may wish to add rotating questions very quickly after an unanticipated event to gauge awareness of that event and opinions about the relationship (if any) between the event and the federal statistical system. These could be events like a data breach (public or private sector), a political scandal, or any other unanticipated news event that may alter public perceptions. Gallup can add questions with as little as 48 hours' notice. These types of questions would add up to 3 questions to the nightly interview and would be fielded for a limited amount of time surrounding the particular event. These questions would be submitted to OMB for quick-turnaround approval and would be very limited in scope, addressing only the particular unanticipated event.

OMB and Census have agreed that these rotating questions constitute non-substantive changes to this submission. Approximately monthly, Census will submit to OMB requests to make these changes through a single tracking document. This tracking document will contain a complete history of all questions asked and the months in which each question was asked.

Although the Gallup Daily Tracking Survey is portrayed by Gallup as nationally representative, it does not meet Census Bureau quality standards for dissemination and is not intended to be used for precise national estimates or distributed as a Census Bureau data product. The Census Bureau and the Federal Statistical System will use the results from this survey to monitor awareness and attitudes, as an indicator of the impact of potential negative events, and as an indicator of potential changes in awareness activities. Data from the research will be included in research reports with clear statements about the limitations and about the fact that the data were produced for strategic and tactical decision-making and exploratory research, not for official estimates. Research results may be prepared for presentation at professional meetings or for publication in professional journals to promote discussion among the larger survey and statistical community and to encourage further research and refinement. Again, all presentations and publications will provide clear descriptions of the methodology and its limitations.

Theoretical Framework

The Federal Statistical System (FSS) Team focused on definitions of trust in statistical products and trust in statistical institutions that are derived from work by Ivan Fellegi (1996, 2004), a model of which is shown in Figure 1 and further defined in Figure 2 below. In addition to considering questions developed and tested by the OECD working group, the FSS working group considered questions used by the Office for National Statistics (ONS) and the National Centre for Social Research in the United Kingdom and by the Eurobarometer. Based on the U.S. cognitive laboratory results from the National Center for Health Statistics (NCHS) work on the OECD survey, we started from the premise that we needed to measure awareness of statistics and statistical institutions first, and then assess level of knowledge and data use, before proceeding to questions addressing trust. We consulted additional previous research that examined the U.S. public’s knowledge of statistics (Curtin, 2007) and sought to create a questionnaire that would be comprehensible to the general population. More detail on questionnaire development is available in Childs et al. (forthcoming).




Figure 1. Fellegi’s Model of Trust in Official Statistics

Reproduced from “Report of the electronic working group on measuring trust in Official Statistics”, STCCSTAT/BUR (2010)2, January 20, 2010, OECD 2010

  1. Trust in Statistical Products

    1. Accuracy- Accuracy refers to how well the statistical value presented in the data set matches the phenomenon it is trying to measure (validity). If the data are not properly measured and do not map onto the reality they purport to describe, then the data are of no use.

    2. Credibility- Users rely on the reputation of the provider of the information to validate the data it produces. How credible the statistical products are determines the intrinsic value and usability of the information.

    3. Objectivity- The agency must be seen as having no interest in producing particular figures and as having no link to any partisan entity.

    4. Relevance- It is important that the statistical product be seen to be meeting the needs of the nation, and not just those of the government. Relevance could be examined at national, local or personal levels.

  2. Trust in Statistical Institution

    1. Confidentiality Protected- The statistical agency requires a clear and visible mandate providing it with the authority to collect data for statistical purposes (and only statistical purposes) and the obligation to protect the confidentiality of individual responses.

    2. Integrity- The reputation of the statistical agency is based upon a number of factors which cumulatively rely upon an ethic of consistently honest research. The nation should come to expect consistent, accurate, honest, non-partisan data from an organization with a clean record, or at least one that has addressed previous events that may have tarnished its reputation.

    3. Openness/Transparency- The level to which the statistical agency makes known the behind-the-scenes details of the data collection process. This involves making the following information publicly available: financial and political liaisons, statistical methodology, and the data produced. Statistical agencies should publish information about the methods and procedures used in producing statistics.

    4. Impartiality- The extent to which the statistics are perceived to be objective and independent, unbiased and non-partisan (i.e., not subject to political interference). This is often achieved through a number of the other crucial features of trust, but most notably by the statistical agency avoiding any sort of partisan connections or values.

*These are the specific constructs that we sought to measure. All definitions are adapted from Fellegi (1996; 2004).

Figure 2. Definitions of Constructs from Fellegi’s Model of Trust in Official Statistics

The second overarching goal of this public opinion effort was to gauge public opinion toward the use of administrative records for statistical purposes. For this part of the questionnaire development, we reviewed previous questions on the topic, mostly from the perspective of the Census Bureau (Miller and Walejko, 2010; Singer, Bates, and Van Hoewyk, 2011; Conrey, ZuWallack, and Locke, 2011). We also considered work conducted internationally. A 2009 study conducted by the ONS revealed that the UK general public varies in its knowledge about government agencies and their current levels of data sharing. Over fifty percent of respondents were aware that no single central government database currently exists, but that there are separate databases maintained by individual departments, though this awareness varied by education, age, and region. Overall, responses were supportive (approximately two-thirds in favor) of data sharing and of the creation of a single central population database of UK residents. By including similar questions about knowledge and evaluations of data sharing, the FSS may be able to take measures to increase awareness and/or alter current data sharing practices, which would enable the government to save costs and improve data quality.

The analytic goals of the FSS Public Opinion Survey (POS) are to:

  • Examine the relationship between awareness of and trust in the federal statistical system and federal statistics in the United States.

  • Further explore knowledge about and attitudes towards the statistical uses of administrative records.  

  • Examine the relationship between trust in the statistical system and attitudes towards the statistical uses of administrative records.  

  • Observe how current events impact public perception towards the federal statistical system.  

  • Develop a time series of trust in the statistical system.

  • Make comparisons between attitudes observed in the U.S. and those measured in Europe, specifically using a directly comparable item that has been fielded in the Eurobarometer to measure trust in official statistics.

  • Inform a variety of internal management decisions, such as how to provide better information sharing with the public to address any misperceptions or concerns raised by the survey results (see Miller and Walejko, 2010 for an example of how awareness activities could be targeted).

The FSS POS will consist of survey questions added onto the Gallup Daily Tracking Survey for approximately 20 months of data collection.



Gallup Daily Tracking Survey Documentation

Gallup Daily Tracking is a daily survey asking 1,000 U.S. adults about various political, economic, and well-being topics. Gallup also routinely incorporates additional questions into the Gallup Daily Tracking Survey on a short-term basis. These questions cover topical issues, including election voting intentions and views of events in the news. On any given evening, approximately 250 Gallup interviewers conduct computer-assisted telephone interviews with randomly sampled respondents 18 years of age and older, including cell phone users and Spanish-speaking respondents, from all fifty states and the District of Columbia. The survey questions include many of the standard demographics, including race, income, education, employment status, and occupation. Location data, such as ZIP Codes, allow researchers to map the responses to particular parts of the country and accumulate data for local-level comparison and interpretation.

Of the 1,000 nightly Gallup Daily Tracking Survey respondents, 200 will be sampled for the 25 Census Bureau questions. These questions will focus on issues of awareness and trust in Federal statistics and in the statistical use of administrative records as described above.

Information quality is an integral part of the pre-dissemination review of the information disseminated by the Census Bureau (fully described in the Census Bureau's Information Quality Guidelines). Information quality is also integral to the information collections conducted by the Census Bureau and is incorporated into the clearance process required by the Paperwork Reduction Act.

  3. Use of Information Technology

All interviews will be conducted using computer assisted telephone interviewing (CATI). Telephone interviews will be conducted with respondents on cell phones and landline telephones.

  4. Efforts to Identify Duplication

This research adds to information collected by the Census Bureau during the 2010 Census, but does not duplicate any other research currently being done by the Census Bureau or other Federal agencies. In the private sector, some organizations periodically measure trust in government, but none measure trust specifically in the statistical sector. This research fills a void identified by the Interagency Council on Statistical Policy.



The Organization for Economic Co-operation and Development (OECD) electronic working group on measuring trust in official statistics developed a survey for measuring trust in official statistics that was cognitively tested in six member countries, including the United States (Brackfield, 2011). The goal of this development was to produce a model survey questionnaire that could be made available internationally and used comparably in different countries. Unfortunately, a 2010 National Center for Health Statistics (NCHS) cognitive study revealed that these questions are inadequately understood by U.S. respondents (Willson et al., 2010) and therefore would not sufficiently measure trust in the FSS in the United States. As such, the FSS Working Group sought to build upon the theoretical constructs and previous research on this subject (Fellegi, 2004; OECD Working Group, 2011; Willson et al., 2010) in designing and administering a version of this poll that might adequately measure U.S. public opinion of the FSS. Those questions that were shown to work well in the NCHS cognitive test were included in the survey for cognitive testing.

  5. Minimizing Burden

The data collection does not impact small entities.

  6. Consequences of Less Frequent Collection

A key analytic goal is to measure how current events influence trust in the federal statistical system. While some current events, like a presidential election, may be known in advance, other current events are unforeseen, like a data breach or policy change in another part of the government. In order to measure public opinion immediately before and after any given event – predicted or not – we must maintain a consistent, daily data collection for a period of time.

  7. Special Circumstances

There are no special circumstances. Data collection is conducted in accordance with the Office of Management and Budget (OMB) guidelines.

  8. Consultations Outside the Agency

Within the Federal Government, consultants include the Statistical and Science Policy Office, Office of Management and Budget; the National Agricultural Statistics Service; the National Center for Health Statistics; the Economic Research Service; and the Statistics of Income Division, IRS. This group represented the interests of the ICSP. In addition, the Census Bureau also received feedback from the Federal Committee on Statistical Methodology’s Subcommittee on Statistical Uses of Administrative Records.



Outside the Federal Government, consultants include:

Dr. Eleanor Singer

University of Michigan



We conducted cognitive testing of an early version of this survey with 42 respondents and revised the questions to address problems found in that testing. This testing is described more thoroughly in Part B.



On November 1, 2011, we published a notice in the Federal Register (Vol. 76, No. 211, pages 67405-67406) seeking public comment on the necessity, content, and scope of the data collection. We received one comment, dated November 2, 2011, which stated that the government does too much surveillance and that a public opinion poll is not needed because the government is not trustworthy. Heretofore, the Census Bureau and other statistical agencies have been largely limited to anecdotal information, such as a small number of letters from the public like this one, to learn about public attitudes. Part of what we intend to measure in this survey is the general public's opinion of the federal statistical system and whether that opinion is influenced by current events, such as those mentioned in the comment we received. We do not currently have such data on public opinion toward the statistical sector of government, and that is what we wish to measure.

  9. Paying Respondents

Respondents will not be offered any gift or payment.

  10. Assurance of Confidentiality

This survey is being conducted under the authority of Title 13, but the data will not be protected under Title 13, Section 9. Because the Census Bureau is adding questions to an ongoing Gallup Survey, Gallup will introduce a statement indicating that participation in the survey is voluntary and that Gallup will not make the respondent’s information available in any way that would personally identify him or her. The Census Bureau will not receive any information to directly identify survey respondents. We address this during interview consent with the following statement: “Your responses will not be shared with anyone in a way that could personally identify you.”

  11. Justification for Sensitive Questions

The survey does not include questions of a sensitive nature.

  12. Estimate of Hour Burden

The annual respondent burden for conducting 70,000 interviews is estimated at 11,667 hours. The average length of each interview is estimated to be 10 minutes. Interviews will be collected from 200 respondents per night on 350 days per year. Data collection will span 20 months.
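As a worked restatement of the burden arithmetic above (no new figures are introduced):

```latex
200 \,\text{interviews/night} \times 350 \,\text{nights/year} = 70{,}000 \,\text{interviews/year},
\qquad
70{,}000 \times \frac{10 \,\text{minutes}}{60 \,\text{minutes/hour}} \approx 11{,}667 \,\text{hours/year}.
```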

  13. Estimate of Cost Burden

There are no costs to respondents other than that of their time to respond.

  14. Cost to Federal Government

The annual cost of this data collection is estimated to be $2.2 million and is funded by the Census Bureau. Data collection will span 20 months.

  15. Reason for Change in Burden

Not applicable; this is a new data collection.

  16. Project Schedule and Publication Plans

The timeline below is based on receiving OMB approval on 2/21/2012.

Task                    Start                Finish
Data collection         February 22, 2012    September 30, 2013
Rotations – monthly     March 22, 2012       September 30, 2013
Data analysis           February 20, 2012    December 30, 2013




Although the Gallup Daily Tracking Survey is portrayed as nationally representative, it does not meet Census Bureau quality standards for dissemination and is not intended to be used for precise national estimates or distributed as a Census Bureau data product. The Census Bureau and the Federal Statistical System will use the results from this survey to monitor awareness and attitudes, as an indicator of the impact of potential negative events, and as an indicator of potential changes in awareness activities. Data from the research will be included in research reports with clear statements about the limitations and about the fact that the data were produced for strategic and tactical decision-making and exploratory research, not for official estimates. Research results may be prepared for presentation at professional meetings or for publication in professional journals to promote discussion among the larger survey and statistical community and to encourage further research and refinement. Again, all presentations and publications will provide clear descriptions of the methodology and its limitations.

  17. Request to Not Display Expiration Date

We are requesting an exemption from displaying the expiration date because these data will be collected in the middle of a series of questions already being collected by Gallup, most of which are not collected for the government and are not covered under this clearance. See item 18 for further explanation of this justification.

  18. Exceptions to the Certification

In addition to the exemption covered above, we are requesting an exemption to item (g), sections (i), (ii), (iii), and (vi) of the certification. Because this data collection occurs as part of an ongoing data collection by Gallup, most of whose questions are not related to this effort or to this clearance, and because the OECD recommends that these types of data be collected by an independent third party (OECD Working Group, 2011), we are requesting an exception to informing all participants of (i) why the information is being collected; (ii) the use of the information; (iii) the burden estimate; and (vi) the need to display a currently valid OMB control number. This information will be available to Gallup interviewers in FAQs, should a respondent request any of it. However, to maintain the flow of the survey questions from those collected by Gallup to those added by the Census Bureau, and to gather data free of bias, we request a waiver from presenting these pieces of information to all participants.


