OMB Control Number: 0607-0977

SUPPORTING STATEMENT

U.S. Department of Commerce

U.S. Census Bureau

2014 Survey of Income and Program Participation Panel


A. Justification


1. Necessity of Information Collection


The U.S. Census Bureau requests authorization from the Office of Management and Budget (OMB) to conduct the 2014 Survey of Income and Program Participation (SIPP) Panel.


The Census Bureau plans to conduct the 2014 SIPP Panel in four waves beginning in February 2014. The SIPP computer-assisted personal interviewing (CAPI) instrument will use an Event History Calendar (EHC) interviewing method and a 12-month, calendar-year reference period in place of the current SIPP questionnaire approach, which uses a sliding four-month reference period. The Census Bureau is re-engineering the SIPP to accomplish several goals, including redesign of the collection instrument and processing system, development of the EHC within the instrument, use of administrative records data where feasible, and increased stakeholder interaction. See Attachment A for the interview questions.


The main objective of the SIPP has been, and continues to be, to provide accurate and comprehensive information about the income and program participation of individuals and households in the United States. The survey's mission is to provide a nationally representative sample for evaluating: 1) annual and sub-annual income dynamics; 2) movements into and out of government transfer programs; 3) the family and social context of individuals and households; and 4) interactions among these items. A major use of the SIPP has been to evaluate the use of and eligibility for government programs and to analyze the impacts of modifications to those programs. The re-engineering of the SIPP pursues these objectives in the context of several goals, including cost reduction, improved accuracy, increased relevance and timeliness, reduced burden on respondents, and increased accessibility. The 2014 SIPP Panel will collect detailed information on cash and non-cash income (including participation in government transfer programs) once per year.


A key component of re-engineering the SIPP is a shift from the every-four-month data collection schedule of the historical SIPP (most recently in the 2008 Panel) to an annual data collection schedule for the re-engineered survey. Providing the same, or better, quality data at a reduced burden to respondents is a high priority for the Census Bureau and for the SIPP program. To accomplish the shift to annual interviewing without a loss in data quality, the Census Bureau will use an EHC-based instrument to gather SIPP data. The EHC was previously used in the 2010, 2011, 2012, and 2013 SIPP-EHC field tests. The content of the 2014 SIPP Panel will match that of the 2013 SIPP-EHC very closely. The 2014 SIPP Panel design does not contain freestanding topical modules as in the prior production SIPP instruments; however, a portion of the traditional SIPP topical module content is integrated into the main body of the 2014 SIPP interview. The EHC allows the recording of dates of events and spells of coverage, and it will provide measures of monthly transitions in program receipt and coverage, labor force status, health insurance coverage, and other characteristics.


The 2014 SIPP Panel is an entirely new "Wave 1" sample with new survey respondents who were not interviewed in the previous 2010-2013 SIPP-EHC field tests. The 2014 SIPP Panel was scheduled to begin at the earliest possible date (February 2014) that would allow the use of a 2010 Census-based sample. Wave 1 of the 2014 SIPP Panel will interview respondents using the previous calendar year, 2013, as the reference period, and interviewing will proceed annually thereafter. The 2014 SIPP Panel will use a revised interviewing structure that follows persons aged 15 years and older who move from the prior-wave household. Consequently, future waves will incorporate dependent data, which is information collected in the prior-wave interview and brought forward to the current interview.


The Census Bureau plans to use Computer Assisted Recorded Interview (CARI) technology during the 2014 SIPP Panel. CARI is a data collection method that captures audio along with response data during computer-assisted personal and telephone interviews (CAPI & CATI). With the respondent’s consent, a portion of each interview is recorded unobtrusively and both the sound file and screen images are returned with the response data to a central location for coding. By reviewing the recorded portions of the interview, quality assurance analysts can evaluate the likelihood that the exchange between the field representative and respondent is authentic and follows critical survey protocol as defined by the sponsor and based on best practices. Additionally, the recordings will be reviewed to develop standards for coaching interviewers and develop options to use them as supplements to both in-person observation and reinterview. The 2014 SIPP Panel instrument will utilize the CARI Interactive Data Access System (CARI System), an innovative, integrated, multifaceted monitoring system that features a configurable web-based interface for behavior coding, quality assurance, and coaching. This system assists in coding interviews for measuring question and interviewer performance and the interaction between interviewers and respondents.



Wave 1 of the 2014 SIPP Panel will be conducted from February to May of 2014. Wave 2 is scheduled to be conducted from January to April of 2015, Wave 3 from January to April of 2016, and Wave 4 from January to April of 2017. Approximately 52,000 households will be sampled for interview in the 2014 Panel. From these sampled households, we expect approximately 35,000 interviewed households. We estimate that each household contains 2.1 people aged 15 and above, yielding approximately 73,500 person-level interviews per wave in this panel. Interviews take approximately 60 minutes per adult on average; consequently, the total annual burden for 2014 SIPP-EHC interviews will be 73,500 hours per year in FY 2014, 2015, 2016, and 2017.
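
For reference, a minimal sketch of the burden arithmetic described above, in Python. The figures come from this section; the variable names are illustrative.

# Burden-hour arithmetic for one wave of the 2014 SIPP Panel,
# using the figures cited in this section.
sampled_households = 52_000
interviewed_households = 35_000          # expected completed households
persons_15_plus_per_household = 2.1      # average eligible persons per household
hours_per_interview = 1.0                # approximately 60 minutes per adult

person_interviews = interviewed_households * persons_15_plus_per_household
annual_burden_hours = person_interviews * hours_per_interview

print(f"Person-level interviews per wave: {person_interviews:,.0f}")       # 73,500
print(f"Annual burden hours (one wave per year): {annual_burden_hours:,.0f}")  # 73,500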


The SIPP is authorized by Title 13, United States Code, Section 182.


2. Needs and Uses


Information quality, as described by the Census Bureau’s Information Quality Guidelines, is an integral part of the pre-dissemination review of information released by the Census Bureau. Information quality is essential to data collections conducted by the Census Bureau and is incorporated into the clearance process required by the Paperwork Reduction Act.


In 2006, the U.S. Census Bureau began a complete redesign of the Survey of Income and Program Participation. The redesign grew out of a program review prompted by a budgetary crisis. The SIPP program needed to modernize and more efficiently provide the critical information necessary to understand patterns and relationships in income and program participation. The re-engineering set out objectives to reduce respondent burden and costs, to improve data quality and timeliness, and to modernize the instrument and processing systems. The Census Bureau has developed the new and innovative SIPP data collection instrument through a series of field tests. Each test refined and improved the data collection experience for respondents and interviewers and focused on improvements in data quality and better topic integration. The development and testing have produced an instrument that exceeded expectations and collects very high quality data with a greatly reduced annual respondent burden. While evaluations will be ongoing, the Census Bureau is very pleased with the results and is confident in continuing to provide high-quality SIPP data utilizing the new annual SIPP data collection instrument and procedures.


The new survey instrument, called the SIPP-EHC, is being used as the 2014 production survey instrument for the SIPP program. The SIPP-EHC instrument is a complete redevelopment, in Blaise and C#, of the previous SIPP survey instrument, which was implemented in a DOS-based CASES instrument. The new survey is built around the change in reference period from three interviews per year (each covering the prior four months) to a single annual interview with a reference period extending back to the beginning of the prior calendar year. The SIPP-EHC incorporates an event history calendar design to help ensure that the 2014 panel will continue to collect intra-year dynamics of income, program participation, and other activities with at least the same data quality as earlier panels. The EHC is intended to help respondents recall information in a more natural "autobiographical" manner by using life events as triggers to recall other economic events. For example, a residence change often occurs contemporaneously with a change in employment. The entire process of compiling the calendar focuses, by its nature, on the consistency and sequential order of events, and it attempts to correct for otherwise missing data.


To develop the instrument and provide information for use in evaluation, five field tests of the SIPP-EHC instrument have taken place (in 2008, 2010, 2011, 2012, and 2013). A new test sample was initiated in 2011, following the successful 2010 feasibility test. The 2012 SIPP-EHC field test was a Wave 2 interview of the 2011 SIPP-EHC field test sample; the reference years for Waves 1 and 2 of this sample were calendar years 2010 and 2011, respectively. An initial evaluation of the results from the 2011 and 2012 field tests is attached (Attachment P). The 2013 SIPP-EHC field test is a Wave 3 interview of this same sample.


While review and analysis of the test data continue, the evaluation of the 2011 and 2012 field tests provides comparisons with data from the traditional three-interviews-per-year SIPP instrument and with administrative records. With very few exceptions, agreement between survey and administrative data is higher for the SIPP-EHC or not statistically different between the surveys. While estimates from the two survey instruments (SIPP-EHC and SIPP) do differ statistically in many cases, these differences are typically small and correspond to rates of agreement with administrative data that are better for the SIPP-EHC than for the SIPP. There is little evidence that key estimates from SIPP-EHC data are less accurate for periods earlier in the one-year reference period, as might be expected if respondents had difficulty reporting events further in the past. The review has suggested that reported transitions in program participation or other status may fall disproportionately at the beginning of reference periods. It also appears that this bias in the measurement of transitions can be reduced by using information from prior waves in interviewing and editing. Changes to the SIPP instrument for 2013 and 2014 were implemented to address these findings.


The SIPP-EHC, like the SIPP, collects information about a variety of topics, including employment, income, participation in various government programs, health insurance coverage, and demographics. The evaluation report includes survey estimates for nineteen SIPP-EHC topics: assets, child support, disability, education, employment and earnings, health insurance, household composition, housing subsidies, Medicaid, Medicare, migration, nativity and citizenship, Old-Age, Survivors, and Disability Insurance (OASDI), poverty, the Supplemental Nutrition Assistance Program (SNAP), Supplemental Security Income (SSI), Temporary Assistance for Needy Families (TANF), and unemployment insurance. Although evaluation continues, and opportunities for design revision may be revealed, the results of the evaluations to date strongly support the conclusion that an annual administration of the SIPP can fulfill the program's mission with reduced burden and cost and with equivalent or better data quality.


The 2014 SIPP Panel will continue the EHC methodology implemented in the previous field test instruments. The 2014 SIPP Panel Wave 1 instrument will similarly be evaluated in several domains, including field implementation issues and data comparability vis-à-vis the 2008 SIPP Panel and administrative records. Distributional characteristics reported in the EHC, such as the percentage of persons receiving TANF, Food Stamps, or Medicare, working, enrolled in school, or covered by health insurance, will be compared to the same distributions from the 2008 SIPP Panel. The primary focus will be to examine the quality of data that the new instrument yields for low-income programs relative to the current SIPP and other administrative sources. The 2014 SIPP Panel sample is nationally representative, with an oversample of low-income areas to increase the ability to measure participation in government programs. In general, we will evaluate data quality in two ways:


First, we will compare monthly estimates from the 2014 SIPP Panel to estimates from the 2008 SIPP Panel for characteristics such as participation in Food Stamps, TANF, SSI, the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), and Medicaid. We plan to conduct a rigorous statistical analysis using the model established for the 2010-2013 SIPP-EHC evaluations, in which data from the 2008 Panel and the 2010-2013 SIPP-EHC for the previous calendar years were mapped to a common analysis standard. The tests of significance for differences in monthly participation levels, the identification of patterns of significance, and the analysis of transition likelihoods will again be applied to the mapped comparison data for calendar year 2013. Additional content will be included in the mapped data to expand the comparisons beyond the EHC section of the instrument, which was the focus of the comparisons made with the SIPP-EHC field tests. As with the 2010-2013 SIPP-EHC field tests, we will also compare paradata related to interview performance (interview length and non-response) by region, interviewer and household characteristics, and training performance as measured by the certification test.
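
The statistical model used in the 2010-2013 evaluations is not specified here; the following Python sketch shows one plausible form of such a comparison, a simple two-proportion z-test of monthly participation rates between the two panels. All figures are illustrative, and a production analysis would account for the complex survey design (weights, clustering, replicate variance estimation), which this sketch ignores.

# Hypothetical sketch: significance test for a difference in monthly program
# participation rates between the 2014 SIPP Panel and the 2008 SIPP Panel.
# A simple two-proportion z-test is shown for illustration only.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> tuple[float, float]:
    """Return (z, two-sided p-value) for H0: p1 == p2."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Illustrative numbers only: SNAP participation in one reference month.
z, p = two_proportion_z(p1=0.145, n1=73_500, p2=0.138, n2=65_000)
print(f"z = {z:.2f}, p = {p:.4f}")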


Second, for a small subset of characteristics, and for a subset of sample areas, we will have access to administrative record data, which should allow for a more objective assessment of the validity of the survey estimates for respondents matched to administrative data. The acquisition of administrative data from national sources, and especially from states, is difficult and time consuming. We continue to work with Texas, Maryland, Illinois, and Wisconsin to acquire state-level data (primarily focused on Food Stamps, now SNAP, and TANF), and additional state discussions are in progress. At the national level, we are working to acquire data from the Internal Revenue Service; the detailed and summary earnings records; and records for OASDI, SSI, Medicare, and Medicaid (the latter two from the Centers for Medicare & Medicaid Services (CMS)). To the extent that these data can be obtained in a timely way for calendar year 2013, we will include validation evaluations of the responses given in both the 2008 Panel and the 2014 SIPP Panel Wave 1. These administrative data can tell us the rates of both false-positive and false-negative reporting, as well as give some indication of the accuracy of the timing of reports. The ability to make effective comparisons with administrative data depends on the rate at which administrative records can be matched to SIPP and re-engineered SIPP data, the timing of the receipt of the data, and the accuracy and quality of the administrative records. This project will continue to demonstrate the importance of developing systems that can integrate administrative records with survey data.
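
As a rough illustration of the false-positive and false-negative rate computation described above, the following Python sketch compares survey reports against matched administrative indicators. The record layout and values are hypothetical, not the actual SIPP or administrative data structure.

# Hypothetical sketch: false-positive and false-negative report rates for a
# program (e.g., SNAP receipt in a given month) among respondents matched to
# administrative records.
def report_error_rates(matched_pairs):
    """matched_pairs: iterable of (survey_says_yes, admin_says_yes) booleans."""
    fp = fn = admin_no = admin_yes = 0
    for survey, admin in matched_pairs:
        if admin:
            admin_yes += 1
            if not survey:
                fn += 1      # admin shows receipt, survey did not report it
        else:
            admin_no += 1
            if survey:
                fp += 1      # survey reports receipt, admin shows none
    return {
        "false_negative_rate": fn / admin_yes if admin_yes else float("nan"),
        "false_positive_rate": fp / admin_no if admin_no else float("nan"),
    }

# Illustrative data: (survey report, administrative record) per matched person-month.
pairs = [(True, True), (False, True), (True, False), (False, False), (True, True)]
print(report_error_rates(pairs))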


Results from the 2010-2013 Field Tests and the 2008 SIPP Panel were used to inform final decisions regarding the design, content, and implementation of the 2014 SIPP Panel. This OMB clearance request is for the full 2014 SIPP Panel (Waves 1, 2, 3, and 4).


3. Use of Information Technology


The survey is administered using CAPI and CARI methodologies. Census Bureau field representatives (FRs) collect the data from respondents using laptop computers and transmit the data to Census Bureau headquarters via high-speed modems. Automation significantly enhances our efforts to collect high-quality data: skip instructions are programmed into the instrument, and information obtained in earlier interview segments is fed back to the respondent. Response burden can be minimized by incorporating design features that make it easier to collect and record respondent information. Therefore, screening questions and lead-in questions are built into the automated instrument to skip respondents out of sections of the questionnaire that are not relevant or applicable.
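
As an illustration of this skip-logic design, the following Python sketch routes a respondent past an entire section when a lead-in question shows it does not apply. The questions and routing rules are invented for illustration; they are not actual SIPP instrument content.

# Hypothetical sketch of lead-in/skip logic: if the screener is "no",
# the detail items in the section are never asked.
def ask(question: str) -> bool:
    return input(f"{question} (y/n): ").strip().lower().startswith("y")

def employment_section() -> dict:
    answers = {"had_job": ask("Did you have a job at any time last year?")}
    if not answers["had_job"]:
        return answers  # skip all detail items: the section does not apply
    answers["employer_count"] = input("How many employers did you have? ")
    answers["ever_unpaid_leave"] = ask("Were you ever on unpaid leave?")
    return answers

if __name__ == "__main__":
    print(employment_section())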


Preliminary analysis from an Internet field test conducted by the SIPP Methods Panel in August and September 2000 indicated that using the Internet as a mode of collection for a complex demographic survey such as SIPP is not feasible. The SIPP automated instrument contains many complicated skip patterns and roster related components. The costs of converting a complex questionnaire such as SIPP to an online survey far outweigh the benefits even in a multimode environment. The final report is available upon request.


4. Efforts to Identify Duplication


The demographic data collected in the SIPP must be collected in conjunction with the labor force and program participation data in order for the information to be most useful; therefore, although we collect demographic data in almost all of our surveys, we need to continue collecting it in the SIPP. No other current data source provides as comprehensive a set of statistics for the analyses described in question 2 above.




5. Minimizing Burden


The Census Bureau uses appropriate technology to keep respondent burden to a minimum. Examples of technology used to minimize respondent burden include: use of appropriate screening and lead-in questions that skip respondents out of sections of the CAPI instrument that are not relevant or applicable to them; use of flash cards to aid respondents with multiple response categories; and an arrangement of questions and sections of the CAPI instrument that facilitates the flow of administration from one topic area to another. The 2014 SIPP Panel will yield substantially lower respondent burden than the previous SIPP instrument because it requires one interview per year rather than three.


6. Less Frequent Collection


The 2014 SIPP Panel will interview respondents annually, using the previous calendar year as the reference period. One possible consequence of the one-year reference period in the 2014 SIPP Panel, rather than the four-month reference period in the traditional SIPP, is increased memory decay among respondents. However, the EHC interviewing methodology should help to alleviate this decay by linking respondents' memories to significant life events. See the explanation in section 2 above.


7. Special Circumstances


There are no special circumstances associated with this clearance request.


8. Consultations Outside the Agency


The OMB established an Interagency Advisory Committee to provide guidance on the content and procedures for the SIPP. That committee, along with the subcommittee on the topical modules, has previously worked actively with the Census Bureau to ensure that the SIPP content and procedures collect the appropriate data and that duplication between surveys is minimized to the extent possible.


Further, the Census Bureau has engaged an American Statistical Association Survey Research Methods (ASA-SRM) advisory group to provide ongoing input into the SIPP development process, and has continued its involvement with the Committee on National Statistics (CNSTAT) at the National Academies. We have continued to hold regular meetings with both groups. On June 12, 2013, the Census Bureau hosted an ASA-SRM teleconference, at which we provided an update on survey progress and preliminary results from evaluations of the 2012 field test and solicited input from group members. On July 10, 2013, the Census Bureau hosted a public meeting for CNSTAT at the Keck Center of the National Academies. Census Bureau staff presented information on the 2014 panel and results from evaluations of the 2011 and 2012 field tests. CNSTAT members provided feedback and asked a number of questions about both the data content and the structure of the 2014 instrument.


When the initial content reviews were conducted leading up to the 2010 SIPP-EHC field test, the Census Bureau held five subject-area meetings (health, general income and government programs, assets and wealth, labor force, and demographics and other items) as well as subsequent "virtual" meetings with SIPP stakeholders. These consultations were not held to produce a consensus or group recommendation; all opinions were expressed on an individual basis. Data users indicated a significant need for most of the existing SIPP core content. Select areas of content were added based on stakeholder input to replace lost topical module content. The 2014 SIPP will include revised content from the 2010-2013 SIPP-EHC instruments and will also include revisions developed subsequent to the 2013 SIPP-EHC test.


We published a notice in the Federal Register on February 22, 2013 (Vol. 78, No. 36, page 12293), inviting public comment on our plans to submit this request. We received one comment, which generally opposed the collection.


9. Paying Respondents


We have designed a multi-wave incentive experiment to evaluate the efficacy of incentives as a means of increasing respondent cooperation with the SIPP. We will divide the panel into four groups and randomly assign each household to one of the groups. Since our sample in 2014 will consist of approximately 52,000 households, each group will contain approximately 13,000 households.


Group 1 is the control group; households in this group will not be eligible for an incentive in any wave of the 2014 panel. Group 2 is not eligible to receive an incentive in Wave 1. However, in subsequent waves, households in this group will be eligible to receive incentives of $0, $20, or $40, based on a propensity model. That is, we plan to design a model to predict which households would be most likely to complete a Wave 2 interview if provided with an incentive to do so. This targeting of incentives will help both to increase the response rate and to lower costs (since we would not be providing incentives to households that would likely complete the survey without a monetary incentive). Additionally, we will evaluate alternate specifications of the Group 2 model that focus more on responsive design and on maintaining or improving the representativeness of the SIPP sample through targeted incentives.


The third group of respondents will receive a $20 incentive in Wave 1 and all subsequent waves, conditional on completing the interview. Finally, Group 4 will receive a $40 incentive in all waves, again conditional on completing the interview. For both of these groups, we will inform households of their eligibility for an incentive in both the advance letter and the introduction to the survey.


Inputs for the propensity model for Group 2 will come from the Wave 1 responses, Wave 1 contact data, and results from comparing Groups 1 and 2 with Groups 3 and 4. We will evaluate what characteristics of households seem to make them more or less likely to complete interviews, how they contribute to the eventual sample representation, and how likely they are to respond to incentives. Using this knowledge, we can design a model for Waves 2+ that optimizes our distribution of incentives to Group 2 in a way that maximizes the return on each dollar spent.
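
This document does not specify the model form, features, or assignment thresholds; the following Python sketch shows one plausible implementation, a logistic regression of Wave 2 completion on Wave 1 paradata, with incentive amounts assigned by predicted propensity. Every feature name, cutoff, and value below is an assumption for illustration only.

# Hypothetical sketch of the Group 2 incentive-targeting model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
# Illustrative Wave 1 features: contact attempts, household size, prior refusal.
X = np.column_stack([
    rng.poisson(3, n),        # contact attempts in Wave 1
    rng.integers(1, 6, n),    # household size
    rng.integers(0, 2, n),    # any refusal recorded in Wave 1 (0/1)
])
y = rng.integers(0, 2, n)     # placeholder: observed Wave 2 completion

model = LogisticRegression().fit(X, y)
propensity = model.predict_proba(X)[:, 1]   # predicted completion probability

# Illustrative assignment rule: no incentive for likely completers, larger
# incentives as predicted propensity falls. Cutoffs are invented for the sketch.
incentive = np.where(propensity >= 0.7, 0, np.where(propensity >= 0.4, 20, 40))
values, counts = np.unique(incentive, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))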


For all waves, we are planning to distribute the incentives centrally, from our National Processing Center (NPC). This centralized distribution eliminates any discretion on the part of the field representatives, ensuring that only eligible households are given (or promised) incentives. This control is necessary to ensure the success of the propensity-modeling experiment. We plan to mail the debit cards containing the incentives on a weekly basis. That is, as we receive completed interviews from eligible households, we will send a list of these households to the NPC, and the NPC will mail the debit cards. One day later, the NPC will send a second mailing containing the PIN information. Splitting the mailings this way allows us to avoid the additional expense of sending the debit cards via a signature-required service such as FedEx.


10. Assurance of Confidentiality


We are conducting this survey under the authority of Title 13, United States Code, Section 182. Section 9 of that law requires us to keep all information strictly confidential. Respondents will be informed of the confidentiality of their responses and of the voluntary nature of the survey by a letter from the Director of the Census Bureau that will be sent to all participants in the survey (Attachments B, C, D, and E).





11. Justification for Sensitive Questions


Sources of income and assets are among the kinds of data collected and may be considered of a sensitive nature. The Census Bureau's position is that the collection of these types of data is necessary for the analysis of important policy and program issues, and it has consequently structured the questions to lessen their sensitivity.


12. Estimate of Respondent Burden


Based on our experience with the 1996, 2001, 2004, and 2008 SIPP Panels, the 2010-2013 SIPP-EHC field tests, and in-house testing, the burden estimates for the 2014 SIPP Panel (per wave) are as follows:


2014 SIPP PANEL

FY 2014-2017 BURDEN HOUR SUMMARY



              Respondents    Waves    Responses    Hours per Response    Total Hours
Interview     73,500         1        73,500       1.0                   73,500
Totals        73,500         1        73,500       1.0                   73,500








Approximately 35,000 households are expected to be interviewed for the 2014 SIPP Panel. We estimate that each household contains 2.1 people aged 15 and above, yielding approximately 73,500 person-level interviews per wave in this panel. Interviews take approximately 60 minutes per adult on average; consequently, the total annual burden for 2014 SIPP-EHC interviews will be 73,500 hours per year in FY 2014, 2015, 2016, and 2017.


13. Estimate of Cost Burden


There are no direct costs to respondents participating in the survey other than the time involved in answering the survey questions.


14. Cost to Federal Government


The production costs of all parts of the 2014 SIPP Panel are approximately $51,000,000 in each year from FY 2014 through FY 2017. That amount is included in the estimate of the total cost to the federal government of the Census Bureau's current programs, as supplied to the OMB.

15. Reason for Change in Burden


The 2014 SIPP Panel is being submitted as a new collection; therefore, the entire estimated burden represents a change in burden.


16. Project Schedule


The 2014 SIPP Panel advance letters will be mailed prior to interviewing. Wave 1 of the 2014 SIPP Panel will be conducted from February to May of 2014. Wave 2 is scheduled to be conducted from January to April of 2015, Wave 3 from January to April of 2016, and Wave 4 from January to April of 2017. We will release public-use data products on a schedule to be determined.


We plan to continue the thorough evaluation of the event history calendar methodology and the new SIPP data structure, which we began with the field tests, using data from the 2014 production panel. Those evaluations focused mainly on data quality and on whether the re-engineered survey instrument, with its annual interview, was delivering results that compared favorably to those from the existing SIPP instrument. While we will continue that line of inquiry, we also have several additional avenues of evaluation to undertake with the 2014 data.


We will evaluate the survey using both collected data and paradata. One facet of our data evaluations that we will continue from the field tests is comparing the SIPP-EHC results to those from the SIPP. The most recent review compared results for calendar year 2011 from the 2012 field test to a matched sample from the 2008 panel (Attachment P); the final report compiling these results should be available soon. For the field tests, we performed the comparison to the 2008 SIPP using a matched sample, which we refer to as the "Matched SIPP" or MSIPP dataset. This was necessary because the field test samples were not nationally representative, so output from the tests was not directly comparable with the full, nationally representative SIPP sample. Therefore, we took a subset of the SIPP sample, consisting of households residing in the Primary Sampling Units (PSUs) from which we drew the SIPP-EHC test samples. This workaround will not be necessary in 2014, as we will be comparing two nationally representative samples covering the majority of calendar year 2013.


Because of the way the two surveys are structured, we are able to compare results from almost exactly the same period, for at least part of the SIPP sample. The SIPP-EHC has a year-long reference period, so the 2014 interview will ask about 2013. The SIPP has a four-month reference period, so the last rotation group of the final wave (Wave 16) of the 2008 SIPP will be interviewed in December 2013 and will have a reference period covering August-November of 2013. Still, we will be able to compare monthly results for the first three quarters of 2013 using the full sample from both surveys, and to use this overlap to evaluate concerns about differential recall.
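
A small Python sketch of the reference-period logic behind this overlap; the December 2013 example below reproduces the August-November reference period cited above. The function and variable names are illustrative.

# Sketch of the two reference-period designs described above. For the 2008
# Panel, a four-month reference period covers the four calendar months before
# the interview month; the 2014 Panel asks about the full prior calendar year.
from datetime import date

def four_month_reference(interview_year: int, interview_month: int) -> list[date]:
    """Months covered by a traditional SIPP interview (2008 Panel design)."""
    months = []
    y, m = interview_year, interview_month
    for _ in range(4):
        m -= 1
        if m == 0:
            y, m = y - 1, 12
        months.append(date(y, m, 1))
    return sorted(months)

# The last 2008 Panel rotation group is interviewed in December 2013:
print([d.strftime("%b %Y") for d in four_month_reference(2013, 12)])
# -> ['Aug 2013', 'Sep 2013', 'Oct 2013', 'Nov 2013']

# A 2014 Panel Wave 1 interview (February-May 2014) covers Jan-Dec 2013.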


As each subject-matter area evaluates its content, we expect that, in addition to comparing results from the 2014 SIPP to the 2008 SIPP, analysts will also compare results, where possible, to those from other surveys, such as the American Community Survey (ACS) or the Current Population Survey (CPS). While we would expect some variation in estimates across surveys due to sample size, survey universe, and similar factors, these comparisons can tell us whether the SIPP's results are broadly in line with those from other surveys.


The 2014 SIPP instrument will also generate a large amount of paradata that the survey team will continue to use to evaluate the survey. First, we want to run a number of comparisons related to interview timing: for example, how much longer are adults' interviews than children's, and how do proxy interviews differ from self-reports? Those comparisons involve respondents' behavior, but we also want to use paradata to evaluate how much influence field representatives' behavior has on the survey results. We will have results from the FR certification test, so we can determine how much more successful high-scoring FRs are than low-scoring ones. Differential success by certification test score will help to identify areas to target for improvement in training and supplemental interviewer observation.


The paradata will also provide us with metrics that allow us to evaluate respondent burden and to produce better cost estimates. For example, we will know the average number of questions asked during each interview, allowing us to pinpoint content areas that we could streamline or change during the research panel. We will also know how many visits to a household it takes to obtain a completed interview, so we can use this statistic to estimate our 2015 costs more precisely.
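
A minimal Python sketch of the two paradata metrics mentioned above. The record layout and field names are illustrative, not the actual SIPP paradata schema.

# Hypothetical sketch: average questions per completed interview and field
# visits per completed interview, from per-case paradata records.
cases = [
    {"questions_asked": 412, "visits": 2, "completed": True},
    {"questions_asked": 388, "visits": 1, "completed": True},
    {"questions_asked": 57,  "visits": 4, "completed": False},  # noninterview
]

completed = [c for c in cases if c["completed"]]
avg_questions = sum(c["questions_asked"] for c in completed) / len(completed)
visits_per_complete = sum(c["visits"] for c in cases) / len(completed)

print(f"Average questions per completed interview: {avg_questions:.0f}")
print(f"Field visits per completed interview: {visits_per_complete:.1f}")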


The paradata can also provide information we can use to improve the overall survey or correct errors in the existing instrument. We will review item-level don't-know and refusal rates, as well as particularly time-consuming items, to further target instrument improvements. Field representatives have the ability to enter notes, both at the item level and at the case level. We used these notes to identify and correct a number of bugs in the 2013 instrument, and we hope to duplicate this success in 2014. Additionally, we are developing an FR debriefing instrument, which allows the FRs to offer detailed comments on each section of the instrument and to express any concerns or problems they had when fielding the survey.


In addition to these evaluation tools, the Census Bureau plans to use CARI technology (see section 1) for a sample of items in each interview. We will record items from each household's interview, so long as the household consents to the quality assurance recordings. The Census Bureau plans to evaluate both the quality of the SIPP data and the possibility that CARI could supplement or replace standard reinterview.


Additionally, by recording consenting households, supervisors will have the ability to select recordings for the supplemental observation and coaching of interviewers. As the recordings from 2014 are used during and after data collection for Wave 1, the Census Bureau will use them to develop standards for the use of recorded interviews in coaching interviewers, and to develop options for using the recordings as supplements to both in-person observation and reinterview. As described in section 1, the 2014 SIPP Panel instrument will utilize the CARI Interactive Data Access System (CARI System) to support behavior coding, quality assurance, and coaching.


Finally, for a small subset of characteristics, and for a subset of sample areas, we will have access to administrative record data, which we integrated into our evaluation of the field test data and will continue to use for an objective assessment of the validity of survey estimates matched to administrative data (see the explanation in section 2).


17. Request Not to Display Expiration Date


The expiration date is displayed in the advance letter that will be sent to eligible households before each wave’s interview.


18. Exceptions to the Certification


There are no exceptions to the certification.
