SUPPORTING STATEMENT
U.S. Department of Commerce
U.S. Census Bureau
2014 Survey of Income and Program Participation Panel
OMB Control No. 0607-0977
A. Justification
1. Necessity of Information Collection
The U.S. Census Bureau requests authorization from the Office of Management and Budget (OMB) to extend the 2014 Survey of Income and Program Participation (SIPP) Panel. This package is a request to extend the current OMB approval, which expires on December 31, 2016, to December 31, 2019.
The Census Bureau has completed two of the four waves of the 2014 Panel of the Survey of Income and Program Participation (SIPP), which began in February 2014. Wave 1 of the SIPP 2014 Panel was conducted from February to June of 2014. Wave 2 was conducted from February to June of 2015. Wave 3 is currently being conducted from April to June of 2016. Wave 4 is scheduled to be conducted from February to June of 2017. The SIPP is a household-based survey designed as a continuous series of national panels. The SIPP is a source of information on a wide variety of topics and allows information on separate topics to be integrated into a single, unified database, permitting examination of the interaction among tax, transfer, and other government and private policies. Government domestic policy formulators depend heavily upon SIPP information concerning the distribution of income received either directly as money or indirectly as in-kind benefits, and the effect of tax and transfer programs on that distribution. They also need improved and expanded data on the income and general economic and financial situation of the U.S. population, which the SIPP has provided on a continuing basis since 1983. The SIPP measures levels of economic well-being and permits measurement of changes in those levels over time.
The 2014 SIPP interview includes a portion conducted using an Event History Calendar (EHC), which facilitates the collection of dates of events and spells of coverage. The EHC aids respondents' ability to recall events accurately over the one-year reference period and improves data quality and inter-topic consistency for dates reported by respondents. The EHC is intended to help respondents recall information in a more natural “autobiographical” manner by using life events as triggers to recall other economic events. The EHC was previously used in the 2010-2013 SIPP-EHC field tests as well as in 2014 Panel Waves 1 and 2. The 2014 SIPP Panel design does not contain freestanding topical modules; however, a portion of the traditional SIPP topical module content is integrated into the 2014 SIPP Panel interview. Examples of this content include questions on medical expenses, child care, retirement and pension plan coverage, marital history, and adult and child well-being, among others.
The 2014 SIPP Panel Wave 1 was an entirely new sample of survey respondents who had not previously been interviewed. The 2014 SIPP Panel uses a revised interviewing structure that follows adults (age 15 years and older) who move from the prior-wave household. Consequently, Waves 2, 3, and 4 incorporate dependent data: information collected in the prior wave's interview and brought forward to the current interview.
The Census Bureau has used and plans to continue using Computer Audio Recorded Interview (CARI) technology for some of the respondents during the 2014 SIPP Panel. CARI is a data collection method that captures audio along with response data during computer-assisted personal and telephone interviews (CAPI & CATI). With the respondent’s consent, a portion of each interview is recorded unobtrusively and both the sound file and screen images are returned with the response data to a central location for coding. By reviewing the recorded portions of the interview, quality assurance analysts can evaluate the likelihood that the exchange between the field representative and respondent is authentic and follows critical survey protocol as defined by the sponsor and based on best practices. During the 2014 SIPP Panel we are developing protocols to use the CARI Interactive Data Access System (CARI System), an innovative, integrated, multifaceted monitoring system that features a configurable web-based interface for behavior coding, quality assurance, and coaching. This system assists in coding interviews for measuring question and interviewer performance and the interaction between interviewers and respondents.
Approximately 30,500 households are expected to be interviewed for the 2014 SIPP Panel Waves 3 and 4. We estimate that each household contains an average of 2.1 people aged 15 and older, yielding approximately 64,050 person-level interviews per wave in this panel. Based on Wave 1 results, interviews take approximately 40 minutes per adult on average. Consequently, the total annual burden for 2014 SIPP-EHC interviews will be 42,700 hours per year.
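For reference, the burden estimate follows directly from these figures. The sketch below (Python, written for this document as an illustration; it is not Census Bureau production code) reproduces the arithmetic:

```python
# Reproduce the annual burden arithmetic from the figures above.
households = 30_500          # expected interviewed households per wave
adults_per_household = 2.1   # average number of people aged 15+ per household
minutes_per_interview = 40   # average interview length per adult (Wave 1)

person_interviews = households * adults_per_household          # 64,050 per wave
annual_burden_hours = person_interviews * minutes_per_interview / 60

print(f"{person_interviews:,.0f} person-level interviews per wave")
print(f"{annual_burden_hours:,.0f} annual burden hours")       # 42,700
```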
The SIPP is authorized by Title 13, United States Code, Sections 141 and 182.
2. Needs and Uses
Information quality, as described by the Census Bureau’s Information Quality Guidelines, is an integral part of the pre-dissemination review of information released by the Census Bureau. Information quality is essential to data collections conducted by the Census Bureau and is incorporated into the clearance process required by the Paperwork Reduction Act.
In 2006, the U.S. Census Bureau began a complete redesign of the Survey of Income and Program Participation. The redesign grew out of a program review prompted by a budgetary crisis. The SIPP program needed to modernize and more efficiently provide the critical information necessary to understand patterns and relationships in income and program participation. The re-engineering set out objectives to reduce respondent burden and costs, to improve data quality and timeliness, and to modernize the instrument and processing. The Census Bureau developed the new SIPP data collection instrument through a series of field tests. Each test refined and improved the data collection experience for respondents and interviewers and focused on improvements in data quality and better topic integration. The development and testing have produced an instrument that has exceeded expectations, collecting high-quality data with a greatly reduced annual respondent burden. While evaluations will be ongoing, the Census Bureau is pleased with the results and is confident in continuing to provide high-quality SIPP data using the new annual SIPP data collection instrument and procedures.
The new survey instrument, called SIPP-EHC, is the instrument being used as the 2014 production survey instrument for the SIPP program. The SIPP-EHC instrument is a complete redevelopment in Blaise and C# of the previous SIPP survey instrument that was implemented in a DOS-based CASES instrument. The new survey is built around the change in survey reference period from three interviews per year (interviewing about the prior four months) to a single annual interview with a reference period extending to the beginning of the prior calendar year. The SIPP-EHC incorporates an event-history-calendar design to help ensure that the 2014 panel will continue to collect intra-year dynamics of income, program participation, and other activities with at least the same data quality as earlier panels. The EHC is intended to help respondents recall information in a more natural “autobiographical” manner by using life events as triggers to recall other economic events. For example, a residence change may often occur contemporaneously with a change in employment. The entire process of compiling the calendar focuses, by its nature, on consistency and sequential order of events, and attempts to correct for otherwise missing data.
To develop the instrument and provide information for use in evaluation, five field tests of the SIPP-EHC instrument took place in 2008, 2010, 2011, 2012, and 2013. A new test sample was initiated in 2011, following the successful 2010 feasibility test. The 2012 SIPP-EHC field test was a Wave 2 interview of the 2011 SIPP-EHC field test sample, and the 2013 SIPP-EHC field test was a Wave 3 interview of the same sample. The reference years for Waves 1 and 2 of this sample were calendar years 2010 and 2011, respectively. A final evaluation of the results from the 2011 and 2012 field tests is attached (Attachment K).
While review and analysis of the test data continue, the evaluation of the 2011 and 2012 field tests provides comparisons with data from the traditional three-interviews-per-year SIPP instrument and with administrative records. With very few exceptions, agreement between survey and administrative data is either higher for SIPP-EHC or not statistically different between the two surveys. While estimates from the two survey instruments (SIPP-EHC and SIPP) do differ statistically in many cases, these differences are typically small and correspond to rates of agreement with administrative data that are better for SIPP-EHC than for SIPP. There is little evidence that key estimates from SIPP-EHC data are less accurate for the earlier months of the one-year reference period, as might be expected if respondents had difficulty reporting events further in the past. The review did suggest that reported transitions in program participation or other statuses may fall disproportionately at the beginning of reference periods, and that this bias in the measurement of transitions can be reduced by using information from prior waves in interviewing and editing. Changes to the SIPP instrument for 2013 and 2014 were implemented to address these findings.
The SIPP-EHC, like SIPP, collects information about a variety of topics, including employment, income, participation in various government programs, health insurance coverage, and demographics. The evaluation report includes survey estimates for the following SIPP-EHC topics: assets, child support, disability, education, employment and earnings, health insurance, household composition, housing subsidies, Medicaid, Medicare, migration, nativity and citizenship, Old-Age, Survivors, and Disability Insurance (OASDI), poverty, the Supplemental Nutrition Assistance Program (SNAP), Supplemental Security Income (SSI), Temporary Assistance for Needy Families (TANF), and unemployment insurance. Although evaluation continues, and opportunities for design revision may be revealed, the results to date strongly support the ability of an annual administration of SIPP to collect information that will improve the program's ability to fulfill its mission with reduced burden and cost and with equivalent or better data quality.
The 2014 SIPP Panel continues the EHC methodology implemented in the previous field test instruments. The 2014 SIPP Panel Waves 1 and 2 instruments will similarly be evaluated in several domains, including field implementation issues and data comparability vis-à-vis the 2008 SIPP Panel and administrative records. Distributional characteristics reported in the EHC, such as the percent of persons receiving Temporary Assistance for Needy Families (TANF), Food Stamps, or Medicare, and the percent working, enrolled in school, or covered by health insurance, will be compared to the same distributions from the 2008 SIPP Panel. The primary focus will be to examine the quality of data that the new instrument yields for low-income programs relative to the current SIPP and other administrative sources. The 2014 SIPP Panel sample is nationally representative, with an oversample of low-income areas to increase the ability to measure participation in government programs. In general, we will evaluate data quality in two ways:
First, we will compare monthly estimates from the 2014 SIPP Panel to estimates from the 2008 SIPP Panel for characteristics such as participation in Food Stamps, TANF, Supplemental Security Income (SSI), the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), and Medicaid. We plan to conduct a rigorous statistical analysis using the model established for the 2010-2013 SIPP-EHC evaluations, in which data from the 2008 Panel and the 2010-2013 SIPP-EHC for the previous calendar years were mapped to a common analysis standard. The significance tests for differences in monthly participation levels, the identification of patterns of significance, and the analysis of transition likelihoods will again be applied to the mapped comparison data for calendar year 2013. Additional content will be included in the mapped data to expand the comparisons beyond the EHC-focused comparisons made for the SIPP-EHC field tests. As with the 2010-2013 SIPP-EHC field tests, we will also compare paradata related to interview performance (interview length and non-response) by region, interviewer and household characteristics, and training performance as measured by the certification test.
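To illustrate the kind of significance testing described above, the sketch below applies a simple two-proportion z-test to monthly participation rates from two panels. All figures are hypothetical, and a production analysis would use design-adjusted variances (for example, replicate weights) rather than this simple-random-sampling approximation:

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function (standard library only)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_z_test(p1, n1, p2, n2):
    """Two-sided test of whether two participation rates differ.

    p1, p2: estimated monthly participation proportions from each panel
    n1, n2: sample sizes behind each estimate
    Assumes simple random sampling; survey estimates would require
    design-adjusted variances instead.
    """
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - normal_cdf(abs(z)))
    return z, p_value

# Hypothetical monthly SNAP participation rates: 2014 panel vs. 2008 panel
z, p = two_proportion_z_test(p1=0.132, n1=64_050, p2=0.127, n2=65_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```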
Second, for a small subset of characteristics, and for a subset of sample areas, we will have access to administrative record data, which should allow a more objective assessment of the validity of the survey estimates for respondents matched to administrative data. The acquisition of administrative data from national sources, and especially from states, is difficult and time-consuming. We continue to work with Texas, Maryland, Illinois, and Wisconsin to acquire state-level data (primarily Food Stamps, now the Supplemental Nutrition Assistance Program (SNAP), and TANF), and discussions with additional states are in progress. At the national level, we are working to acquire data from the Internal Revenue Service; the detailed and summary earnings records; Old-Age, Survivors, and Disability Insurance (OASDI); SSI; and Medicare and Medicaid (from the Centers for Medicare and Medicaid Services (CMS)). To the extent that data can be obtained in a timely way for calendar year 2013, we will include validation evaluations of the responses given both in the 2008 Panel and in the 2014 SIPP Panel Wave 1 data. These administrative data can tell us the rates of both false positive and false negative reporting, as well as give some indication of the accuracy of the timing of reports. The ability to make effective comparisons with administrative data depends on the match rate of administrative data to SIPP and re-engineered SIPP data, the timing of the receipt of the data, and the accuracy and quality of the administrative records. This project will continue to demonstrate the importance of developing systems that can integrate administrative records with survey data.
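Once survey reports are matched to administrative records, the false positive and false negative rates mentioned above reduce to simple tabulations. A minimal sketch with hypothetical matched person-month data:

```python
def misreport_rates(pairs):
    """Compute false positive and false negative reporting rates.

    pairs: (survey_reported, in_admin_records) booleans per person-month.
    False positive rate: share of survey-reported receipt not found in
    administrative data. False negative rate: share of administratively
    recorded receipt that the survey failed to capture.
    """
    false_pos = sum(1 for s, a in pairs if s and not a)
    false_neg = sum(1 for s, a in pairs if not s and a)
    survey_yes = sum(1 for s, _ in pairs if s)
    admin_yes = sum(1 for _, a in pairs if a)
    return false_pos / survey_yes, false_neg / admin_yes

# Hypothetical matched person-months for a program such as SNAP
sample = ([(True, True)] * 90 + [(True, False)] * 10 +
          [(False, True)] * 15 + [(False, False)] * 885)
fp, fn = misreport_rates(sample)
print(f"false positive rate: {fp:.1%}; false negative rate: {fn:.1%}")
```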
Results from the 2010-2013 Field Tests and the 2008 SIPP Panel were used to inform final decisions regarding the design, content, and implementation of the 2014 SIPP Panel. This OMB clearance request is for the extension of the 2014 SIPP Panel through Wave 4.
3. Use of Information Technology
The survey is administered using CAPI and CARI methodologies. Census Bureau field representatives (FRs) collect the data from respondents using laptop computers and transmit them to Census Bureau headquarters via high-speed modems. Automation significantly enhances our efforts to collect high-quality data, with skip instructions programmed into the instrument and information obtained in earlier interview segments fed back to the respondent. Response burden is minimized by incorporating design features that make it easier to collect and record respondent information. For example, screening questions and lead-in questions are built into the automated instrument to skip respondents out of sections of the questionnaire that are not relevant or applicable.
Preliminary analysis from an Internet field test conducted by the SIPP Methods Panel in August and September 2000 indicated that using the Internet as a self-response mode of collection for a complex demographic survey such as SIPP is not feasible. The SIPP automated instrument contains many complicated skip patterns and roster-related components, and the costs of converting such a complex questionnaire to an online survey far outweigh the benefits, even in a multimode environment. The final report is available upon request.
4. Efforts to Identify Duplication
The demographic data collected in the SIPP must be collected in conjunction with the labor force and program participation data in order for the information to be most useful; therefore, although we collect demographic data in conjunction with almost all Census Bureau surveys, we need to continue collecting them in the SIPP. No other current data source provides as comprehensive a set of statistics for analysis as described in question 2 above.
5. Minimizing Burden
The Census Bureau uses appropriate technology to keep respondent burden to a minimum. Examples include: screening and lead-in questions that skip respondents out of sections of the CAPI instrument that are not relevant or applicable to them; flash cards to aid respondents with multiple response categories; and an arrangement of questions and sections of the CAPI instrument that facilitates the flow of administration from one topic area to another. The 2014 SIPP Panel has yielded substantially lower respondent burden than the previous SIPP instrument because respondents complete one interview per year rather than three.
6. Less Frequent Collection
The 2014 SIPP Panel interviews respondents annually, using the previous calendar year as the reference period. One possible consequence of the one-year reference period in the 2014 SIPP Panel, rather than the four-month reference period in traditional SIPP, is increased memory decay among respondents. However, the EHC interview methodology should help alleviate this decay by linking respondents' memories to significant life events. See the explanation in section 2 above.
7. Special Circumstances
There are no special circumstances associated with this clearance request.
8. Consultations Outside the Agency
The OMB established an Interagency Advisory Committee to provide guidance on the content and procedures for the SIPP. That committee, along with the subcommittee on topical modules, has worked actively with the Census Bureau to ensure that the SIPP content and procedures collect the appropriate data and that duplication between surveys is minimized to the extent possible.
Further, the Census Bureau has engaged an American Statistical Association Survey Research Methods (ASA-SRM) advisory group to provide ongoing input into the SIPP development process, and maintains continued involvement with the Committee on National Statistics (CNSTAT) at the National Academies. We have continued to hold regular meetings with both groups. On June 12, 2013, the Census Bureau hosted an ASA-SRM teleconference, at which we provided an update on survey progress and preliminary results from evaluations of the 2012 field test and solicited input from group members. On July 10, 2013, the Census Bureau hosted a public meeting for CNSTAT at the Keck Center of the National Academies. Census Bureau staff presented information on the 2014 panel and results from evaluations of the 2011 and 2012 field tests. CNSTAT members provided feedback and asked a number of questions about both the data content and the structure of the 2014 instrument.
When the initial content reviews were conducted leading up to the 2010 SIPP-EHC field test, the Census Bureau held five subject-area meetings (health, general income and government programs, assets and wealth, labor force, and demographics and other items) as well as subsequent “virtual” meetings with SIPP stakeholders. These consultations were not intended to produce a group consensus or recommendation; all opinions were expressed on an individual basis. Data users indicated a significant need for most of the existing SIPP core content, and select content was added based on stakeholder input to cover topical module content that would otherwise be lost. The 2014 SIPP includes revised content from the 2010-2013 SIPP-EHC instruments as well as revisions developed subsequent to the 2013 SIPP-EHC test.
We published a notice in the Federal Register (Attachment R) on March 28, 2016 (Vol. 81, No. 59, page 17137), inviting public comment on our plans to submit this request. We did not receive any comments.
9. Paying Respondents
SIPP designed a multi-wave incentive experiment to evaluate the efficacy of incentives as a means of increasing respondent cooperation. In Wave 1, the panel was divided into four groups, and each household was randomly assigned to one of the groups. Group 1 was the control group; households in this group were not eligible for an incentive in any wave of the 2014 panel. Group 2 was not eligible to receive an incentive in Wave 1 but was eligible for a $40 debit card in Wave 2; this group was used to retroactively test the efficacy of a propensity model. Group 3 was eligible to receive a $20 incentive in Wave 1 but was not eligible to receive a debit card in Wave 2. Group 4 was eligible to receive a $40 incentive in Wave 1. In Wave 2, Group 4 was split into two subgroups: Group 4A, which did not receive a debit card, and Group 4B, which was eligible for a $40 debit card. Consequently, in Wave 2 only two groups were eligible to receive debit cards (Groups 2 and 4B). A summary of the findings and results of the incentives experiment for Waves 1 and 2 is included in Attachment Q.
For Wave 3 in 2016, Group 1 will continue as in prior waves (no incentive), Group 4A will continue to receive a $40 debit card, and Group 4B, together with the remaining groups, will be subject to selection under an adaptive model. For those in the modeled groups, roughly 22,500 households, 30% will be eligible for incentives. Selection for the Wave 3 incentive in the modeled groups will be made using a propensity model. For all waves, we distribute the incentives centrally from our National Processing Center. This centralized distribution eliminates any discretion on the part of the field representatives, ensuring that only eligible households are given (or promised) incentives. We will evaluate the results of the 2016 experiment before making any final decisions on incentives for Wave 4 in 2017.
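A minimal sketch of the modeled selection described above, assuming a hypothetical response-propensity score per household (the actual adaptive model and its inputs are not specified here):

```python
import random

def select_incentive_households(households, eligible_share=0.30):
    """Flag roughly 30% of modeled households for a Wave 3 incentive.

    households: list of (household_id, propensity_score), where a lower
    score means a lower predicted likelihood of responding. This sketch
    simply targets the least-likely responders up to the eligibility
    share; the actual adaptive model may select differently.
    """
    ranked = sorted(households, key=lambda h: h[1])  # least likely first
    n_eligible = round(len(households) * eligible_share)
    return {household_id for household_id, _ in ranked[:n_eligible]}

# Hypothetical example: 22,500 modeled households with random scores
modeled = [(hh_id, random.random()) for hh_id in range(22_500)]
eligible = select_incentive_households(modeled)
print(f"{len(eligible):,} of {len(modeled):,} households flagged")  # 6,750
```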
10. Assurance of Confidentiality
We are conducting this survey under the authority of Title 13, United States Code, Sections 141 and 182. Section 9 of this law requires us to keep all information strictly confidential. Respondents are informed of the confidentiality of their responses and the voluntary nature of the survey by a letter from the Director of the Census Bureau that will be sent to all participants in the survey (Attachments B and C).
11. Justification for Sensitive Questions
Sources of income and assets are among the kinds of data collected that may be considered sensitive. The Census Bureau takes the position that the collection of these types of data is necessary for the analysis of important policy and program issues, and consequently has structured the questions to lessen their sensitivity.
12. Estimate of Respondent Burden
Based on our experience with 2014 Panel Waves 1 and 2, the burden estimates for Waves 3 and 4 are as follows:
12A. Estimated Annualized Burden Hours FY 2016-2017
|           | Expected Number of Respondents | Waves | Number of Responses per Respondent | Average Burden per Response (in hours) | Total Burden Hours |
|-----------|--------------------------------|-------|------------------------------------|----------------------------------------|--------------------|
| Interview | 64,050                         | 1     | 1                                  | 0.67                                   | 42,700             |
| Total     | 64,050                         | 1     | 1                                  | 0.67                                   | 42,700             |
Approximately 30,500 households are expected to be interviewed for the 2014 SIPP Panel Waves 3 and 4. We estimate that each household contains an average of 2.1 people aged 15 and older, yielding approximately 64,050 person-level interviews per wave in this panel. Based on Wave 1 results, interviews take approximately 40 minutes per adult on average. Consequently, the total annual burden for 2014 SIPP-EHC interviews will be 42,700 hours per year.
12B. Estimated Annualized Burden Costs FY 2016-2017
|           | Total Burden Hours | Hourly Wage Rate¹ | Total Respondent Costs |
|-----------|--------------------|-------------------|------------------------|
| Interview | 42,700             | $10.61            | $453,047               |
| Total     | 42,700             | $10.61            | $453,047               |
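The cost figure is simply the product of the two preceding columns; a one-line check in Python:

```python
# Total respondent cost = total burden hours x hourly wage rate
print(f"${42_700 * 10.61:,.0f}")  # $453,047
```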
13. Estimate of Cost Burden
There are no direct costs to respondents participating in the survey other than the time involved in answering the survey questions.
14. Cost to Federal Government
The production costs of all parts of the 2014 SIPP panel are approximately $35,000,000 in FY 2016 and 2017. That amount is included in the estimate of total costs to the federal government of the Census Bureau's current programs supplied to the OMB.
15. Reason for Change in Burden
The 2014 SIPP Panel extension is being submitted as a continuing collection with updated information; the change in burden reflects revisions to the estimated number of respondents and the amount of time needed to complete the interview.
16. Project Schedule
The 2014 SIPP Panel advance letters (Attachments B and C) are mailed prior to interviewing. The Census Bureau has completed two of the four waves of the 2014 SIPP Panel, which began in February 2014. Wave 1 was conducted from February to June of 2014, and Wave 2 from February to June of 2015. Wave 3 is currently being conducted from April to June of 2016, and Wave 4 is scheduled for February to June of 2017. We will release public-use data products on a schedule to be determined.
We plan to continue the thorough evaluation of the EHC methodology and the new SIPP data structure, which we began with the field tests, using data from the 2014 production panel. Those evaluations focused mainly on data quality and whether the reengineered survey instrument, with its annual interview, was delivering results that compared favorably to those from the existing SIPP instrument. While we will continue that line of inquiry, we also have several additional avenues of evaluation to undertake with the 2014 data.
We will evaluate the survey using both collected data and paradata. One facet of our data evaluations that we will continue from the field tests is comparing the 2014 SIPP results to those from the 2008 SIPP. We have compared results for calendar year 2011 from the 2012 field test to a matched sample from the 2008 panel; see Attachment K for the final report compiling these results. For the field tests, we made the comparison to the 2008 SIPP using a matched sample, which we refer to as the “Matched SIPP” or MSIPP dataset. This was necessary because the field test samples were not nationally representative, so output from the tests was not directly comparable with the full, nationally representative SIPP sample. Therefore, we took a subset of the SIPP sample consisting of households residing in the Primary Sampling Units (PSUs) from which we drew the SIPP-EHC test samples. This workaround is not necessary for 2014, as we are comparing two nationally representative samples covering the majority of calendar year 2013.
Because of the way the two surveys are structured, we are able to compare results from almost exactly the same period for at least part of the SIPP sample. The 2014 SIPP has a yearlong reference period, so the 2014 interview asked about 2013. The 2008 SIPP had a four-month reference period, so the last rotation group of the final wave (Wave 16) of the 2008 SIPP was interviewed in December 2013, with a reference period covering August through November of 2013. As a result, we are able to compare monthly results for the first three quarters of 2013 using the full sample from both surveys, and to use this overlap to evaluate concerns about differential recall.
As each subject-matter area evaluates its content, we expect that, in addition to comparing results from the 2014 SIPP to the 2008 SIPP, analysts will, where possible, also compare the results to those from other surveys, such as the American Community Survey (ACS) or the Current Population Survey (CPS). While we would expect some variation in estimates across surveys due to sample size, survey universe, and similar factors, these comparisons tell us whether SIPP's results are broadly in line with those from other surveys.
The 2014 SIPP instrument will also generate a large amount of paradata that the survey team will continue to use to evaluate the survey. First, we want to run a number of comparisons related to interview timing: for example, how much longer are adults' interviews than children's, and how do proxy interviews differ from self-reports? Those comparisons involve respondents' behavior, but we also want to use paradata to evaluate how much influence field representatives' behavior has on the survey results. We will have results from the FR certification test, so we can determine how much more successful high-scoring FRs are than low-scoring ones. Differential success by certification test score will help identify areas to target for improvement in training and supplemental interviewer observation.
The paradata will also provide metrics that allow us to evaluate respondent burden and to produce better cost estimates. For example, we will know the average number of questions asked during each interview, allowing us to pinpoint content areas that we could streamline or change during the research panel. We will also know how many visits to a household it takes to get a completed interview, so we can use this statistic to estimate our 2015 costs more precisely.
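As a simple illustration of the kind of cost projection this paradata enables (all figures below are hypothetical placeholders, not survey results):

```python
# Hypothetical projection: completed-interview visit counts drive field costs.
households = 30_500             # expected interviewed households per wave
avg_visits_per_complete = 2.4   # hypothetical figure from contact paradata
cost_per_visit = 55.00          # hypothetical fully loaded cost per FR visit

projected_field_cost = households * avg_visits_per_complete * cost_per_visit
print(f"projected field cost: ${projected_field_cost:,.0f}")
```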
The paradata can also provide information we can use to improve the overall survey or correct errors in the existing instrument. We will review item-level don't-know and refusal rates, as well as particularly time-consuming items, to further target instrument improvements. Field representatives have the ability to enter notes, both at the item level and at the case level. We used these notes to identify and correct a number of bugs in the 2013 instrument, and we hope to duplicate this success in 2014. Additionally, we developed an FR debriefing instrument, which allows FRs, once they have completed all of the cases in their workloads, to offer detailed comments on each section of the instrument and to express any concerns or problems they encountered when fielding the survey.
In addition to these evaluation tools, the Census Bureau plans to continue using CARI technology (see section 1) for a sample of items in each interview. We will continue recording items from each household's interview, so long as the household consents to the quality assurance recordings. The Census Bureau plans to continuously evaluate both the quality of the SIPP data and the possibility that CARI could supplement or replace standard reinterview. A summary of our findings based on using CARI in SIPP can be found in Attachments L and M.
In addition, by recording consenting households, supervisors will have the ability to select recordings for the supplemental observation and coaching of interviewers. As the recordings are used during and after data collection, the Census Bureau will use them to develop standards for the use of recorded interviews in coaching interviewers. For example, for Wave 3 we are implementing a new policy under which FRs with a low CARI consent rate are flagged, prompting field supervisors to follow up on the cause of the low rate. We also plan to develop options for using the recordings as supplements to both in-person observation and reinterview. The 2014 SIPP Panel instrument will utilize the CARI Interactive Data Access System (CARI System) described in section 1 for behavior coding, quality assurance, and coaching.
Additionally, the U.S. Census Bureau developed the Contact History Instrument (CHI). CHI enables FRs to record the details of every contact attempt made on a household, such as date and time, respondent reluctance, FR strategies, and FR observations of the housing unit and neighborhood. Unlike other sources (e.g., case-level notes or informal discussions with supervisors), CHI affords the opportunity to document the behavior of both interviewed and non-interviewed households prior to the final outcome of the survey. CHI was launched in SIPP production in May 2011 but will be more heavily utilized during the 2014 SIPP. CHI can be used as a tool for understanding respondent attrition. First, CHI data will allow members of the survey team to assess how much effort FRs exert to close out a case by examining the number of contact attempts by final case outcome. Second, CHI will be used to determine which day of the week and time of day to attempt contact, and to capture any special considerations noted during a previous contact attempt (e.g., prefers previous FR, works evenings). Third, CHI will help gauge respondent “fatigue” by recording any reluctance concerns or behaviors expressed during a contact attempt (e.g., “Too busy,” “Too many interviews”). Moreover, members of the survey team will be able to examine which FR strategies were successful in scheduling and completing an interview. Information collected in CHI will help make the interviewing process more efficient, which should help reduce survey operating costs.
Finally, for a small subset of characteristics, and for a subset of sample areas, we will have access to administrative record data, which we integrated into our evaluation of the field test data and will continue to use for an objective assessment of the validity of survey estimates matched to administrative data (see the explanation in section 2).
17. Display of OMB Approval Information
The expiration date is listed in the advance letter that will be sent to eligible households before each wave’s interview.
18. Exceptions to the Certification
There are no exceptions to the certification.
¹ For individuals, the wage rate is $10.61 per hour, based on the average hourly earnings for employees as reported by the Bureau of Labor Statistics (http://www.bls.gov/news.release/realer.t01.htm).