
American Time Use Survey

OMB Control Number 1220-0175

OMB Expiration Date: 10/31/2022


SUPPORTING STATEMENT FOR

AMERICAN TIME USE SURVEY


OMB CONTROL NO. 1220-0175


  B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS


1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.


The ATUS sample is drawn from the Current Population Survey (CPS), so the ATUS universe is the same as the CPS universe. From this universe, the Census Bureau selects a sample of approximately 69,000 households each month, of which approximately 59,000 households are eligible for interviews. The Census Bureau actually interviews individuals in about 44,000 households each month. For more information about the CPS sample, see chapters 2-1 and 2-2 of Design and Methodology: Current Population Survey, Technical Paper 77 (available at https://www2.census.gov/programs-surveys/cps/methodology/CPS-Tech-Paper-77.pdf).


Households that have completed their 8th CPS interview become eligible for selection in the ATUS. About 2,010 of these households are selected for the ATUS sample each month. Some of these cases will be identified as ineligible; designated respondents may have moved or died or the household may be ineligible for another reason. In 2019, about 1,960 households per month were eligible for selection in the ATUS. The ATUS sample is a stratified, three-stage sample. In the first stage of selection, the CPS oversample in the less populous States is reduced. In the second stage of selection, households are stratified based on the following characteristics: race/ethnicity of householder, presence and age of children, and the number of adults in adult-only households. In the third stage of selection, an eligible person from each household selected in the second stage is selected as the designated person (respondent) for the ATUS. An eligible person is a civilian household member at least 15 years of age.
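
The stage-two stratification and stage-three person selection described above can be illustrated with a short sketch. This is a simplified illustration only; the field names, stratum definitions, and sampling rates below are assumptions, not the Census Bureau's production specifications.

import random
from collections import defaultdict

def assign_stratum(household):
    # Illustrative stratum key: race/ethnicity of householder plus presence and
    # age of children (or number of adults in adult-only households).
    if household["num_children"] > 0:
        child_group = "child_under_6" if household["youngest_child_age"] < 6 else "child_6_17"
    else:
        child_group = "adults_" + str(min(household["num_adults"], 3))
    return (household["householder_race_eth"], child_group)

def select_designated_persons(households, stratum_rates, seed=1):
    # Stage 2: stratified selection of households; stage 3: one eligible person
    # (a civilian household member aged 15 or older) is chosen at random from
    # each selected household as the designated person.
    rng = random.Random(seed)
    strata = defaultdict(list)
    for hh in households:
        strata[assign_stratum(hh)].append(hh)

    designated = []
    for key, members in strata.items():
        n_select = round(len(members) * stratum_rates.get(key, 0.10))
        for hh in rng.sample(members, min(n_select, len(members))):
            eligible = [p for p in hh["persons"] if p["age"] >= 15 and p["is_civilian"]]
            if eligible:
                designated.append(rng.choice(eligible))
    return designated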


The sample persons are then randomly assigned a designated reference day (a day of the week for which they will be reporting) and an initial interview week code (the week the case is introduced). In order to ensure accurate measures of time spent on weekdays and weekend days, the sample is split evenly between weekdays and weekend days. Ten percent of the sample is allocated to each weekday and 25 percent of the sample is allocated to each weekend day. For more information about the ATUS sample see chapter 3 of the ATUS User's Guide: http://www.bls.gov/tus/atususersguide.pdf.
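
As a rough illustration of the day-of-week allocation described above (10 percent of the sample to each weekday and 25 percent to each weekend day), the following sketch randomly assigns reference days in those proportions. The function and identifiers are illustrative, not the production assignment procedure.

import random

# Target allocation of diary reference days: 10 percent to each weekday and
# 25 percent to each weekend day, for an even weekday/weekend split overall.
DAY_WEIGHTS = {
    "Sunday": 0.25, "Monday": 0.10, "Tuesday": 0.10, "Wednesday": 0.10,
    "Thursday": 0.10, "Friday": 0.10, "Saturday": 0.25,
}

def assign_reference_days(person_ids, seed=1):
    # Randomly assign each sampled person a designated reference day in the
    # target proportions.
    rng = random.Random(seed)
    days = list(DAY_WEIGHTS)
    weights = list(DAY_WEIGHTS.values())
    return {pid: rng.choices(days, weights=weights)[0] for pid in person_ids}

# Example: assign days for roughly one month's panel of about 2,010 designated persons.
assignments = assign_reference_days(range(2010))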


Based on the average response rate in 2019,1 a response rate of about 40.1 percent is expected over an 8-week fielding period.2 Thus, about 785 interviews will be completed each month (1,960 eligible respondents x 0.401), or 9,435 annually.
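
The workload arithmetic can be checked directly; the rounded figures below may differ slightly from the published totals because of rounding.

eligible_per_month = 1960     # eligible designated persons per month (2019 basis)
response_rate = 0.401         # expected response over the 8-week fielding period

monthly_completes = eligible_per_month * response_rate   # roughly 786 interviews per month
annual_completes = monthly_completes * 12                 # roughly 9,400 interviews per year
three_year_total = annual_completes * 3                   # roughly 28,300 over the 2023-25 clearance

print(round(monthly_completes), round(annual_completes), round(three_year_total))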


Estimated Number of Respondents for the 2023-25 ATUS

ATUS Universe (Persons): 1,960 eligible cases per month
2019 ATUS Response Rate: 40.1 percent
Estimated ATUS Respondents Annually: 9,435
Estimated Total ATUS Respondents (2023-25): 28,305



2. Describe the procedures for the collection of information including:


  • Statistical methodology for stratification and sample selection,

  • Estimation procedure,

  • Degree of accuracy needed for the purpose described in the justification,

  • Unusual problems requiring specialized sampling procedures, and

  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


  a. ATUS Data Collection: All ATUS interviews are conducted using Computer Assisted Telephone Interviewing (CATI) technology. Interviewers from the U.S. Census Bureau's Contact Center in Jeffersonville, Indiana, conduct the interviews and assign the activity codes.


The ATUS interview is a combination of structured questions and conversational interviewing. For the household roster update, employment status questions, the CPS updates, and module questions, Census Bureau interviewers read the question on the screen and enter the appropriate response. For the time-use diary and subsequent summary questions on childcare, paid work, volunteering, and eldercare, the interviewer more flexibly interviews the respondent, filling in the diary grid as questions are answered.


The data collection instrument includes an edit check that ensures all cells are filled before the interviewer exits the diary. Extensive interviewer training has been provided in how to do conversational interviewing—including when to selectively probe for adequate information to code activities. Refresher training is conducted regularly. Interviews are periodically monitored by supervisors, coaches, and BLS sponsors to evaluate conversational interviewing performance. Because the interviewers also are responsible for coding activity information collected in the time diary, they understand the level of detail that must be collected during the interview. Interviewers never code data from the interviews they conducted. A coding verification and adjudication process is in place to ensure activities are accurately coded. Verification continues to be done at 100 percent to ensure high and consistent data quality.
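
As a simple illustration of a completeness edit of the kind described above, the sketch below checks that every diary row has an activity recorded and that the reported durations account for the full 24-hour day. The row layout and the specific rule are assumptions made for illustration, not the instrument's actual edit specifications.

def diary_is_complete(entries, total_minutes=24 * 60):
    # entries: list of (activity_description, duration_in_minutes) diary rows.
    # Every cell must be filled and the durations must account for the whole day.
    reported = 0
    for activity, duration in entries:
        if not activity or not duration or duration <= 0:
            return False
        reported += duration
    return reported == total_minutes

# Example: a diary accounting for all 1,440 minutes of the day passes the check.
ok = diary_is_complete([
    ("Sleeping", 480), ("Working at main job", 540), ("Eating and drinking", 60),
    ("Household activities", 120), ("Watching television", 240),
])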


  b. ATUS Activity Lexicon: Respondents’ activities are coded using a classification system not in use in any other Federal survey. A coding lexicon was developed to classify reported activities into 17 major categories, with two additional levels of detail. (ATUS coding lexicons can be found on the Internet at: www.bls.gov/tus/lexicons.htm). BLS designed the ATUS lexicon by studying classification systems used for time-use surveys in other countries, drawing most heavily on the Australian time-use survey lexicon, and then determining the best way to produce analytically relevant data for the United States. The coding lexicon developed for the ATUS was extensively tested by U.S. Census Bureau coders and by coders at Westat prior to the start of full production in 2003. The development of the ATUS lexicon is described in "Developing the American Time Use Survey activity classification system," by Kristina Shelley, available at: http://www.bls.gov/opub/mlr/2005/06/art1full.pdf.
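
The lexicon's hierarchical structure (a major category plus two further levels of detail) means each activity code can be decomposed into three tiers. The sketch below assumes the six-digit, two-digits-per-tier coding used in the published lexicons; the category labels shown are abbreviations for illustration, not the official definitions.

def split_activity_code(code):
    # Split a six-digit activity code into its three tiers: major category
    # (first two digits), second tier, and third tier.
    code = f"{int(code):06d}"
    return code[0:2], code[2:4], code[4:6]

# A few illustrative first-tier labels; see www.bls.gov/tus/lexicons.htm for the
# official category names and definitions.
MAJOR_CATEGORIES = {
    "01": "Personal Care",
    "02": "Household Activities",
    "05": "Work and Work-Related Activities",
    "12": "Socializing, Relaxing, and Leisure",
}

tier1, tier2, tier3 = split_activity_code("010101")
print(MAJOR_CATEGORIES.get(tier1, "Unknown"), tier1, tier2, tier3)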


  c. Estimation Procedures: Estimation includes a series of adjustments to account for the probability of selection, a non-response adjustment, and a benchmarking procedure that ensures certain quarterly population counts from the ATUS sample agree with corresponding counts from the CPS.


The ATUS base weight for each ATUS sample case reflects the probability of selection into the ATUS. This weight takes into account the sample design and weighting for the CPS, and subsequent adjustments to the sample before selection into the ATUS. The non-response adjustment increases the weights of the responding sample cases to account for those who did not respond; the adjustment is made separately by reference day and incentive status.
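
A minimal sketch of a cell-based nonresponse adjustment consistent with the description above follows; the cell definition (reference day crossed with incentive status) and field names are simplifications of the production procedure.

from collections import defaultdict

def nonresponse_adjust(cases):
    # Within each adjustment cell, inflate respondents' base weights so that
    # the respondents also carry the weight of the cell's nonrespondents.
    cells = defaultdict(lambda: {"respondent_wt": 0.0, "total_wt": 0.0})
    for c in cases:
        key = (c["reference_day"], c["incentive_status"])
        cells[key]["total_wt"] += c["base_weight"]
        if c["responded"]:
            cells[key]["respondent_wt"] += c["base_weight"]

    adjusted = []
    for c in cases:
        if not c["responded"]:
            continue
        cell = cells[(c["reference_day"], c["incentive_status"])]
        factor = cell["total_wt"] / cell["respondent_wt"]
        adjusted.append({**c, "nr_adjusted_weight": c["base_weight"] * factor})
    return adjusted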


The benchmarking procedure is an iterative raking procedure containing three steps. The first step adjusts the weights of the sample cases so that weighted estimates of persons in various sex-race/ethnicity categories from the ATUS agree with similar population counts from the CPS. The second step of the benchmarking procedure adjusts the weights of the sample cases so that estimates from the ATUS match composite estimates from the CPS for household composition and educational attainment by sex. The third step adjusts the weights so that weighted estimates by age category and sex agree with CPS population counts. In all three steps, weights are adjusted separately for weekdays and weekend days so that population estimates agree with CPS for both day-of-week categories.
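
The three-step benchmarking can be thought of as iterative proportional fitting (raking) to CPS control totals. The sketch below shows the core adjustment loop under simplifying assumptions; in production the three dimensions are those listed above and the adjustment is run separately for weekdays and weekend days.

import numpy as np

def rake(weights, memberships, control_totals, n_iterations=25):
    # weights: initial (nonresponse-adjusted) weights, one per respondent.
    # memberships: dict mapping a raking dimension name to an array of category
    #   labels, one per respondent (e.g., sex-race/ethnicity, age by sex).
    # control_totals: dict mapping (dimension, category) to the CPS control total.
    w = np.asarray(weights, dtype=float).copy()
    for _ in range(n_iterations):
        for dim, categories in memberships.items():
            categories = np.asarray(categories)
            for cat in np.unique(categories):
                in_cell = categories == cat
                current = w[in_cell].sum()
                if current > 0:
                    w[in_cell] *= control_totals[(dim, cat)] / current
    return w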


The probability that an individual participates in an activity on a given day varies across activities. For example, nearly everyone reports sleeping on the diary day, while few people report educational activities. A balanced repeated replication variance estimator is used to calculate standard errors and coefficients of variation for selected estimates.
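
As a sketch of how balanced repeated replication produces standard errors, the function below recomputes a weighted estimate under each set of replicate weights and averages the squared deviations from the full-sample estimate. Construction of the replicate weights themselves (and any Fay adjustment used in production) is outside the scope of this illustration.

import numpy as np

def brr_standard_error(full_weights, replicate_weights, values):
    # values: the quantity being estimated for each respondent (e.g., minutes
    # spent in an activity). Each element of replicate_weights is a full set of
    # weights for one balanced half-sample replicate.
    def weighted_mean(w):
        w = np.asarray(w, dtype=float)
        return np.sum(w * values) / np.sum(w)

    values = np.asarray(values, dtype=float)
    theta_full = weighted_mean(full_weights)
    replicate_estimates = [weighted_mean(rw) for rw in replicate_weights]
    variance = np.mean([(t - theta_full) ** 2 for t in replicate_estimates])
    return theta_full, np.sqrt(variance)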


A complete description of the estimation procedures for the ATUS can be found in chapter 7 of the ATUS User’s Guide: www.bls.gov/tus/atususersguide.pdf.


  d. ATUS Estimates: Four types of estimates are used to produce published ATUS tables: average hours per day, participation rates, number of participants, and average hours per day of participants.


Average Hours per Day: The average number of hours spent per day engaging in activity j for a given population, $\bar{T}_j$, is given by

$$\bar{T}_j = \frac{\sum_i fwgt_i \, T_{ij}}{\sum_i fwgt_i}$$

where $T_{ij}$ is the amount of time spent in activity j by respondent i, and $fwgt_i$ is the final weight for respondent i.


Participation Rates: The percentage of the population engaging in activity j on an average day, $P_j$, is computed using

$$P_j = \frac{\sum_i fwgt_i \, I_{ij}}{\sum_i fwgt_i}$$

where $P_j$ is the percentage of people who engaged in activity j on a given day, and $I_{ij}$ is an indicator that equals 1 if respondent i engaged in activity j during the reference day and 0 otherwise.


Number of Participants: The number of persons engaging in activity j during an average day, $Num_j$, is given by

$$Num_j = \frac{\sum_i fwgt_i \, I_{ij}}{D}$$

where $Num_j$ is the number of persons participating in activity j during an average day, $I_{ij}$ is an indicator that equals 1 if respondent i participated in activity j during the reference day and 0 otherwise, and $D$ is the number of days in the estimation period (365 for annual averages in non-leap years, for example).


Average Hours per Day of Participants: The average number of hours spent per day engaged in activity j by participants, $\bar{T}_j^{\,P}$, is given by

$$\bar{T}_j^{\,P} = \frac{\sum_i fwgt_i \, T_{ij}}{\sum_i fwgt_i \, I_{ij}}$$

where $T_{ij}$ is the amount of time spent in activity j by respondent i, $fwgt_i$ is the final weight for respondent i, and $I_{ij}$ is an indicator that equals 1 if respondent i participated in activity j during the reference day and 0 otherwise.
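
A compact sketch tying the four formulas together: given final weights and the time each respondent reported in activity j, the function below computes all four published estimate types. Deriving the participation indicator from positive reported time is an assumption made for this illustration.

import numpy as np

def atus_estimates(fwgt, hours_in_activity, days_in_period=365):
    # fwgt: final weight for each respondent.
    # hours_in_activity: hours each respondent spent in activity j on the diary day.
    fwgt = np.asarray(fwgt, dtype=float)
    T_ij = np.asarray(hours_in_activity, dtype=float)
    I_ij = (T_ij > 0).astype(float)                      # participation indicator

    avg_hours_per_day = np.sum(fwgt * T_ij) / np.sum(fwgt)
    participation_rate = np.sum(fwgt * I_ij) / np.sum(fwgt)
    number_of_participants = np.sum(fwgt * I_ij) / days_in_period
    avg_hours_of_participants = np.sum(fwgt * T_ij) / np.sum(fwgt * I_ij)
    return (avg_hours_per_day, participation_rate,
            number_of_participants, avg_hours_of_participants)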



3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.


A number of efforts have been undertaken to maximize ATUS survey response rates.

  a. Field Test. The 2001 field test examined the effectiveness of incentives, sending advance materials by priority mail, doubling the number of eligible interviewing days by using a day-of-week substitution methodology, calling in advance to set interview appointments, “recycling” cases for field visits, and extending the field period from 4 to up to 8 weeks. (See Attachment B.)

  b. Use of Incentives and recycling cases to the field. As discussed in Part A, section 9, testing showed that incentives significantly increased response rates. “Recycling” cases to the field—that is, turning nonresponse cases over to interviewers to conduct face-to-face interviews in the respondent’s home—was also effective in maximizing response rates, particularly for no-telephone-number households. However, incentives to all respondents and recycling were both cost prohibitive.


  c. Appointment setting. Calling in advance to set an appointment (“proactive appointment setting”) did not improve response, and completed interviews using that strategy required 70 percent more contact attempts than other completed interviews. As a result, advance appointment setting was rejected.


  d. Day-of-week substitution. Allowing day-of-week substitution increased response rates by about 4 percentage points over 8 weeks; however, this practice led to a disproportionately high number of completed interviews on Wednesdays and a disproportionately low number on Fridays. To maintain integrity in the day-of-week distribution of the sample, substitution was also rejected.


  e. Use of priority mail. Consistent with survey methods literature, priority mail appears to have increased response rates in the ATUS field test—by over 10 percentage points. It is relatively low cost to implement ($5.54 per mailing in 2021) and is currently used for sending advance materials.


  f. Fielding period. The optimal field period length varies depending on incentive use. Without an incentive, the field test showed that an 8-week fielding period was required to approach a 70 percent response rate (69 percent was achieved in the field test). As a result, this 8-week fielding period was adopted for full production. To even out workload and measure time use across days of the month, one quarter of the monthly sample is introduced each week for 4 weeks. Active cases are called up to 7 times per day on one eligible day each week for 8 weeks.


  g. Incentive expansions. Two OMB-approved incentive expansions were implemented over the years. As of 2013, incentives are sent to designated persons (DPs) in no-telephone-number households as well as to individuals for whom the Census Bureau assigned call outcome codes of 108 Number not in service; 109 Number changed, no new number given; 124 Number could not be completed as dialed; and 127 Temporarily not in service after the first week of collection. The use of incentives has helped to boost response among difficult-to-reach populations. Individuals who are sent incentives are more likely to be black or of Hispanic or Latino ethnicity, to have less education, and to have lower household incomes than members of households that provide phone numbers.


BLS fielded a new incentive study starting in January 2020. The ATUS incentive study had two goals. The first goal tested the effectiveness of using $0, $5, and $10 cash incentives, where effectiveness is measured in terms of survey response. The second goal tested whether a $5 or $10 cash incentive can boost survey response among certain underrepresented populations. In this study, the focus was on sampled persons who are 15 to 24 years old. Data for the incentive study were collected from the December 2019 sample through the September 2021 sample. Data are currently being analyzed to inform the future use of incentives in the ATUS. In the interim period, after the cash incentive study ended but before implementation of a new incentive plan, and as discussed in the previously OMB-approved cash incentive study, ATUS sends $5 cash incentives to incentive cases. (See Attachments H and I.)


  h. Toll-free number provided to DPs. To maximize response, a toll-free number is provided to all eligible respondents in the advance materials. They can use the number to call in and set an appointment for an interview or, if they call on their interview day, to complete the interview.


  i. Advance materials revised. In 2005, an examination of the ATUS advance materials was undertaken and the advance materials were subsequently revised. The advance materials were reviewed and updated again in 2012-13. The advance letters were revised to address questions commonly asked by respondents during their first contact with interviewers. The ATUS brochure was updated and redesigned to appeal to more respondents. The debit card and instruction sheet also were redesigned to appear more prominently in the advance mailer envelope. These materials were modified based on feedback received from expert reviewers and focus groups of ATUS interviewers who examined existing materials.


  j. Respondent Web site. BLS developed a website to address common respondent questions about the survey. Its web address is included in the advance letters (http://www.bls.gov/respondents/tus/home.htm).

  k. Fax letters. BLS worked with Census to develop "we've been trying to reach you" letters to fax to telephone numbers that reach fax machines. Like an answering machine message, the fax letters ask the sampled person to call the Census Bureau and complete an interview.


  l. Interview Operations Analysis. In 2004, telephone call center operations were examined to determine if measures could be taken to increase response rates, and three basic operations were changed. First, the ATUS staff learned that while many surveys set calling goals for interviewers, the call center management was not providing ATUS interviewers with daily or weekly goals. Beginning in the summer of 2004, the telephone center management set daily goals for ATUS interviewers, providing concrete guidelines for how many completed calls are desired. Although the interviewers do not always meet their goals, these goals help the telephone center management measure daily progress and motivate the interviewers. Second, it was discovered that because of the way call blocks (times) were scheduled, many calls were being made between about 4:30 pm and 5:00 pm, before many people were home from work. Methods for calling were changed so that more calls would be made after 5:30 pm, when people who work regular 9-5 hours would be more likely to be home. Finally, the Census Bureau conducted more research into invalid phone numbers in an attempt to find valid phone numbers for the contact person.


  m. Interviewer job aids. Interviewers have job aids—answers to frequently asked questions—designed to help answer questions about the survey and to assist them in gaining respondents' cooperation.


  n. Interviewer incentives. An interviewer incentive study was considered but subsequently rejected because implementing interviewer incentives was determined to be cost prohibitive.


  o. Newsletters. In cooperation with Census, BLS periodically produces newsletters that are designed to motivate and inform interviewers.


  p. Interviewer training. BLS and Census have conducted workshops for interviewers on techniques to gain cooperation from respondents, and much of the material developed for this training was incorporated into other interviewer training courses. Interviewer operations also have been scrutinized and revised to increase the probability of completed interviews, such as redesigning the call blocks to add more call attempts during evening hours.


  q. Studies to understand nonresponse and possible nonresponse bias. In addition to the efforts listed above, a number of studies have been done to understand nonresponse in the ATUS. More detail about these studies appears in Section 4.


  r. Web collection of ATUS diary. BLS is exploring a mixed-mode design for the ATUS. A move to a mixed-mode design could potentially help ATUS improve response and be prepared for the survey climate of the future. BLS is planning additional research for the development of a mixed-mode design. More detail about these projects appears in Section 4.



4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


Before the ATUS went into full production, extensive testing was done on the operations methodologies, question wording and interpretation, and activity coding. All questions added to the survey over the years also were subject to extensive testing before their implementation.




  a. Completed research


1. Operations Field Test. The ATUS presents special operational challenges because a designated person—rather than any household member—must be contacted on a specific day of the week. The field test was designed to examine methods to maximize respondent contact and response. The 2001 operations field test is mentioned throughout this clearance package and is described in more detail in Attachment B.


2. Cognitive testing

  a. Diary: None of the completed cognitive tests focused specifically on the time diary, although the ATUS introduction, instructions, roster update, time diary and associated contextual information were administered as part of all tests. As a result, respondents’ reactions to each of these survey elements were used to modify and improve the survey. Modifications based on respondent reactions include:


Time diary instructions

Time diary instructions were shortened so that they take approximately one minute to administer. Two major modifications were made to the original instructions. First, the original instructions did not specify that respondents needed to estimate the duration of each activity; as a result, respondents often “laundry-listed” activities without attributing times to each activity. Language informing respondents to estimate activity duration was added to the time diary instructions, and fewer respondents have since required prompting to provide time estimates. Second, the original instructions included examples of how to report activities. Research showed that these examples were not helpful to respondents because they failed to match respondents’ daily circumstances (Stinson, 2000). Dropping the examples from the time diary instructions shortened the time it took to administer them, and of the nearly 100 people who participated, fewer than 10 respondents requested examples that would specify the level of detail needed in the time diary.


“Who was with you?”

Stinson3 and Fricker & Schwartz4 found that the question, “Who was with you?” was open to multiple interpretations. Some respondents interpreted the question as meaning, “Who was near you?” whereas others understood it to mean, “Who participated in the activity with you?” In order to make the probe clearer, ATUS interviewers ask, “Who was in the room with you?” when respondents are at their own or someone else’s home. They ask, “Who accompanied you?” for activities that occur in other locations. Respondents are not asked “Who was in the room with you?” when they report sleeping, grooming, personal activities, or being at work. In 2008, the questions “Who was with you?” and “Who accompanied you?” were cognitively tested for times when respondents reported working or doing work-related activities. None of the respondents had difficulty remembering who was with them while they were working, although some respondents did not provide the level of detail that was desired. To ensure an appropriate level of detail is collected, respondents who say they were with “co-workers” are asked the follow-up question, “By co-workers, did you mean you were with your manager/supervisor, people whom you supervise, or other co-workers?”


  b. Childcare: Focus groups and two rounds of cognitive testing were conducted to refine the wording of the childcare summary question.5,6 Based on the findings from those studies, reports of care for household children are restricted to times during which at least one child under the age of 13 was awake. The phrase “in your care” was selected to convey that the parent or care provider was responsible for and mindful of at least one child 12 years old or younger. For more details, please see the summary of cognitive lab #2106 that was provided to OMB in July 2001.


  c. Paid work: Stinson7 and Schwartz, Lynn & Gortman8 conducted three rounds of cognitive testing of the paid work summary questions. The major findings were that respondents interpreted both concepts, activities done for one’s job or business and activities done for pay, more broadly than researchers had intended. Based on respondents’ reports, activities done for one’s job or business can include networking or relationship-building activities and activities done for pay can include any income-generating activity that is not one’s main or second job.


  d. Eldercare Questions: In January 2011, questions on eldercare were added to the ATUS. These questions were cognitively tested on both caregivers of the elderly and the general public, and the results of these tests were used to refine the wording of the eldercare summary questions. The questionnaire was tested for clarity, comprehension, length, potential sensitivity, and the flow through the instrument. Respondents were asked if they had provided care or assistance to an adult who needed care because of a condition related to aging. The phrase “condition related to aging” was selected because the focus group results and research showed disagreement on a specific age at which eldercare begins. Cognitive testing of the phrase and the questions showed the wording was effective in identifying individuals who had provided care to the elderly.


3. Coding Tests. The ATUS coding lexicon was developed for and is unique to the ATUS. While originally based on the system used in the Australian Time-Use Survey, the system was modified a great deal to enable more detailed and flexible analysis of time-use data. Modifications were driven by results of four coding tests and by issues brought up in production. The first three tests were conducted with Census Bureau coders and the fourth with Westat coders. The tests examined the intuitiveness of the coding system, accuracy rates by activity tier, inter-coder variability, and coding software usability. A systems test of the coding verification and adjudication process was also completed in October 2001.


4. Software tests. The ATUS data collection instrument is programmed in modules or blocks using Blaise software. Each block was extensively tested at Census and BLS prior to full production. Testing scenarios were repeated with each version of the instrument prior to production, and additional testing scenarios are run any time a change is made to the instrument to ensure that all modifications are correct and that there are no unintended consequences. “Audit trails” capturing every key stroke are used to investigate problems. Instruments are also tested by Census Bureau interviewers prior to being used.


5. Advance diary test. Early in ATUS development, survey methodologists recommended sending diaries with the ATUS advance materials to facilitate recall and improve data quality. There was some concern among the survey sponsors about sending diaries in advance without testing effects on response.


BLS awarded a contract to the National Opinion Research Center (NORC) to conduct a split-panel test of advance diaries in April 2002. Half of the respondents in this test (n = 225) received an advance diary and then completed a telephone interview that used conversational interviewing to elicit the details needed for coding. The other respondents (n = 225) received the same advance materials with the exception of the diary and engaged in the standard time-use interview. NORC found that sending an advance diary increased burden and did not improve data quality or response. The NORC final report was sent to OMB in December 2003.


After receiving the NORC test results, the BLS Office of Survey Methods Research further analyzed the data using multivariate analyses. This analysis confirmed NORC’s results. As a result, no diary was added to the ATUS advance materials.


6. Simultaneous activities. Secondary or simultaneous activities are considered one of the significant dimensions of an activity that should be captured in a time diary.9 Early research at BLS as well as experience by Statistics Canada indicated that the systematic collection of secondary activities could be problematic in a telephone survey. While a paper diary form simply needs to include a column for secondary activities in order for respondents to know that they should record them, in a telephone survey, interviewers must probe, “Were you doing anything else?” for each activity in order to collect information in a systematic and unbiased way. Probing for secondary activities can quickly become burdensome and introduces the risk of fatiguing the respondent early in the interview. Additionally, Stinson10 found that respondents could not attribute times to secondary activities, which would weaken their analytical relevance. Nevertheless, research participants, members of advisory councils, and survey methodologists have all recommended collecting simultaneous activities.


In 2003, BLS solicited proposals from NORC to look at the systematic collection of simultaneous activities. The study was necessarily complex and costly. BLS decided to delay cognitive work on this subject until some empirical data on simultaneous activities were available from full production.


ATUS interviewers ask respondents to report the main activities they did on the diary day. From 2003-12, when respondents voluntarily provided information about simultaneous activities, interviewers recorded but did not code this information. Activity codes were assigned to some of these data for a 2011 research study, which revealed that ATUS respondents’ infrequent reports of secondary activities accounted for much less time than traditionally collected secondary activity reports.11 The study went on to conclude that the ATUS data on simultaneous activities were of low quality and limited value; because of this, in early 2012, Census interviewers stopped recording voluntary reports of simultaneous activities in the ATUS.


7. Advance Materials Analysis. In 2004, two studies were undertaken to re-examine the ATUS advance materials. An expert review of the materials and focus groups with ATUS interviewers were conducted to determine how the advance materials might be re-designed to better influence designated persons to participate. Findings from both studies indicated the letter should be shorter and the brochure should have a more appealing design, including switching from a dichromatic to a full color scheme. In addition, the focus groups and expert reviewers recommended revising the brochure to address respondents' questions. In 2005, extensive revisions were made to the advance materials based on these studies. The advance materials were extensively reviewed, modified, and updated again in 2012-13. Changes were made based on feedback from expert reviewers and focus groups of ATUS interviewers.


8. Incentive experiment. In line with terms of clearance from the 2003 OMB package, the feasibility of an incentive experiment conducted in a production environment was considered. A BLS and Census Bureau interagency team discussed the development of an experiment, with the intention of conducting it in fiscal year 2005. Planning and assessment meetings determined that the incentive experiment was not a viable option for increasing response rates due to the costs associated with providing incentives to all ATUS participants.


9. Item nonresponse. BLS investigated the incidence of missing and imputed ATUS data to assess the quality of ATUS variables. Item nonresponse was found to be quite low in the ATUS, with most variables having an item nonresponse of well under 2 percent. The two variables describing weekly and hourly earnings had higher incidences of nonresponse compared to other variables (see chapter 6 of the ATUS User’s Guide at https://www.bls.gov/tus/atususersguide.pdf).


10. Cell phone response analysis. Meekins and Denton12 used the ATUS to examine the impact of calling cell phone numbers on nonresponse and measurement error. They found that cell phone respondents have higher noncontact rates, but have refusal rates that are similar to landline respondents. They also note that there is no significant difference in the measurement error rates or in estimates of time use for both groups.


11. Call block research. Meekins13 looked at call patterns to determine whether greater efficiencies could be attained without biasing ATUS data. Using ATUS call history data from 2006 to 2007, he found that a small number of ATUS sample units receive a disproportionately large amount of effort. His results also showed that dialing around the same time as a previous contact is a positive predictor of subsequent contact, that refusing to provide income information on the CPS is a negative predictor, and that calling efficacy is greater in the later hours of the day. The study concluded with several recommendations for optimizing the efficiency of calls to sample units.


12. Behavior coding. Behavior coding is a technique that has been successfully utilized with event history calendar data collection14 to understand how interviewers ask questions and provide clarification and feedback to respondents, how respondents interpret questions and recall answers, and how interviewers and respondents interact during the survey task. ATUS interviewers are trained in conversational interviewing techniques, which allow for interventions with a respondent to help him or her stay on track when remembering the day’s activities, and activity sequences and timing. Research from this study evaluated how conversational interviewing and specific recall techniques were used by interviewers and their effect on data quality, which helped aid in instrument development and interviewer interventions.


13. Recall period research. BLS worked with outside researchers to examine whether an extended recall period affects data quality. As a whole, the research project examined various data quality measures—for example, the number of activities per day, the percent of “good” or “bad” time diaries—by the length of the recall period. Results from the analysis of ATUS data revealed no major differences in data quality that could be attributed solely to the length of the recall period. However, results from investigating other time diary surveys demonstrated some declines in data quality with longer recall periods; for example, respondents who said they completed the diary more than 24 hours after the diary day had 14-25% fewer activities in their diaries than respondents who said they completed their diaries while they went about their days. With the indication that data quality might be negatively affected and the increased costs and managerial challenges associated with developing and managing a collection system that uses two recall periods, BLS decided not to pursue additional research on an extended recall period.


14. Examining ATUS cases that do not meet minimum data quality standards. In 2019-20, BLS examined ATUS cases that were removed because they contained fewer than five activities in the diary or more than 3 hours of time recorded as “don’t know” or “can’t remember.” The research examined differences in demographic characteristics between these cases and those on the public use files to assess whether the exclusion of these cases affected the representativeness of the ATUS. It also examined data collection factors that may be linked to lower data quality and whether these respondents had a different propensity to complete surveys than other respondents. Analysis showed that these cases were disproportionately older women. While there may be some value to researchers in having data for these respondents included in the published files, and estimates produced with these cases may more accurately reflect what we know about time use for older cohorts, implementing a relaxed data quality standard would lead to series breaks. BLS will revisit the minimum data quality criteria when ATUS implements a re-design.


15. Research done on ATUS response and nonresponse. Numerous studies have been done to understand ATUS survey response and nonresponse. BLS, the Census Bureau, and researchers who are not affiliated with these agencies all have been active in this area.


a. Census Bureau Response Rate Investigation. A team at the Census Bureau compared response rates achieved in the beginning of 2003 with higher rates achieved in 2002, just before ATUS full production began. The team tested several hypotheses in an attempt to determine why response declined at that time. The team examined whether there were changes in the number or timing of call attempts, and whether the hiring of new interviewers just before full production or problems with the call scheduling software might have affected response. While they found some spikes in times of day that people refuse, they did not find a strong pattern for day of week or time of day effects in refusal rates. They also found that there was no relationship between interviewers’ ATUS refusal rates and their years of experience interviewing. In a multivariate analysis, the team found a correlation between a refusal to provide income data in CPS and a refusal to participate in ATUS. This information could be valuable for predicting nonresponse and/or targeting refusal conversion efforts.

b. Response Analysis Survey. In 2004, qualitative research was completed to look at reasons for nonresponse in ATUS. In January 2004, the BLS developed and the Census Bureau conducted a Response Analysis Survey (RAS). Census Bureau interviewers attempted to contact a sample of both respondents and non-respondents to the ATUS to learn more about persons’ propensities to respond or not to the ATUS, and to better understand which features of the survey might be correlated with response propensity. The study focused on refusals rather than noncontacts, as the former are the main contributor to ATUS non-response. It was restricted to English-speaking adults selected for the ATUS. The primary reason that RAS respondents mentioned for not participating in the ATUS was that they were tired from responding to the CPS. The RAS also included questions about whether respondents read the advance materials, visited the web site, or sent e-mails asking for information, as well as their impressions of Census Bureau interviewers. Based on the responses to the RAS, the BLS examined how best to alter survey operations to increase designated persons’ propensities to respond. Advance materials were revised to explain more clearly the reasons why some CPS respondents were “re-selected” for the ATUS, and the ATUS brochure was redesigned to increase the proportion who read it, and to feature the web site address more prominently. The RAS report is available on the Internet at https://www.bls.gov/osmr/research-papers/2004/st040140.htm.


c. Alternative contact strategies. Using simulated data, Stewart15 examined the effect of using different contact strategies in a telephone survey. He found that allowing for day-of-week substitution resulted in a systematic bias, and that data collected would overstate the amount of time spent away from home. By contrast, a designated-day approach resulted in little bias.


d. Analysis of returned mail. Census Bureau staff conducted an analysis of returned advance mailings and postcards to assess how effective their address review and correction process was, what the impact on response rates would be if addresses identified as movers were reassigned as “not eligibles,” and how the mail return rates differed between incentive and non-incentive cases. The authors considered reassigning the 06 mover codes, the 08 address correction provided codes, and other codes. The research concluded that converting all returned mail cases currently coded as eligible to not eligible would only improve the overall response rate by a maximum of 1.22%, about half of which would be due to the 06 and 08 codes. It was also discovered that twice as many incentive cases as non-incentive cases had advance mailings returned, and the cases that had advance mailings returned were about one-third as likely to complete the ATUS interview. Incentive cases are a special concern because respondents must contact the call center to complete the interview, and this contact information is provided in the advance letter. In order to increase incentive case response rates, Census Bureau staff now researches addresses for all incentive cases that had mail returned.


e. Substitution of DPs and of diary days. In 2012-13, BLS contracted with Westat to provide guidance on some methodological changes that could be made to the ATUS with the intent of increasing response rates. The two options considered were allowing for a substitution of diary days and a substitution of designated persons (DPs) if the DP is not available on the assigned diary day. Westat found evidence that some day-of-week substitutions might successfully raise response rates without inducing bias; for example, they found that Monday – Thursday diaries are relatively interchangeable. Westat designed an experiment that could test these theories. Westat advised against allowing a substitution of DPs in the ATUS design because of concerns that doing so may bias the data.


f. Nonresponse bias analyses. BLS, the Census Bureau, and outside researchers have completed a number of nonresponse bias analyses over the years. In 2005, O’Neil and Dixon conducted an in-depth analysis to examine patterns of ATUS non-response using CPS data. This analysis included breaking out nonresponse by a variety of demographic characteristics, using logistic analysis to determine variables related to nonresponse, and building a propensity score model to examine differences in time-use patterns and to assess the extent of nonresponse bias. Findings showed race and age to be strong predictors of ATUS refusals and noncontacts. The study also showed that estimates of refusal and noncontact bias were small relative to the total time spent in the activities. A follow-up to this analysis (Dixon, 2006) found no nonresponse biases in the time-use estimates, probability of use of time categories, or the relationship between the categories. The study further concluded that any potential biases identified were small.


The ATUS survey methodology files are available to the public, enabling outside researchers to examine survey methods issues. Abraham et al. found that people weakly integrated into their communities were less likely to respond to ATUS, mostly because they were less likely to be contacted. They also found little support for their hypothesis that busy people are less likely to respond to the ATUS. The authors compared aggregate time use estimates using the ATUS base weights without adjustment for nonresponse, using the ATUS final weights with a nonresponse adjustment, and using weights that incorporated the authors’ nonresponse adjustment based on a propensity model. They found the three sets of estimates to be similar.


Letourneau and Zbikowski (2008) analyzed nonresponse in the ATUS using 2006 data. Some results from this study were consistent with previous nonresponse bias studies, such as lower response rates for those living in urban areas and higher refusal rates for those missing the CPS income variable. However, this study contradicted previous studies in several areas. Contrary to previous studies, this Census study did not find lower response rates for the unemployed or those not in the labor force. It also found lower contact rates for people who work longer hours, and for blacks and Hispanics.


A 2009 paper (Abraham, Helms, and Presser, 2009) found that ATUS respondents were more likely to be volunteers than the general population, and that therefore the ATUS estimate of volunteer hours is biased upward. The authors estimated the associations between respondent characteristics and volunteer hours, and found them to be similar to those from the CPS Volunteer Supplement.


Fricker and Tourangeau (2010) examined characteristics that affect nonresponse using 2003 data. Many of their findings were consistent with earlier studies regarding age, race, income, and respondent busyness on response rates. They found higher nonresponse for those who skipped the CPS family income questions, had been a CPS nonrespondent, or were not the respondent in the last CPS interview. The authors also found that removing cases with a high nonresponse propensity from the sample produced small but significant changes in the overall time use estimates.


Dixon and Meekins (2012) focused on nonresponse bias and measurement error in the ATUS. Using a propensity score model to examine differences in time use patterns and to assess the extent of nonresponse bias, the authors found the estimates of bias were very small from all sources.


Dixon used 2012 data to examine nonresponse using propensity models for overall nonresponse as well as its components, refusal and noncontact. He also explored the possibility that nonresponse may be biasing the estimates due to the number of zeroes reported, by comparing the proportion of zeroes between the groups. The author found no nonresponse bias, but did find the level of potential bias differed by activity. The differences between the reported zeroes from the survey and the estimated zeroes for nonresponse were very small, suggesting that reasons for doing the activity were likely not related to the reasons for nonresponse.


Earp and Edgar (2016) assessed the potential for nonresponse bias in the ATUS using 2014 data by comparing the characteristics of ATUS respondents and nonrespondents using a regression tree model using demographic variables from the CPS. To determine the potential for nonresponse bias, they compared the CPS employment rate of ATUS respondents and nonrespondents, since employment status is expected to be related to time use. Results showed the employment rate did not vary significantly between ATUS respondents and nonrespondents in the overall sample, indicating little to no nonresponse bias for ATUS estimates correlated with CPS employment status.


See Attachment G for a table containing information and hyperlinks (where available) for most of the research cited above.


  b. In progress and planned research


  1. Interviewer/coder debriefings. Many interviewer debriefings have been conducted since full production began in 2003. These debriefings have illuminated procedural difficulties and identified questions that interviewers feel pose problems for respondents. They also assist in clarifying interviewer questions and improving future training.


  2. Cash Incentives. As discussed in Part B, section 3, BLS conducted an incentive study to test the effectiveness of using $0, $5, and $10 cash incentives on survey response. One goal of the study was to test whether a $5 or $10 cash incentive can increase survey response, particularly among underrepresented populations. Data for the study were collected from the December 2019 sample through the September 2021 sample. Data are currently being analyzed and results are forthcoming.


  3. Web collection of ATUS diary. BLS is exploring a mixed-mode design for the ATUS. A move to a mixed-mode design could potentially help ATUS improve response and be prepared for the survey climate of the future. While some of these projects have been completed, this work is ongoing. These projects include:


    a. ATUS Westat Web Collection Study: In 2015, BLS consulted with Westat to explore the feasibility of using a mixed-mode design that includes the collection of ATUS data via a web instrument. The project included a literature review of web and mixed-mode data collection and provided recommendations on the design of web data collection for the ATUS, including respondent allocation, contact strategies, and question design considerations for a web instrument. The project also included a discussion of comparability issues between web and telephone data collection, with methods to evaluate the proposed design including errors of nonobservation (e.g., coverage and nonresponse error) and errors of observation (e.g., measurement error). Westat also provided a preliminary mockup of the recommended diary design.


    b. ATUS Self-Administered Activity Lexicon: In 2020, BLS worked with survey experts to develop and test a modified ATUS activity lexicon that can be used in a self-administered time diary for web collection. Additional testing is planned.


    c. ATUS Web Diary Prototype: BLS collaborated with Westat again in 2020-21 to develop a self-administered web diary prototype for the ATUS. Westat reviewed measures collected in the current CATI version of the ATUS and assessed factors that will affect web design and which features can be implemented in a self-administered web mode. Westat provided BLS with recommendations for web administration and design considerations to ensure uniform measurement across diverse web devices (e.g., mobile, tablet, laptop, or PC) and modes (web and CATI). Westat used their recommendations to develop a self-administered web diary prototype that can be used for further development and testing of a web instrument.


    d. ATUS Web Diary Test: In 2021-22, BLS collaborated with NORC to program and test a self-administered web-based activities diary based on a BLS-provided design (Westat 2020-21 design) to evaluate the efficacy of the instrument relative to the current CATI interview. The test collected response data, as well as paradata, to allow BLS and NORC to explore technical and burden-related issues associated with responding to the self-administered diary. NORC provided BLS with an experimental design that allowed for a comparison of responses to the web-based and CATI data. Findings and recommendations from this test are forthcoming.


    e. ATUS Eldercare Questions for Web Collection: In 2021-22, BLS collaborated with NORC to design and test introductory language, questions, and screen layouts for collecting the ATUS eldercare data in a self-administered web survey. Findings and recommendations from the project are forthcoming.


  4. Envelope Test. In 2019, studies done by the American Community Survey determined that the use of new, updated "Census Branding" envelopes increased the visibility of Census materials and surveys. In 2022, the ATUS is testing the use of Census branding on First Class envelopes against USPS Priority mail envelopes to determine whether a change in the mailout material improves response and whether it would be more cost effective to use.


  5. Current Nonresponse Bias Study: BLS researchers from the Office of Survey Methods Research are conducting an ATUS nonresponse bias study with recent ATUS data.



5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


The following individuals may be consulted concerning the statistical data collection and analysis operation:


Statistical Design:

Tim Trudell

Demographic Statistical Methods Division

U.S. Census Bureau



Statistical Analysis:

Patrick Carey

Associate Commissioner

Office of Current Employment Analysis

Bureau of Labor Statistics


Data Collection/Survey Design:

Beth Capps  

Assistant Survey Director for the American Time Use Survey

Associate Director for Demographic Programs

U.S. Census Bureau





Attachments:

A - ATUS Instrument Specifications

B - Field Test Analysis

C - Legal Authority Backing

D - Advance Letters

E - Advance Brochure

F - Table A7 2021 Median Hourly Earnings

G - Summary of Nonresponse Bias Studies

H - ATUS Cash Incentive Study Proposal

I - ATUS Cash Incentive Study Extension

J – ATUS Questionnaire






1 Data collection and response rates for the ATUS were affected by the COVID-19 pandemic in 2020. Data were not collected for the ATUS from mid-March to mid-May 2020. Thus, the response rates for 2019 are a better representation of expected response rates in 2023-25.

2 During processing, a small number of interviews are removed because they do not meet the minimum ATUS data quality standards: they 1) have fewer than 5 activities, 2) have more than 180 minutes of “don’t know” or “refused” activities, or 3) both. The response rate does not include these cases.

3 Stinson, Linda. (2000). Final report of cognitive testing for the American Time Use Survey. Bureau of Labor Statistics internal report, August 2000.

4 Fricker, Scott and Lisa K. Schwartz. (August, 2001). “Reporting absences from home: Results of cognitive testing of the American Time Use Survey’s missed days summary question.”


5 Fricker, Scott and Lisa K. Schwartz. (August, 2001).

6 Schwartz, Lisa K. (February, 2001). “Minding the Children: Understanding how recall and conceptual interpretations influence responses to a time-use summary question.”

7 Stinson (2000)

8 Schwartz, Lisa K., Jayme Gortman, and Siri Lynn. (July, 2001). “What’s work? Respondents’ interpretations of work-related summary questions.”


9 Harvey, Andrew S. “Guidelines for Time Use Data Collection.” Social Indicators Research 30 (1993): 197-228.

10 Stinson (2000)

11 Drago, Robert. “Secondary Activities in the 2006 American Time Use Survey.” Bureau of Labor Statistics Working Paper, February, 2011.


12 Meekins, Brian and Stephanie Denton. “Cell Phones and Nonsampling Error in the American Time Use Survey,” published in the proceedings of the American Statistical Association meetings, 2012.

13 Meekins, Brian. “Possibilities to Improve Calling Efficiency in the American Time Use Survey.” Bureau of Labor Statistics Working Paper, March, 2013.


14 Belli, R., Lee, E.H., Stafford, F.P., and Chou, C.H. (2004). “Calendar and Question-List Survey Methods: Association Between Interviewer Behaviors and Data Quality.” Journal of Official Statistics 20: 185-218.


15 Stewart, Jay (2002). “Assessing the Bias Associated with Alternative Contact Strategies in Telephone Time-Use Surveys.” Survey Methodology. Volume 28, No. 2/December 2002, pp. 4-15.

