American Time Use Survey OMB Clearance Package, 2009

Supporting Statement

American Time Use Survey

B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS


1. Respondent Universe and Respondent Selection Method


The ATUS sample is drawn from the Current Population Survey (CPS), so the ATUS universe is the same as the CPS universe. The universe for the CPS is composed of the approximately 105 million households in the U.S. and the civilian, noninstitutional population residing in those households. From this universe, the CPS selects approximately 60,000 households every month. About one-eighth (or about 7,500) of these retire permanently from the CPS sample each month after their eighth CPS interview attempt. Households that complete their eighth-month interview are eligible for selection for the ATUS. About 2,200 of these households will be selected for the ATUS sample each month.1 On average, about 200 of the selected households will be identified as ineligible because the designated respondent has moved or died or the household is ineligible for another reason. Based on the average response rate over 2003-07, a response rate of about 55.0 percent is expected over an 8-week fielding period. Thus, about 1,100 interviews will be completed each month (2,000 eligible respondents x .550). About 34 interviews are then excluded from the estimation process each month because they 1) contain fewer than 5 activities, 2) contain more than 180 minutes of “don’t know” or “refused” activities, or 3) both.
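
A small sketch of the monthly yield arithmetic described above (all counts are the approximations cited in the text, not exact survey parameters):

```python
# Approximate monthly ATUS sample yield, using the counts cited above.
cps_households = 60_000            # CPS monthly sample
retiring = cps_households / 8      # ~7,500 complete their eighth CPS interview
selected = 2_200                   # drawn into the ATUS sample each month
ineligible = 200                   # movers, deaths, other ineligibility
response_rate = 0.550              # average response rate, 2003-07
excluded = 34                      # data-quality exclusions (short or vague diaries)

eligible = selected - ineligible           # ~2,000
completed = eligible * response_rate       # ~1,100 interviews
usable = completed - excluded              # interviews entering estimation

print(f"eligible: {eligible}, completed: {completed:.0f}, usable: {usable:.0f}")
```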


The ATUS sample is a stratified, three-stage sample. In the first stage of selection, the CPS oversample in the less populous States is reduced. The CPS is designed to produce reliable estimates at the State and national levels. The ATUS does not have a State reliability requirement. Because of the CPS State reliability requirement, the less populous States are allocated a larger proportion of the national CPS sample than they would receive under a national reliability requirement alone. To improve the efficiency of the national estimates from the ATUS and to reduce the cost of potential field follow-up, the CPS sample is subsampled to obtain the ATUS sample. The sample that remains after subsampling is distributed across the States approximately in proportion to each State's share of the national population.


In the second stage of selection, households are stratified based on the following characteristics: race/ethnicity of householder, presence and age of children, and the number of adults in adult-only households. Sampling rates vary within each stratum. Eligible households with a Hispanic or non-Hispanic black householder are oversampled to improve the reliability of time-use data for these demographic groups. To ensure adequate measures of childcare, households with children are also oversampled. To compensate for this, households without children are undersampled.


In the third stage of selection, an eligible person from each household selected in the second stage is selected as the designated person (respondent) for the ATUS. An eligible person is a civilian household member at least 15 years of age. All eligible persons within a sample household have the same probability of selection.


The sample persons are then randomly assigned a designated reference day (a day of the week for which they will be reporting) and an initial interview week code (the week the case is introduced). In order to ensure accurate measures of time spent on weekdays and weekend days, the sample is split evenly between weekdays and weekend days. Ten percent of the sample is allocated to each weekday and 25 percent of the sample is allocated to each weekend day.
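
A minimal sketch of this allocation as a weighted random assignment (the 10/25 percent split is from the text; the assignment mechanics below are an illustrative assumption, not the production algorithm):

```python
import random

# Allocation from the text: 25% to each weekend day and 10% to each weekday,
# so the sample splits evenly between weekdays and weekend days.
DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"]
WEIGHTS = [0.25, 0.10, 0.10, 0.10, 0.10, 0.10, 0.25]

def assign_reference_days(person_ids, seed=None):
    """Randomly assign each designated person a reference day of the week."""
    rng = random.Random(seed)
    return {pid: rng.choices(DAYS, weights=WEIGHTS, k=1)[0] for pid in person_ids}

assignments = assign_reference_days(range(2200), seed=1)
```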


The following tables show the approximate size of the ATUS universe and expected annual sample size for each of the ATUS sampling strata:


Table 4. Estimated Number of Persons in Universe for ATUS Sampling Strata (civilian noninstitutional population, age 15 and older, in thousands). Columns show the race/ethnicity of the household reference person.

Household Type                           | Hispanic | Non-Hispanic, Black | Non-Hispanic, Non-Black |   Total
With at least one child under 6          |    4,614 |               3,507 |                  16,138 |  24,259
With at least one child between 6 and 17 |    4,533 |               4,772 |                  26,884 |  36,189
Single adult, no children under 18       |      846 |               2,493 |                  14,329 |  17,668
Two or more adults, no children under 18 |    4,369 |               5,369 |                  50,231 |  59,969
Total                                    |   14,362 |              16,141 |                 107,582 | 138,085


Table 5. Estimated Number of Households in Universe for ATUS Sampling Strata (in thousands). Columns show the race/ethnicity of the household reference person.

Household Type                           | Hispanic | Non-Hispanic, Black | Non-Hispanic, Non-Black |   Total
With at least one child under 6          |    1,860 |               1,674 |                   7,553 |  11,087
With at least one child between 6 and 17 |    1,607 |               2,004 |                  10,193 |  13,804
Single adult, no children under 18       |      846 |               2,493 |                  14,329 |  17,668
Two or more adults, no children under 18 |    1,737 |               2,192 |                  21,930 |  25,859
Total                                    |    6,050 |               8,363 |                  54,005 |  68,418


Table 6. Estimated Annual Sample Size by ATUS Sampling Strata (designated persons). Columns show the race/ethnicity of the household reference person.

Household Type                           | Hispanic | Non-Hispanic, Black | Non-Hispanic, Non-Black |   Total
With at least one child under 6          |      900 |                 744 |                   3,420 |   5,064
With at least one child between 6 and 17 |      924 |                 996 |                   4,512 |   6,432
Single adult, no children under 18       |      468 |               1,320 |                   3,600 |   5,388
Two or more adults, no children under 18 |    1,032 |               1,188 |                   7,224 |   9,444
Total                                    |    3,324 |               4,248 |                  18,756 |  26,328


Estimation includes a series of adjustments to account for the stages of sample selection, a non-response adjustment, and a benchmarking procedure that ensures that certain quarterly population counts from the ATUS sample agree with corresponding counts from the CPS.


The initial weight for each ATUS sample case is the CPS weight after the first-stage adjustment. This weight accounts primarily for the probability of selecting the household for the CPS and for CPS non-response. This weight is then adjusted by three factors to account for: the reduction of the CPS oversample in less populous States, the probability of selecting the household within the ATUS sampling strata, and the probability of selecting the individual person from each sample household. The non-response adjustment then increases the weights of the responding sample cases, within cells defined by reference day and incentive status, to account for cases that did not respond. Additional details on the weighting procedures are provided in the ATUS Weighting Plan (see Attachment J).
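
A schematic sketch of this weight construction (function and factor names are hypothetical shorthand; the exact adjustments are specified in the ATUS Weighting Plan, Attachment J):

```python
from collections import defaultdict

def atus_base_weight(cps_weight, state_factor, stratum_factor, person_factor):
    """Multiply the CPS first-stage weight by a factor for each additional
    stage of ATUS selection: state subsampling, household selection within
    the ATUS sampling strata, and designated-person selection."""
    return cps_weight * state_factor * stratum_factor * person_factor

def nonresponse_adjust(weights, responded, cells):
    """Within each cell (e.g., reference day x incentive status), inflate
    respondent weights so that the cell's total weight is preserved."""
    total, resp_total = defaultdict(float), defaultdict(float)
    for w, r, c in zip(weights, responded, cells):
        total[c] += w
        if r:
            resp_total[c] += w
    return [w * total[c] / resp_total[c] if r else 0.0
            for w, r, c in zip(weights, responded, cells)]
```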


The benchmarking procedure is an iterative raking procedure containing three steps. The first step adjusts the weights of the sample cases so that weighted estimates of persons in various gender-race/ethnicity categories from the ATUS agree with similar population counts from the CPS. The second step of the benchmarking procedure adjusts the weights of the sample cases so that estimates from the ATUS match composite estimates from the CPS for household composition and educational attainment by gender. The third step adjusts the weights so that weighted estimates by age category and gender agree with CPS population counts. In all three steps, weights are adjusted separately for weekdays and weekend days so that population estimates agree with CPS for both day-of-week categories.
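
A minimal sketch of the raking computation itself, assuming simple person-level inputs (an illustration only, not the production algorithm; as noted above, production adjusts weekday and weekend-day cases separately in each step):

```python
def rake(weights, dims, controls, max_iter=100, tol=1e-8):
    """Iterative raking (iterative proportional fitting).

    weights  : initial weights, one per respondent
    dims     : list of category lists; dims[d][i] is respondent i's
               category on dimension d (e.g., a gender-race/ethnicity cell)
    controls : list of dicts; controls[d][cat] is the CPS control total
    """
    w = list(weights)
    for _ in range(max_iter):
        max_change = 0.0
        for cats, targets in zip(dims, controls):
            # current weighted total per category in this dimension
            totals = {}
            for wi, c in zip(w, cats):
                totals[c] = totals.get(c, 0.0) + wi
            factors = {c: targets[c] / totals[c] for c in totals}
            max_change = max(max_change,
                             max(abs(f - 1.0) for f in factors.values()))
            w = [wi * factors[c] for wi, c in zip(w, cats)]
        if max_change < tol:   # all controls matched to tolerance
            break
    return w
```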


The probability that an individual participates in an activity on a given day varies across activities. For example, nearly everyone reports sleeping on the diary day, while few people report educational activities. A balanced repeated replication (BRR) variance estimator is used to calculate standard errors and coefficients of variation for selected estimates (a sketch of this computation follows the table). Table 7 shows the coefficients of variation (CV) of ATUS quarterly and annual average (2007) hours measures for activity categories that were published in the release of ATUS estimates.


Table 7. Estimated average CVs on average hours estimates, quarterly and annual, 2007

Activity                                        | Average CV, quarterly estimates, 2007 | CV, annual estimates, 2007
Personal care, including sleeping               |                                0.0059 |                     0.0028
Eating and drinking                             |                                0.0181 |                     0.0085
Household activities                            |                                0.0272 |                     0.0126
Purchasing goods and services                   |                                0.0418 |                     0.0211
Caring for and helping household members        |                                0.0429 |                     0.0213
Caring for and helping non-household members    |                                0.0931 |                     0.0531
Working and work-related activities             |                                0.0249 |                     0.0132
Educational activities                          |                                0.0999 |                     0.0454
Organizational, civic, and religious activities |                                0.0718 |                     0.0362
Leisure and sports                              |                                0.0147 |                     0.0072
Telephone calls, mail and email                 |                                0.0621 |                     0.0321
Other activities, n.e.c.                        |                                0.0992 |                     0.0561
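
As referenced above, a minimal sketch of the balanced repeated replication computation, assuming a set of replicate estimates is already available (the Fay coefficient here is an illustrative assumption, not an ATUS production parameter):

```python
def brr_variance(full_estimate, replicate_estimates, fay=0.5):
    """BRR variance: average squared deviation of the replicate estimates
    (each computed with one set of replicate weights) from the full-sample
    estimate, rescaled when a Fay coefficient is used (fay=0 is classic BRR)."""
    r = len(replicate_estimates)
    return (sum((rep - full_estimate) ** 2 for rep in replicate_estimates)
            / (r * (1.0 - fay) ** 2))

def brr_cv(full_estimate, replicate_estimates, fay=0.5):
    """Coefficient of variation: standard error divided by the estimate."""
    return (brr_variance(full_estimate, replicate_estimates, fay) ** 0.5
            / full_estimate)
```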



2. Description of Procedures


A. Description of Estimation Methodology


Four types of estimates are used to produce published ATUS tables: average hours per day, participation rates, number of participants, and average hours per day of participants.


Average Hours per Day: The average number of hours spent per day engaging in activity j for a given population, $\bar{T}_j$, is given by

$$\bar{T}_j = \frac{\sum_i fwgt_i \, T_{ij}}{\sum_i fwgt_i},$$

where $T_{ij}$ is the amount of time spent in activity j by respondent i, and $fwgt_i$ is the final weight for respondent i.


Participation Rates: The percentage of the population engaging in activity j on an average day, $P_j$, is computed using

$$P_j = \frac{\sum_i fwgt_i \, I_{ij}}{\sum_i fwgt_i},$$

where $P_j$ is the percentage of people who engaged in activity j on a given day, and $I_{ij}$ is an indicator that equals 1 if respondent i engaged in activity j during the reference day and 0 otherwise.

In this type of estimate, $P_j$ does not represent the proportion of people who participate in activity j over periods longer than a day.


Number of Participants: The number of persons engaging in activity j during an average day, $Num_j$, is given by

$$Num_j = \frac{\sum_i fwgt_i \, I_{ij}}{D},$$

where $Num_j$ is the number of persons participating in activity j during an average day, $I_{ij}$ is an indicator that equals 1 if respondent i participated in activity j during the reference day and 0 otherwise, and $D$ is the number of days in the estimation period (for example, 365 for annual averages in non-leap years).


Average Hours per Day of Participants: The average number of hours spent per day engaged in activity j by participants, $\bar{T}_j^{P}$, is given by

$$\bar{T}_j^{P} = \frac{\sum_i fwgt_i \, T_{ij}}{\sum_i fwgt_i \, I_{ij}},$$

where $T_{ij}$ is the amount of time spent in activity j by respondent i, $fwgt_i$ is the final weight for respondent i, and $I_{ij}$ is an indicator that equals 1 if respondent i participated in activity j during the reference day and 0 otherwise.
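
The four estimators can be sketched directly from their definitions above (a minimal illustration using parallel respondent-level lists; all names are illustrative):

```python
def average_hours(fwgt, T_j):
    """Average hours per day: sum_i fwgt_i * T_ij / sum_i fwgt_i."""
    return sum(w * t for w, t in zip(fwgt, T_j)) / sum(fwgt)

def participation_rate(fwgt, I_j):
    """Share of the population engaging in activity j on an average day."""
    return sum(w * i for w, i in zip(fwgt, I_j)) / sum(fwgt)

def number_of_participants(fwgt, I_j, D):
    """Persons engaging in activity j on an average day of a D-day period."""
    return sum(w * i for w, i in zip(fwgt, I_j)) / D

def average_hours_of_participants(fwgt, T_j, I_j):
    """Average hours among those who engaged in activity j that day."""
    return (sum(w * t for w, t in zip(fwgt, T_j))
            / sum(w * i for w, i in zip(fwgt, I_j)))
```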

B. Procedures for Collection of Information


The ATUS interview is a combination of structured questions and conversational interviewing. For the household roster update, employment status questions, the CPS updates, and the well-being module questions, Census Bureau interviewers read the question on the screen and enter the appropriate response. For the time-use “diary” and subsequent summary questions on childcare, paid work, volunteering, and missed days, the interviewer more flexibly interviews the respondent, filling in the diary grid as questions are answered. The data collection instrument includes an edit check that ensures that all cells are filled before the interviewer exits the diary. Extensive interviewer training has been provided in how to do conversational interviewing—including when to selectively probe for adequate information to code activities. Refresher training is conducted at least annually. Interviews are periodically monitored by supervisors, coaches, and BLS sponsors to evaluate conversational interviewing performance. The coding task helps to ensure that interviewers understand the level of detail needed in activity reports for accurate coding; all interviewers are also coders, though interviewers do not code their own work. A coding verification and adjudication process is in place. Verification continues to be done at 100 percent to ensure high and consistent data quality.



3. Maximizing response rates


A number of procedures were implemented in ATUS to maximize survey response rates. The 2001 field test examined the effectiveness of incentives, sending advance materials by priority mail, doubling the number of eligible interviewing days by using a day-of-week substitution methodology, calling in advance to set interview appointments, “recycling” cases for field visits, and extending the field period from 4 to up to 8 weeks. (See Attachment D.)


As discussed in Part A, section 9, testing showed that incentives significantly increased response rates. “Recycling” cases to the field—that is, turning nonresponse cases over to interviewers to conduct face-to-face interviews in the respondent’s home—was also effective in maximizing response rates, particularly for no-telephone-number households. However, offering incentives to all respondents and recycling were both cost prohibitive. Incentives are currently offered only to the approximately five percent of the sample for which the Census Bureau does not have a telephone number. In December 2007, BLS was granted permission to expand the definition of no-telephone-number households to include households with non-viable telephone numbers (e.g., “number could not be completed as dialed”). These households have characteristics similar to those of other no-telephone-number households. Since partial implementation in August 2008, 33 such cases have been identified, and 13 have resulted in completed interviews (about a 39 percent response rate).


Calling in advance to set an appointment (“proactive appointment setting”) did not improve response, and completed interviews using that strategy required 70 percent more contact attempts than other completed interviews. As a result, advance appointment setting was rejected. Day-of-week substitution increased response rates by about 4 percentage points over 8 weeks; however, it led to a disproportionately high number of completed interviews on Wednesdays and a disproportionately low number on Fridays. To maintain the integrity of the day-of-week distribution of the sample, substitution was also rejected.


Consistent with the survey methodology literature, priority mail appears to have increased response rates in the ATUS field test—by over 10 percentage points. It is relatively inexpensive to implement (about $3.50 per mailing) and is currently used for sending advance materials. The optimal field period length varies depending on incentive use. Without an incentive, the field test showed that an 8-week fielding period was required to approach a 70 percent response rate (69 percent was achieved in the field test). As a result, this 8-week fielding period was adopted for full production. To even out workload and measure time use across days of the month, one quarter of the sample is introduced each week for 4 weeks. Active cases are called up to 4 times per day on one eligible day each week for 8 weeks.

To maximize response, a toll-free number is provided to all eligible respondents in the advance materials. They can use the number to call in and set an appointment or to complete the interview (if they call on an eligible interviewing day). In addition, interviewers have job aids—answers to frequently asked questions (FAQs)—designed to help answer questions about the survey and to assist them in gaining respondents’ cooperation to participate.


In 2007, the unweighted response rate for telephone households averaged 53.8 percent and the weighted response rate was 54.9 percent. Including the no-telephone-number households, the overall response rate was 52.6 percent. Because overall response rates were lower in 2007 than the 69 percent rate achieved (using no incentives) during the 2001 field test, the BLS and the Census Bureau are cooperating to conduct a number of analyses to understand and address non-response. Completed studies include:


  • An analysis by the Census Bureau focusing on why response rates dropped between prefielding in 2002 and full production (see Attachment K);

  • A qualitative Response Analysis Survey designed by BLS and conducted by Census in 2004 (available on the Internet at http://www.bls.gov/ore/pdf/st040140.pdf);

  • A study examining non-response by demographic group and analyzing determinants of non-response (available on the Internet at http://www.amstat.org/Sections/Srms/Proceedings/y2005/Files/JSM2005-000193.pdf);

  • A non-response bias study on the 2006 ATUS data was completed by researchers at the Census Bureau (see Attachment L); and,

  • An analysis of returned mail was completed by the Census Bureau to assess their address review process, assign more accurate case outcome codes, and improve incentive case response rates (see Attachment O).


An interviewer incentive study was considered but subsequently rejected after implementing interviewer incentives was determined to be cost prohibitive.


An examination of the ATUS advance materials was undertaken and the advance materials were subsequently revised. (See Attachment M.) Advance and refusal conversion gatekeeper letters have been developed in response to interviewer focus group concerns that parents or guardians of minor designated persons were often refusing the interview for the minor. These letters were revised to improve readability and translated into Spanish. (See Attachment N.)


BLS also developed a Web site to answer respondent questions, and this web address is included in the advance letters (http://www.bls.gov/respondents/tus/home.htm). In cooperation with Census, BLS produces a quarterly interviewer newsletter to motivate and inform interviewers. BLS also conducted a workshop for interviewers on techniques to gain cooperation from respondents, and much of the material developed for this training was incorporated into other interviewer training courses. Finally, interviewer operations have been scrutinized and revised in several ways in order to increase the probability of completed interviews, such as redesigning the call blocks to add more call attempts during evening hours.


4. Tests and Research


Before the ATUS went into full production, extensive testing was done on the operations methodologies, question wording and interpretation, and activity coding. The questions proposed for addition to the ATUS at the request of the National Institute on Aging (NIA) have also been extensively tested.


A. Completed research


1. Operations Field Test. The 2001 operations field test is mentioned throughout this clearance package and is described in more detail in Attachment D. The ATUS presents special operational challenges because a designated person—rather than any household member—must be contacted, and that person must be contacted on one of a set of eligible days over an 8-week period. The test was designed to examine methods for 1) maximizing respondent contact and 2) maximizing response. To do this, the field test examined the effectiveness of recycling cases for field visits, providing incentives of varying amounts, setting appointments in advance, doubling the number of eligible call days for some respondents (“day-of-week substitution”), and sending advance materials by priority mail. It also examined how each of these operations affected the optimal field period.


2. Cognitive testing

    a. Diary

None of the completed cognitive tests focused specifically on the time diary, although the ATUS introduction, instructions, roster update, time diary, and associated contextual information were administered as part of all tests. Respondents’ reactions to each of these survey elements were used to modify and improve the survey. Modifications based on respondent reactions include:


Time diary instructions

Time diary instructions were shortened so that they take approximately one minute to administer. Two major modifications were made to the original instructions. First, the original instructions did not specify that respondents needed to estimate the duration of each activity; as a result, respondents often “laundry-listed” activities without attributing times to each one. Language instructing respondents to estimate activity duration was added to the time diary instructions, and fewer respondents have since required prompting to provide time estimates. Second, the original instructions included examples of how to report activities. Research showed that these examples were not helpful to respondents because they failed to match respondents’ daily circumstances (Stinson, 2000). Dropping the examples shortened the time it took to administer the instructions, and of the nearly 100 people who have participated in time-use research, fewer than 10 requested examples that would specify the level of detail needed in the time diary.


“Who was with you?”

Stinson (2000) and Schwartz & Fricker (2000) found that the question, “Who was with you?” was open to multiple interpretations. Some respondents interpreted the question as meaning, “Who was near you?” whereas others understood it to mean, “Who participated in the activity with you?” In order to make the probe clearer, ATUS interviewers ask, “Who was in the room with you?” when respondents are at their own or someone else’s home. They ask, “Who accompanied you?” for activities that occur in other locations. Respondents are not asked “Who was in the room with you?” when they report sleeping, grooming, personal activities, or being at work. In 2008, the questions “Who was with you?” and “Who accompanied you?” were cognitively tested for times when respondents reported working or doing work-related activities. None of the respondents had difficulty remembering who was with them while they were working, although some respondents did not provide the level of detail that was desired. To ensure that an appropriate level of detail is collected, respondents who say they were with “co-workers” will be asked the follow-up question, “By co-workers, did you mean you were with your manager/supervisor, people whom you supervise, or other co-workers?” (See Attachment I.)


    b. Childcare

Focus groups and two rounds of cognitive testing were conducted to refine the wording of the childcare summary question (Schwartz & Fricker, 2000; Schwartz, 2001). Based on the findings from those studies, reports of care for household children are restricted to times during which at least one child under the age of 13 was awake. The phrase “in your care” was selected to convey that the parent or care provider was responsible for and mindful of at least one child 12 years old or younger. For more details, please see the summary of cognitive lab #2106 that was provided to OMB in July 2001.


    c. Paid work

Stinson (2000) and Schwartz, Gortman, and Lynn (2001) conducted three rounds of cognitive testing of the paid work summary questions. The major finding was that respondents interpreted both concepts (activities done for one’s job or business, and activities done for pay) more broadly than researchers had intended. Based on respondents’ reports, activities done for one’s job or business can include networking or relationship-building activities, and activities done for pay can include any income-generating activity that is not one’s main or second job. Cognitive lab summary #2112 provides more detail about these studies.


    d. Absences From Home

Because of the 24-hour recall design, the ATUS will systematically miss activities done when respondents are away from home for 2 days or more. A study on such absences from home, or “missed days,” was developed to test questions designed to measure the purpose of such trips. Fricker & Schwartz (2001) found that respondents who travel fewer than 12 times per year can provide accurate information about trips that occurred up to 2 months prior. Respondents accurately reported the month in which trips occurred and were generally accurate in their estimates of trip duration; when duration estimates were off, they were off by an average of one day. Respondents were also able to sort trips by purpose categories. Based on the results of this test, response options for trip purposes were revised and a multi-purpose category was added.


    e. NIA Well-being Module Questions

Two rounds of cognitive testing were undertaken to assess comprehension and response quality of the NIA-sponsored questions. Results showed that all questions met the goals of the survey sponsor. During round 1 of the testing, some respondents were confused about the direction of the 0 to 6 scale. During round 2 of the testing, additional explanations were given that cleared up most of the confusion. No participants in the cognitive testing indicated that being asked how they felt during certain activities was too personal. These affect questions will not be asked for personal activities. (See Attachment I.)


3. Coding Tests. The ATUS coding lexicon was developed for and is unique to the ATUS. While originally based on the system used in the Australian Time-Use Survey, the system was modified a great deal to enable more detailed and flexible analysis of time-use data. Modifications were driven by the results of four coding tests and by issues brought up in production. The first three tests were conducted with Census Bureau coders and the fourth with Westat coders. The tests examined the intuitiveness of the coding system, accuracy rates by activity tier, inter-coder variability, and coding software usability. A systems test of the coding verification and adjudication process was also completed in October 2001. The coding system continues to be modified slightly in response to issues that arise during production.


4. Software tests. Both the ATUS data collection instrument and the coding instrument are programmed in Blaise. The data collection instrument is programmed in modules or blocks. Each block was extensively tested at Census and BLS prior to full production. Testing scenarios were repeated with each version of the instrument prior to production, and additional testing scenarios are run any time a change is made to the instrument to ensure that all modifications are correct and that there are no unintended consequences. “Audit trails” capturing every keystroke are used to fix problems. Instruments are also tested by Census Bureau interviewers prior to being used.


The Blaise coding software was tested in two of the coding tests mentioned above. It was tested by Census Bureau interviewer/coders throughout preparation for the ATUS and is tested again any time a modification is introduced. Coders regularly provide feedback on its content, structure, and usability through periodic debriefings.


5. Advance diary test. Early in ATUS development, survey methodologists recommended sending diaries with the ATUS advance materials to facilitate recall and improve data quality. There was some concern among the survey sponsors about sending diaries in advance without testing effects on response.


BLS awarded a contract to the National Opinion Research Center (NORC) to conduct a split-panel test of advance diaries in April 2002. Half of the respondents in this test (n = 225) received an advance diary and then completed a telephone interview that used conversational interviewing to elicit the details needed for coding. The other respondents (n = 225) received the same advance materials except the diary and completed the standard time-use interview. NORC found that sending an advance diary increased burden and did not improve data quality or response. The NORC final report was sent to OMB in December 2003.


After receiving the NORC test results, the BLS Office of Survey Methods Research further analyzed the data using multivariate analyses. This analysis confirmed NORC’s results. As a result, no diary was added to the ATUS advance materials.


6. Simultaneous activities. Secondary or simultaneous activities are considered one of the significant dimensions of an activity that should be captured in a time diary (Harvey, 2001), and research participants, members of advisory councils, and survey methodologists have all recommended collecting them. However, early research at BLS, as well as the experience of Statistics Canada, indicated that the systematic collection of secondary activities could be problematic in a telephone survey. While a paper diary form simply needs to include a column for secondary activities for respondents to know that they should record them, in a telephone survey interviewers must probe, “Were you doing anything else?” for each activity in order to collect the information in a systematic and unbiased way. Probing for secondary activities can quickly become burdensome and risks fatiguing the respondent early in the interview. Additionally, Stinson (2000) found that respondents could not attribute times to secondary activities, which would weaken their analytical relevance.


In 2003, BLS solicited proposals from NORC to look at the systematic collection of simultaneous activities. The study was necessarily complex and costly. BLS decided to delay cognitive work on this subject until some empirical data on simultaneous activities were available from full production. Currently, respondent reports of simultaneous activities are recorded in ATUS but are not coded as a part of regular production. (See Part B, section 4B.)


7. Census Bureau Response Rate Investigation. In the spring of 2003, a team at the Census Bureau compared response rates achieved in the beginning of 2003 with the higher rates achieved in 2002, just before full production began. The team tested several hypotheses in an attempt to determine why response declined at that time. The team examined whether there were changes in the number or timing of call attempts, and whether the hiring of new interviewers just before full production or problems with the call scheduling software might have affected response. While they found some spikes in the times of day at which people refused, they did not find a strong pattern of day-of-week or time-of-day effects in refusal rates. They also found no relationship between interviewers' ATUS refusal rates and their years of interviewing experience. In a multivariate analysis, the team found a correlation between a refusal to provide income data in the CPS and a refusal to participate in the ATUS. This information could be valuable for predicting nonresponse and/or targeting refusal conversion efforts. (See Attachment K.)

8. Response Analysis Survey. In 2004, qualitative research was completed to examine reasons for nonresponse in the ATUS. In January 2004, the BLS developed and the Census Bureau conducted a Response Analysis Survey (RAS). Census Bureau interviewers attempted to contact a sample of both respondents and non-respondents to the ATUS to learn more about persons' propensities to respond to the ATUS and to better understand which features of the survey might be correlated with response propensity. The study focused on refusals rather than noncontacts, as the former are the main contributor to ATUS non-response, and it was restricted to English-speaking adults selected for the ATUS. The primary reason RAS respondents gave for not participating in the ATUS was that they were tired of responding to the CPS. The RAS also included questions about whether respondents read the advance materials, visited the web site, or sent e-mails asking for information, as well as their impressions of Census Bureau interviewers. Based on the responses to the RAS, the BLS examined how best to alter survey operations to increase designated persons' propensities to respond. Advance materials have been revised to explain more clearly why some CPS respondents were “re-selected” for the ATUS, and the ATUS brochure was redesigned to increase the proportion of designated persons who read it and to feature the web site address more prominently. The RAS report is included with this supporting statement (available on the Internet at http://www.bls.gov/ore/pdf/st040140.pdf).


9. Non-response bias analyses. In 2004, an in-depth analysis was conducted to examine patterns of non-response. This analysis included breaking out non-response by a variety of demographic characteristics, using logistic analysis to determine variables related to non-response, and building a propensity score model to examine differences in time-use patterns and to assess the extent of non-response bias. Findings indicate that older persons and those with higher education are more likely to respond to ATUS than are younger and less-educated persons. Also, Hispanics have lower response rates than non-Hispanics (available on the Internet at http://www.amstat.org/Sections/Srms/Proceedings/y2005/Files/JSM2005-000193.pdf).


The ATUS survey methodology files are available to the public, enabling outside researchers to examine survey methods issues. Abraham et al. (2006) found that people weakly integrated into their communities were less likely to respond to the ATUS, mostly because they were less likely to be contacted. The authors compared aggregate time-use estimates computed three ways: using the ATUS base weights without adjustment for nonresponse, using the ATUS final weights with a nonresponse adjustment, and using weights incorporating the authors' own nonresponse adjustment based on a propensity model. They found the three sets of estimates to be similar.


An analysis of non-response in the 2006 ATUS was recently completed by the Demographic Statistical Methods Division at the Census Bureau. Some results from this study were consistent with previous non-response bias studies, such as lower response rates for those living in urban areas and higher refusal rates for those missing the CPS income variable. However, the study contradicted previous studies in several areas. Contrary to previous studies, it did not find lower response rates for the unemployed or those not in the labor force. It did find lower contact rates for people who work longer hours and for blacks and Hispanics. (See Attachment L.)


10. Advance Materials Analysis. In 2004, two studies were undertaken to re-examine the ATUS advance materials. An expert review of the materials and focus groups with ATUS interviewers were conducted in order to determine how the advance materials might be re-designed to better influence designated persons to participate. Findings from both studies indicated the letter should be shorter and the brochure should have a more appealing design, including switching from a dichromatic to a full color scheme. In addition, the focus groups and expert reviewers recommended revising the brochure to address more of the questions that participants often have about the survey. In 2005, extensive revisions were made to the advance materials based on these studies (see Attachment M for the full report; see Attachments G & H for revised advance letter and brochure).


11. Interview Operations Analysis. In 2004, telephone call center operations were examined to determine whether measures could be taken to increase response rates. Three basic operations were changed. First, the ATUS staff learned that, while many surveys set calling goals for interviewers, the call center management was not providing ATUS interviewers with daily or weekly goals. Beginning in the summer of 2004, telephone center management set daily goals for ATUS interviewers (based on a 60 percent response rate), providing concrete guidelines for how many completed calls are desired. Although the interviewers do not always meet these goals, the goals help telephone center management measure daily progress and motivate the interviewers. Second, it was discovered that, because of the way call blocks (times) were scheduled, many calls were being made between about 4:30 pm and 5:00 pm, before many people were home from work. Calling methods were changed so that more calls would be made after 5:30 pm, when people who work regular 9-5 hours would be more likely to be home. Finally, the Census Bureau conducted more research into invalid phone numbers in an attempt to find valid phone numbers for the contact person.


12. Incentive experiment. In line with the terms of clearance from the 2003 OMB package, the feasibility of an incentive experiment conducted in a production environment was considered. A BLS and Census Bureau interagency team extensively considered the development of such an experiment, with the intention of conducting it in fiscal year 2005. Planning and assessment meetings determined that the incentive experiment was not a viable option for increasing response rates. The ATUS budget is not large enough to provide incentives to every participant were an incentive used in production. Therefore, even if the incentive experiment showed that incentives increased response rates, the ATUS would not be able to act on this information and provide incentives after the experimental period.


13. Item nonresponse. To assess the quality of individual variables collected through the ATUS, BLS investigated the incidence of missing and imputed ATUS data. Item nonresponse was found to be quite low in the ATUS, with most variables having item nonresponse rates well under 2 percent. The two variables describing weekly and hourly earnings had a higher incidence of nonresponse, but the imputation rates for these variables were lower in the ATUS than in the CPS (see chapter 6 of the ATUS User's Guide at http://www.bls.gov/tus/atususersguide.pdf).


14. Alternative contact strategies. Using simulated data, Stewart (2002) examined the effect of using different contact strategies in a telephone survey. He found that allowing for day-of-week substitution resulted in a systematic bias, and that data collected would overstate the amount of time spent away from home. By contrast, a designated-day approach resulted in little bias.


15. Analysis of returned mail. Census Bureau staff conducted an analysis of returned advance mailings and postcards to assess how effective their address review and correction process was, what the impact on response rates would be if addresses identified as movers were reassigned as “not eligibles,” and how mail return rates differed between incentive and non-incentive cases. The research concluded that reassigning all returned mail with a “mover-left no address” code as “not eligibles” would increase response rates by only 0.1 percent. However, it would require significant staff time to properly research these addresses to ensure that the designated persons were in fact movers.


The analysis also found that twice as many incentive cases as non-incentive cases had advance mailings returned, and that cases with returned advance mailings were only one-third as likely to complete the ATUS interview. Incentive cases are a special concern because respondents must contact the call center to complete the interview, and this contact information is provided in the advance letter. To increase incentive case response rates, Census Bureau staff now research addresses for all incentive cases with returned mail. (See Attachment O.)


B. In progress and planned research


1. Eldercare. BLS has begun working on a conceptual definition of eldercare in an effort to determine how best to develop appropriate survey questions or collection methodologies for obtaining good measures of the amount of time spent caring for the elderly. In 2004, a researcher in the BLS Office of Survey Methods Research conducted a literature review on eldercare measurement; BLS also solicited thoughts on this subject from students in a University of Maryland Joint Program in Survey Methodology course on survey design. BLS developed a series of vignettes to be presented to a panel of experts in the field of eldercare to assist in developing an eldercare definition. Work in this area has halted due to budget constraints, though BLS remains interested in collecting this information through additional questions in the ATUS. If this project is revisited, an OMB research clearance package will be developed for this work.


2. Simultaneous activities. In the first year of production, only simultaneous childcare was collected and measured. In order to determine the types and quality of simultaneous activity data collected through the ATUS, Census Bureau interviewers coded all simultaneous activities collected in 2006. Current research compares the simultaneous activities collected in the 2006 ATUS to those collected in the 2000 Family Interaction, Social Capital, and Trends in Time Use Survey. The project will assess the quality of the simultaneous activities and evaluate the usefulness of the data.


3. Interviewer/coder debriefings. Many interviewer debriefings have been conducted since full production began in 2003, and they are regularly conducted as part of training evaluation. These debriefings have illuminated procedural difficulties and identified questions that interviewers feel pose problems for respondents. They also assist in clarifying interviewer questions and improving future training. Periodic debriefings will continue to be held throughout survey production.


4. Call block research. Using the times that respondents were contacted and the outcome of each call attempt, BLS will determine optimal call block times. This research will seek to determine whether different subpopulations – for example, elderly persons and teens – would be better served by a different call block strategy.


5. Behavior coding. Behavior coding is a technique that has been successfully utilized with event history calendar data collection (Belli, 2004) to understand how interviewers ask questions and provide clarification and feedback to respondents, how respondents interpret questions and recall answers, and how interviewers and respondents interact during the survey task. ATUS interviewers are trained in conversational interviewing techniques, which allow for interventions with a respondent to help him or her stay on track when remembering the day’s activities, and activity sequences and timing. BLS is conducting additional research on respondents’ cognitive processes to aid in instrument development and interviewer interventions. Research to evaluate how conversational interviewing and specific recall techniques are used by interviewers and whether the techniques are successful in helping respondents reconstruct their day could help refine ATUS procedures, reduce measurement error, and improve data quality.


6. Undelivered mail reports. All of those in the ATUS sample are sent an advance letter prior to the first interview attempt, but some of the letters are not delivered. The Demographic Surveys Division at Census is conducting a review of the undelivered mail reports from April 2005 through October 2007. The goal of this analysis is to improve the ATUS address review process and increase the likelihood that advance materials are received. This analysis is also focusing on the difference between the rates of returned mail for incentive and non-incentive cases.


7. Secondary childcare question investigation. BLS is evaluating the secondary childcare data to determine whether what is being collected is what was intended by the question.


8. Employment differences between ATUS and CPS. BLS is investigating differences between the employment numbers in the ATUS and the CPS to try to identify the causes of the differences. Identifying the reasons for the differences could lead to improvements in either or both surveys' methodologies.



5. Contacts


The following individuals may be consulted concerning the statistical data collection and analysis operation:


Statistical Design

Samson Adeshiyan

Demographic Statistical Methods Division

Bureau of the Census

301-763-5874


Data Collection

Howard McGowan

Demographic Surveys Division

Bureau of the Census

301-763-5342


Statistical Analysis

Dorinda Allard

American Time Use Survey

Bureau of Labor Statistics

202-691-6470


Works Cited


Abraham, Katharine G., Aaron Maitland and Suzanne M. Bianchi. (2006). “Nonresponse in the American Time Use Survey: Who Is Missing from the Data and How Much Does It Matter?” Public Opinion Quarterly, 70 (5): 676-703.


American Association for Public Opinion Research. (2008). Standard Definitions:

Final Dispositions of Case Codes and Outcome Rates for Surveys. 5th edition. Lenexa,

Kansas: AAPOR.


Belli, R., Lee, E.H., Stafford, F.P., and Chou, C.H. (2004). “Calendar and Question-List Survey Methods: Association Between Interviewer Behaviors and Data Quality.” Journal of Official Statistics 20: 185-218.


Fricker, Scott and Lisa K. Schwartz. (August, 2001). “Reporting absences from home: Results of cognitive testing of the American Time Use Survey’s missed days summary question.”


Harvey, Andrew S. “Guidelines for Time Use Data Collection.” Social Indicators Research 30 (1993): 197-228.


Herz, Diane E., Lisa K. Schwartz, and Kristina J. Shelley (2001). “Coding Activities in the American Time Use Survey.”


Landefeld, J. Steven, Barbara M. Fraumeni, and Cindy M. Vojtech. “Accounting for Nonmarket Production: A Prototype Satellite Account Using the American Time Use Survey.” Bureau of Economic Analysis Working Paper, December 2005.


Landefeld, J. Steven and Stephanie H. McCulla. “Accounting for Nonmarket Household Production Within a National Accounts Framework.” Review of Income and Wealth, Series 46, Number 3 (September 2000): 289-307.


McGuckin, Nancy. U.S. Department of Transportation, National Personal Transportation Survey. Personal Communication. 3 Aug. 1996.


National Academy of Sciences (NAS) and Committee on National Statistics, National Research Council. Time-Use Measurement and Research. Washington, D.C.: National Academy Press, 2000.


Nordhaus, William. “Time-Use Surveys: Where Should the BLS Go From Here?” Summary of the Conference on Time Use, Non-Market Work, and Family Well-Being, Hatch, Lynn (Ed.), Washington, DC: BLS and the MacArthur Network on the Family and the Economy, 1997.


Robinson, John P. and Geoffrey Godbey. Time for Life: The Surprising Ways Americans Use Their Time. University Park, PA: The Pennsylvania State University Press, 1997.


Robison, E. (1999) “Sampling and Reporting in Time-Use Surveys,” published in the proceedings of the American Statistical Association meetings.


Schwartz, Lisa K. (February, 2001). “Minding the Children: Understanding how recall and conceptual interpretations influence responses to a time-use summary question.”


Schwartz, Lisa K., Jayme Gortman, and Siri Lynn. (July, 2001). “What’s work? Respondents’ interpretations of work-related summary questions.”


Shelley, Kristina (2005). “Developing the American Time Use Survey activity classification system,” Monthly Labor Review, June 2005, pp. 3-15.

Stewart, Jay (2002). “Assessing the Bias Associated with Alternative Contact Strategies in Telephone Time-Use Surveys.” Survey Methodology. Volume 28, No. 2/December 2002, pp. 4-15.


Stinson, Linda. (2000). Final report of cognitive testing for the American Time Use Survey. Bureau of Labor Statistics internal report, August 2000.


Westat. “Research Services for Usability Testing and Lexicon Evaluation: The American Time Use Survey.” October, 2001.



Attachments:

  A. ATUS Questionnaire Specifications

  B. Proposed Well-being Module Questions

  C. Consultations Outside BLS

  D. Report on 2001 Field Test Analysis

  E. ATUS Debit Card Mailer

  F. Legal Authority Backing

     Title 13, United States Code, Sections 8 & 9

     Title 29, United States Code, Sections 1 & 2

  G. ATUS Advance Letters

  H. ATUS Advance Brochure

  I. Cognitive Testing (Well-being Module) Report

  J. ATUS Weighting Plan

  K. ATUS Response Rates Analysis Results

  L. ATUS Nonresponse Bias Study

  M. Advance Materials Report

  N. Refusal Conversion Letter

  O. Census Returned Mail Analysis

1 In 2003, the first year of full production, the ATUS sample was 35 percent larger than in later years. The original target was to complete 2,000 interviews per month. The monthly sample was reduced beginning in December 2003 in order to bring survey costs in line with the survey budget. The original annual sample was drawn to meet the target goal assuming a 70 percent response rate. The goal was twice the minimum of 12,000 interviews per year (1,000 per month) originally identified by Robison (1999), in “Sampling and Reporting in Time-Use Surveys,” as the number required to contrast time-use estimates for major subpopulations of interest. Robison recommended adding 12,000 more interviews to enable more subpopulation comparisons. His calculations used time-use distributions for various subpopulations from a 1975 University of Michigan time-use survey, along with associated parameters that enabled the calculation of standard errors and confidence intervals under different assumptions. These numbers and parameters were published in Juster and Stafford (1985).


