Fourth National Study of OAA Title III Service Recipients

OMB: 0985-0023


B. Collection of Information Employing Statistical Methods

B.1 Respondent Universe and Sampling Methods

The respondent universe will vary depending on the data collection instrument (see Exhibit B-1). For the telephone contact with the State and Area Agencies on Aging (AAAs), the research team will collect information from a probability sample of agencies selected with probability proportional to size (PPS), where size is measured by each agency's total annual budget. The service recipients within each sampled AAA will be selected at random, so all service recipients will have a known probability of selection for the sample.
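For illustration only, the sketch below shows one common way to implement such a PPS draw (systematic PPS with a random start); the agency names and budget figures are hypothetical, and the sketch is not a description of the actual sampling software.

```python
import random

def pps_systematic_sample(agencies, n):
    """Select n agencies with probability proportional to size,
    where size is the agency's total annual budget (systematic PPS)."""
    total = sum(a["budget"] for a in agencies)
    step = total / n                    # sampling interval on the budget scale
    start = random.uniform(0, step)     # random start within the first interval
    points = [start + i * step for i in range(n)]
    sample = []
    it = iter(agencies)
    agency = next(it)
    cum = agency["budget"]
    for p in points:
        while cum < p:
            agency = next(it)
            cum += agency["budget"]
        sample.append(agency)
    return sample

# Hypothetical frame of AAAs with illustrative budgets (dollars)
frame = [{"name": f"AAA-{i:03d}", "budget": b}
         for i, b in enumerate([5.2e6, 1.1e6, 3.8e6, 9.4e6, 2.5e6] * 60)]
print([a["name"] for a in pps_systematic_sample(frame, 5)])
```

Note that in systematic PPS an agency whose budget exceeds the sampling interval can be selected more than once; that is not an issue here because the interval is far larger than any single budget.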


This is the fourth time this type of study will be conducted. The earlier OMB-approved surveys (0985-0014, 0985-0017, 0985-0020) were conducted in 2002-2003, 2004, and 2005. Before Westat contacted the AAAs, a letter was sent explaining the survey, along with the materials needed to develop lists of participants. (A copy of these materials was also sent to each State Unit on Aging that had AAAs in the study.) Once an agency had a participant list for each service, it contacted Westat for the random selection of the respondents to be interviewed. This selection was completed using a program Westat developed: the number of participants on each service list was entered into the program, and the program returned the line numbers of the respondents it selected. The number of respondents to select per service was already built into the program and was based on the size of the agency. The agencies then provided Westat with the participant names and telephone numbers associated with those line numbers. Westat will use the same procedure to select respondents for the Fourth National Study of Title III Service Recipients.
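The core operation described above, drawing a fixed number of line numbers at random from a numbered list, can be sketched as follows. This is a minimal illustration, not Westat's actual program; the function name and the example list size are assumptions.

```python
import random

def select_line_numbers(list_length, n_to_select, seed=None):
    """Draw distinct line numbers at random from a numbered participant
    list (1 .. list_length) and return them in sorted order."""
    rng = random.Random(seed)
    n = min(n_to_select, list_length)  # cannot select more names than exist
    return sorted(rng.sample(range(1, list_length + 1), n))

# e.g., an agency reports 240 names on its home-delivered meals list and
# its size class calls for 8 interviews from that service:
print(select_line_numbers(240, 8))
```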


The research team knows that most AAAs will not have a complete, unduplicated list of service recipients for their agencies. The research team therefore proposes using the same methodology for selecting clients as in the previous three surveys: asking AAAs to develop lists of clients, by service, who have received that service within the past 12 months, then randomly selecting respondents from those lists. This reduces burden: agencies that previously participated would otherwise need to learn a new way of preparing the lists, and all selected AAAs would otherwise need to develop an unduplicated list across services. Lists will be developed for home-delivered meals, congregate meals, transportation, case management, and homemaker services, as well as for caregivers served by the National Family Caregiver Support Program. Appendix E contains the pre-notification packet the agencies will receive.


For the First National Survey, Westat recruited 132 out of 150 agencies, for a response rate of 88 percent. For the Second National Survey, Westat recruited 138 out of 165 agencies, for a response rate of 83.6 percent. For the Third National Survey, Westat recruited 272 out of 310 agencies, for a response rate of 87 percent. When selecting agencies for the fourth survey, Westat will select a sample large enough to recruit at least 250 Area Agencies on Aging and will ensure that at least 80 percent of the sampled AAAs agree to participate. Westat research assistants will encourage the participation of all selected agencies by contacting those that have not called with participant lists, coaching them on how to set up their lists easily, and assuring them that the time needed to complete the participant selection procedures will be minimal, just as Westat did for the first three surveys. Agencies that refuse to participate will be sent a refusal conversion letter (already developed for the previous three surveys) and called one more time to try to gain their cooperation. Once an agency refuses a second time, Westat will not contact it again. For the Evaluation of Independent Living Programs (an OMB-approved national study for the Rehabilitation Services Administration, Office of Special Education and Rehabilitative Services, U.S. Department of Education), and for the first, second, and third AoA National Surveys, Westat research assistants called the original agencies, sent e-mails and/or faxes, and resent recruitment packages via FedEx. Westat will use the same techniques to gain cooperation for the fourth study.


The research team anticipates an 80 percent response rate for the telephone survey of respondents, based on the success of the first three surveys. To support this response rate, each AAA will send participants who are eligible for the telephone survey a letter, on the AAA's own letterhead, before they are contacted by an interviewer, as was done for the first three surveys. Westat will attempt to contact participants at different times of day and on different days of the week to maximize the possibility of contact. Westat is also experienced in refusal conversion, generally achieving a 25 to 30 percent refusal conversion rate; for all three National Surveys of OAA Participants, the refusal conversion rate was about 40 percent.


Exhibit B-1 presents the respondent universe for each module proposed for the Fourth National Study of OAA Title III Service Recipients.


Exhibit B-1. Respondent universe

Performance measure | Indicator | Participants to be sampled

Service Recipient Survey

Congregate Meals Module | Questions on nutrition intake, nutrition risk, food security, and clients' assessments of the Congregate Meals program. | All service recipients receiving Congregate Meals services

Home-delivered Meals Module | Questions on nutrition intake, nutrition risk, food security, and clients' assessments of the Home-delivered Meals program. | All service recipients receiving Home-delivered Meals

Transportation Module | Questions on clients' experiences with and assessments of transportation services. | All users of Transportation Services

Case Management Module | Questions on clients' experiences with and assessments of case management services. | All service recipients receiving case management services

Homemaker/Housekeeping Module | Questions on clients' experiences with and assessments of Homemaker/Housekeeping services. | All service recipients who receive Homemaker/Housekeeping Services

Additional Services List | Questions asking service recipients whether they receive other OAA services. | All service recipients. Caregivers will be asked about services received by their care recipients.

Physical Functioning Module | Revised Katz Activities of Daily Living (ADL) Scale and quality of life measures from the CDC questionnaire,¹ as well as the full SF-12 v. 2 (see Appendix H).² | All service recipients. Caregivers will be asked these questions about their care recipients.

Caregiver Survey

National Family Caregiver Support Program Questionnaire | Questions on caregiver support and assessment of the program, based on the Caregiver survey developed for the first, second, and third national surveys. | Caregivers who participate in the National Family Caregiver Support Program

Service Recipient and Caregiver Surveys

Demographic Information Module | Demographic information | All service recipients and caregivers

B.2 Procedures for the Collection of Information

B.2.1 Introduction

Several data collection activities will be conducted to support the study. They are designed to inform AoA of the results of performance measures for state and community programs on aging under the Older Americans Act.


B.2.2 Data Collection Procedures

B.2.2.1 Telephone Contact with State and Local Agencies on Aging

Information will be collected in a two-step process. The proposed design will employ a probability sample of State and Local Agencies on Aging selected with probability proportional to size (PPS), where size is the total annual budget. Once an agency is selected, it will receive a FedEx package containing an introductory letter from AoA along with detailed instructions for the AAA. Approximately two days later, a researcher will call the agency to explain the purpose of the participant telephone survey and the agency's role in it. The agencies will be asked to develop lists of clients by service: Family Caregiver Support Program, home-delivered meals, congregate meals, case management, transportation, and homemaker service recipients. The researcher will explain the participant lists the agency needs to develop (if it does not already have them) and will schedule a time to call back and select, from each list, the respondents to be asked to participate in the survey. Previous experience has enabled Westat, the contractor, to streamline the data collection procedures for the AAAs.


B.2.2.2 Telephone Survey of Participants and Caregivers

This activity entails conducting a 30-minute telephone survey of a representative sample of Older Americans Act participants and caregivers. The interviews are designed to capture participants' and caregivers' assessments of program participation and client-reported progress outcomes.


B.2.3 Sampling Plan

B.2.3.1 Sample Design

The sample design for the fourth survey will consist of two stages, with a sample of 250 AAAs in the first stage and a sample of clients, by service type, from each selected AAA in the second stage. This design is similar to that of the third survey. The client sample sizes by service type, as specified by AoA, are as follows:


  • Caregiver Services 1,000

  • Home Delivered Meals 1,000

  • Congregate Meals 1,000

  • Case Management Services 1,000

  • Transportation Services 1,000

  • Homemaker Services 1,000


As in the third national survey, these sample sizes will permit the production of reliable estimates both at the national level and at the geographic regional or demographic sub-group level.


For a two-stage design, Table B-1 presents the half-widths of the 95 percent confidence intervals (CI) for various sample sizes and for target characteristics of proportions ranging from 10 percent to 50 percent.³ The 50 percent target is a worst-case scenario, where respondents are expected to be fairly evenly split on a particular response item, limiting the reliability of the estimate (e.g., trying to predict the outcome of an election where the sample of voters is about evenly divided between two candidates). Also, the precision of any estimate greater than 50 percent is the same as that of its complement; i.e., the precision of a 70 percent estimate is the same as the precision of a 30 percent estimate. The numbers in the tables are half-widths of 95 percent CIs; i.e., the estimate ± the half-width is the CI, where the half-width is 1.96 times the standard error (SE) of the estimate. For example, Table B-1 shows that for a sample of size 1,000 and a target characteristic of around 30 percent, the CI would be the estimate ± 3.24 percent.
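The half-widths in Table B-1 are consistent with the standard formula for a proportion under a design effect, half-width = 1.96 × √(deff × p(1 − p)/n). A minimal sketch, assuming that formula with deff = 1.30:

```python
from math import sqrt

def half_width(p, n, deff=1.30):
    """95 percent CI half-width, in percentage points, for an estimated
    proportion p based on a sample of size n with design effect deff."""
    return 100 * 1.96 * sqrt(deff * p * (1 - p) / n)

print(round(half_width(0.30, 1000), 2))  # 3.24, as in Table B-1
print(round(half_width(0.10, 3500), 2))  # 1.13
print(round(half_width(0.30, 250), 2))   # 6.48, the 25-percent-region case below
```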


The table can be used to assess the adequacy of the sample sizes for both the national and the regional or sub-group level estimates. For example, if the sample size is 1,000 at the national level, then the sixth row in Table B-1 provides the precision of the estimates at the national level. From the same table, the precision of an estimate at the regional or sub-group level can be obtained by computing the sample size expected for a particular region. For instance, if a region covers 25 percent of the target population, then the sample size for that region is expected to be about 250 (out of 1,000) under a proportional allocation, and the precision of the estimates for that region can be checked from the row where the sample size equals 250 in Table B-1. Similarly, if a sub-group covers 10 percent of the target population, then the expected sample size for that sub-group is 100 out of 1,000, and the precision of the estimates for that sub-group can be checked from the row with sample size equal to 100.


The total size of the target population has a negligible impact on the sample size requirement. For example, if a sample size of 250 is required to produce an estimate at the national level, then to estimate the same characteristic for a particular region (with the same level of precision), the required sample size from that region alone would be about 250. If there are four regions, then the required sample size at the national level would be about 1,000 (to guarantee adequate representation in each group). Therefore, to meet the objective of the proposed study (i.e., to produce estimates at the regional or sub-group level with the same level of precision as the national estimates obtained from previous studies), the required sample size for each target region or sub-group will have to be the same as the total sample size of the previous studies.


For instance, a question in the first national survey asked about the timeliness of meal delivery, and an estimated 44 percent of all clients reported that the meals arrived on time, all the time. This estimate was based on a sample of 472 clients and had a CI of ±5.2 percent. Table B-1 shows that to achieve a CI of ±5.2 percent for an estimate with a proportion between 40 percent and 50 percent, a sample of around 480 is required. That means that if this estimate is required at the regional level with the same level of confidence as the national estimate, then the sample size in each region will have to be 480, and hence the sample size at the national level will be 480 x 4 = 1,920. In that case, the CI for this estimate at the national level would be much more precise than for a region (a little over ±2.5 percent). Table B-1 can be used to see the precision of the estimates that would be achieved at various levels using the expected sample sizes at the respective levels. The table can also be used to check the sample size requirement corresponding to a desired level of precision.
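Inverting the same half-width formula gives the sample size needed for a desired precision. A minimal sketch under the same design-effect assumption; the exact figure depends on the proportion assumed, which is why the text reads a value of about 480 from the table:

```python
from math import ceil

def required_n(p, half_width_pct, deff=1.30):
    """Smallest sample size whose 95 percent CI half-width (in percentage
    points) for a proportion p does not exceed half_width_pct."""
    h = half_width_pct / 100
    return ceil(deff * p * (1 - p) * (1.96 / h) ** 2)

print(required_n(0.44, 5.2))  # 456
print(required_n(0.50, 5.2))  # 462; both near the ~480 read from Table B-1
```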


Table B-1. Half-widths of 95 percent confidence intervals by various sample sizes and estimates of target characteristics (computed for a two-stage design with a design effect of 1.30)

Sample size | 10 percent | 20 percent | 30 percent | 40 percent | 50 percent
3,500 | 1.13 | 1.51 | 1.73 | 1.85 | 1.89
3,000 | 1.22 | 1.63 | 1.87 | 2.00 | 2.04
2,500 | 1.34 | 1.79 | 2.05 | 2.19 | 2.23
2,000 | 1.50 | 2.00 | 2.29 | 2.45 | 2.50
1,500 | 1.73 | 2.31 | 2.64 | 2.83 | 2.89
1,000 | 2.12 | 2.83 | 3.24 | 3.46 | 3.53
750 | 2.45 | 3.26 | 3.74 | 4.00 | 4.08
500 | 3.00 | 4.00 | 4.58 | 4.90 | 5.00
400 | 3.35 | 4.47 | 5.12 | 5.47 | 5.59
300 | 3.87 | 5.16 | 5.91 | 6.32 | 6.45
250 | 4.24 | 5.65 | 6.48 | 6.92 | 7.07
200 | 4.74 | 6.32 | 7.24 | 7.74 | 7.90
100 | 6.70 | 8.94 | 10.24 | 10.95 | 11.17

(Column headings are the estimates of the target characteristics.)


It is important to note that if the population sizes in the sub-groups or regions vary widely, then the national sample must be allocated appropriately to produce estimates from all individual sub-groups/regions with an equal level of precision. Otherwise, under a proportionate allocation, larger sub-groups will have more than the required sample size while the smaller sub-groups will have less than the sample size required. For example, if the estimates are required separately for Whites and African-Americans, then just increasing the national sample would not ensure sufficient sample size for African-Americans, because less than 15 percent of recipients are African-Americans for many services. In this situation, the national sample can be disproportionately allocated by over-sampling smaller sub-groups to ensure that sufficient samples are drawn from all target sub-groups. However, over-sampling an ethnic or demographic group will require that agencies first list all their clients with the characteristic of interest and then select a sample from this list by sub-group (which may exceed the capacity of many AAA information systems).
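A minimal sketch of the two allocation rules discussed above, proportional versus equal-precision (equal sample size per sub-group); the population shares are illustrative assumptions, not program data:

```python
def allocate(total_n, shares, equal_precision=False):
    """Allocate a national sample of total_n across sub-groups.
    shares: each sub-group's share of the target population (sums to 1).
    Proportional allocation follows the shares; equal-precision
    allocation gives every sub-group the same sample size."""
    k = len(shares)
    if equal_precision:
        return [total_n // k] * k
    return [round(total_n * s) for s in shares]

# Illustrative shares only; actual recipient distributions vary by service.
shares = [0.85, 0.15]  # e.g., a larger and a smaller sub-group
print(allocate(1000, shares))                        # [850, 150]
print(allocate(1000, shares, equal_precision=True))  # [500, 500]
```

Under equal-precision allocation, the smaller sub-group is sampled well above its population share, which is exactly the over-sampling described above.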


B.2.3.2 Sample Size for Estimation of Change

If there is interest in comparing estimates from one year with another year, or comparing estimates of one sub-group with another sub-group, the sample size requirements are different from those needed to produce the individual point estimates at the same level of precision. The standard error (SE) of the difference between two independent estimates A and B can be obtained by the formula SE(A − B) = √(SE(A)² + SE(B)²), and the half-width of the 95 percent CI is 1.96 × √(SE(A)² + SE(B)²). Since the variance of the difference between two estimates is the sum of the variances of the individual estimates, the required sample size for estimating a difference or change is higher than for a single point estimate.


Table B-2 presents half-widths of 95 percent CIs under a two-stage design for various sample sizes and various averages of the two estimates to be compared. For example, if the average of the two target characteristics to be compared is around 30 percent (say, A = 25 percent and B = 35 percent) and the sample size in each sub-group is 500, then to detect a difference between the two sub-groups with statistical significance, the actual difference between the two sub-group characteristics will have to be at least 6.48 percent. This is much higher than the corresponding half-widths presented in Table B-1 for each of the individual estimates. That means a sample size sufficient to produce a reliable point estimate for each sub-group individually is not necessarily sufficient to detect the difference between the two sub-groups with the same level of precision.
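For two independent groups of equal size with a common average proportion, the half-width for the difference works out to √2 times the single-estimate half-width, which is consistent with Table B-2. A minimal sketch, assuming the same design effect of 1.30:

```python
from math import sqrt

def diff_half_width(p, n_a, n_b, deff=1.30):
    """95 percent CI half-width (percentage points) for the difference
    between two independent estimated proportions, both near p, with
    sample sizes n_a and n_b and a common design effect deff."""
    var = deff * p * (1 - p) * (1 / n_a + 1 / n_b)
    return 100 * 1.96 * sqrt(var)

print(round(diff_half_width(0.30, 500, 500), 2))    # 6.48, as in Table B-2
print(round(diff_half_width(0.30, 1000, 1000), 2))  # 4.58
```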


Therefore, if the survey is designed for use at a region or sub-group level, then the corresponding national estimates can be compared meaningfully from one year to another, or for one service versus another (e.g., the percent of each service’s clients below a certain income level). For example, if the sample size is 1,000 in each year, and if the average response proportion for the two target characteristics is around 30 percent, then a difference of 4.58 percent or more between the years is detectable. The corresponding comparison with a sub-group sample of size 500 would not allow the detection of a difference smaller than 6.48 percent. Table B-2 can be used to see the extent of difference that can be detected under a two-stage design, for various sample sizes, and for various characteristics to be compared, either at the national or at the sub-group level.


Table B-2. Half-widths of 95 percent confidence intervals for the difference between two estimates, by various sample sizes and for various averages of the two estimates (computed for a two-stage design with a design effect of 1.30)

Sample size in each group | 10 percent | 20 percent | 30 percent | 40 percent | 50 percent
3,500 | 1.60 | 2.14 | 2.45 | 2.62 | 2.67
3,000 | 1.73 | 2.31 | 2.64 | 2.83 | 2.89
2,500 | 1.90 | 2.53 | 2.90 | 3.10 | 3.16
2,000 | 2.12 | 2.83 | 3.24 | 3.46 | 3.53
1,500 | 2.45 | 3.26 | 3.74 | 4.00 | 4.08
1,000 | 3.00 | 4.00 | 4.58 | 4.90 | 5.00
750 | 3.46 | 4.62 | 5.29 | 5.65 | 5.77
500 | 4.24 | 5.65 | 6.48 | 6.92 | 7.07
400 | 4.74 | 6.32 | 7.24 | 7.74 | 7.90
300 | 5.47 | 7.30 | 8.36 | 8.94 | 9.12
250 | 6.00 | 8.00 | 9.16 | 9.79 | 9.99
200 | 6.70 | 8.94 | 10.24 | 10.95 | 11.17
100 | 9.48 | 12.64 | 14.48 | 15.48 | 15.80

(Column headings are the average of the two estimates to be compared.)



B.2.3.3 Sample Survey Operations

Westat will work with the States and the 250 selected AAAs to draw a random sample of OAA service recipients for the six service areas being studied: National Family Caregiver Support Program, Home-Delivered Meals, Congregate Meals, Case Management Services, Transportation Services, and Homemaker Services. This work will be conducted after OMB clearance. Westat will contact the selected AAAs, collect client sample contact information, and submit this material to the Telephone Research Center (TRC) for interviewing. Based on the experience of the previous surveys, the AAA recruiting process will take three months to complete; however, because sampling and interviewing will run concurrently, the client interviewing will be completed within seven months of the start of the sampling process.


B.2.4 Older Americans Act Participant Survey Instruments

The study consists of telephone interviews with service recipients and caregivers. The interview is structured and will contain specific questions about the mix of services the person has received and his or her assessment of those services. Wherever appropriate, questions will use predefined response categories. Probes will be used to help obtain complete responses to all questions. The caregiver interviews will not include the physical functioning questions about the caregivers themselves, although caregivers will be asked about the health conditions and the ADL and IADL limitations of their care recipients. The interviews will last approximately 30 minutes and cover the topics discussed below. Since service recipients will be selected from lists by service, respondents will be asked only about the service for which they were selected for an interview. This is the same process followed for each of the previous surveys. These questionnaires can be found in Appendix H.


  1. Nutrition-Congregate Meals--If a respondent receives Congregate Meals, they will be administered a questionnaire based on the Congregate Meals survey used for the first and second national surveys, as well as POMP I through VI. This questionnaire asks how long they have been attending the congregate meals program; how often they eat at the site; when they last ate at the site; to rate the program; about their daily food intake; and how much of their food intake the meal provides on the days they eat at the site.

  2. Nutrition-Home-delivered Meals--If a respondent receives Home-delivered Meals, they will be administered a questionnaire based on the Home-delivered Meals survey used for the first, second, and third national surveys, and POMP I through VI. This questionnaire asks how long they have been receiving home-delivered meals; how often they receive them; when they last received a meal; to rate the program; about their daily food intake; and how much of their food intake the meal provides on the days they receive home-delivered meals.

  3. Transportation--All service recipients who use transportation services will be interviewed using this survey module. The module asks how long they have been using the transportation service; how often they use it; when they last used it; about trip purposes; and to rate the transportation service. This module is based on the instrument used for the first three surveys and all six of the POMP surveys.

  4. Homemaker/Housekeeping--Questions on the impact of homecare services will be asked of respondents who receive homemaker or housekeeping services. These questions are based on the Housekeeping Service Module developed by the POMP VI grantees. Again, the set of questions is similar to those asked about the other services: how long respondents have been receiving homemaker services; how often they receive them; when they last used the services; to rate the program; and whether they can depend on their aides to deliver the allotted services.

  5. Case Management--Service recipients who receive case management services will be asked questions about their experiences with the program. They will be asked how long they have been receiving the services; how they would rate various aspects of the case management services (e.g., ease of contact with the case managers and whether the case managers understand their needs); to rate the services overall; and whether they contribute to the decisions about their care. This module is based on the case management module developed by the POMP V grantees.

  6. Service List--All service recipients will then be asked about the mix of services they receive and the impact of those services. They will also be asked to rate the services overall. This module is based on the service module used for the third national survey, with added questions from POMP VI.

  7. Physical Functioning--This module will be asked of all service recipients. It includes questions on how the respondents' mental and physical health affects their day-to-day lives; Activities of Daily Living (ADL) limitations (e.g., difficulty with personal care activities such as bathing and dressing); and Instrumental Activities of Daily Living (IADL) limitations (e.g., difficulty with home management activities such as meal preparation, shopping, and housekeeping). Questions about the respondents' health are also asked, to help assess the frailty of the clients served by OAA services. Caregivers will be asked these questions about their care recipients.

  8. National Family Caregiver Support Program Assessment--Caregivers who receive caregiver support services through the National Family Caregiver Support Program will be surveyed as part of the Fourth National Study of Title III Service Recipients. This module has questions on services offered to caregivers through the National Family Caregiver Support Program and the impact of those services. There are also questions about the services the care recipient receives and satisfaction with and impact of those services; the support the caregiver receives, either as part of a formal support group or from other relatives and friends; and what other kinds of information the caregiver would find valuable. The survey asks about the type of help the caregiver provides for the care recipient, the amount of time they provide care, the benefits caregiving provides them, the drawbacks of caregiving (financial burdens, lack of private time, etc.), and demographic and health information on the care recipient. Three of the questions for this module were adapted from an AARP survey, Caregiving in the U.S.⁴

  9. Demographic information of the respondent--Demographic information about the respondent will be collected, including type of area of residence (urban, suburban, or rural), ZIP Code, education level, race, gender, living arrangements (living alone, with spouse, or with others), and income level. Response items to the income questions for meals recipients have been set to enable determination of poverty level. This module will be administered to all participants. The caregiver survey already includes some demographic questions about the care recipient, but the demographic information on the caregiver will be gathered using this demographic module.

Many of the national survey questions come from such commonly used vehicles as the Survey of Income and Program Participation (SIPP) (e.g., the ADL and IADL questions), the Behavioral Risk Factor Surveillance System (BRFSS) surveys conducted within each state using HHS/CDC standard questions, CDC's Quality of Life Measures, the SF-12 v. 2, and other existing surveys. These are virtually the same instruments used for the previous three national surveys (see Appendix F for a comparison of the questions on the first, second, third, and fourth surveys).


B.3 Eliciting Cooperation/Maximizing Response Rates

AoA does not expect problems in eliciting cooperation from the respondents who will be interviewed during the course of the study. Proven methods will be used to achieve very good completion rates (80 percent both for the agencies selected for the participant survey and for service recipients). These methods include a letter from AoA informing agencies about the study, encouraging cooperation, and using knowledgeable Westat staff to contact the State and Local Agencies and respondents.


Nonresponse adjustment was done as part of the weighting process for the First and Second National Surveys of OAA Participants and the Third National Study of Title III Service Recipients. The weights of the respondents were inflated to account for the weights of the non-respondents, separately for each service. The adjustment was applied independently within nonresponse adjustment groups defined by census region and agency size; that is, the non-respondents within a group are represented by the respondents in the same group. The same types of nonresponse adjustment will be done for the fourth study.
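A minimal sketch of this type of cell-based nonresponse adjustment; the cell labels, weights, and case structure are hypothetical:

```python
from collections import defaultdict

def nonresponse_adjust(cases):
    """Inflate respondents' base weights so they also represent the
    non-respondents in the same adjustment cell (census region x agency
    size). Each case: {"cell": ..., "weight": ..., "responded": bool}."""
    total = defaultdict(float)
    resp = defaultdict(float)
    for c in cases:
        total[c["cell"]] += c["weight"]
        if c["responded"]:
            resp[c["cell"]] += c["weight"]
    factor = {cell: total[cell] / resp[cell] for cell in total if resp[cell] > 0}
    return [dict(c, adj_weight=c["weight"] * factor[c["cell"]])
            for c in cases if c["responded"]]

# Hypothetical cell with one respondent and one non-respondent of equal weight:
cases = [
    {"cell": ("South", "large"), "weight": 120.0, "responded": True},
    {"cell": ("South", "large"), "weight": 120.0, "responded": False},
]
print(nonresponse_adjust(cases))  # respondent's adjusted weight is 240.0
```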


As was done for the previous three national surveys, several steps will be taken to ensure high agency response rates (for the First National Survey, Westat recruited 88 percent of the AAAs; for the Second National Survey, the AAA response rate was 83.6 percent; and for the Third National Survey, Westat recruited 87 percent of the AAAs). First, agencies will receive an early communication introducing the study, explaining the purpose of the data collection, and describing the steps needed to produce numbered participant lists by service (see Appendix E). Westat staff will assist agencies as much as possible by telephone. The agencies will also be given a toll-free number, with their assigned Westat staff person's extension, so they may call with any questions. Westat staff will also contact agencies that have not contacted them, remind them of the letter they received from AoA and the packet of information and instructions sent by Westat, and assure them of the ease of participating in the selection of respondents.


Westat will use proven methods to ensure response rates from older persons. These include special techniques taught during interviewer training, such as communicating simply and clearly, repeating questions when necessary, and assuring legitimacy and confidentiality. The AAAs will send the selected respondents a prenotification letter, approved by AoA, on their own AAA letterhead (see Appendix E), and provide a toll-free number respondents can call to verify the study. At all times respondents will be assured of the voluntary nature of the study and the confidentiality of their responses. Westat will also assure them that their decision on whether or not to cooperate with the study will have no effect on their eligibility for services.


Other elements for achieving a high response rate include acquiring an experienced, sensitive interviewing staff; developing a training program that prepares the interviewers for the survey tasks; implementing appropriate interviewing procedures; being sufficiently flexible to accommodate respondents’ requests; and implementing sound management and quality control procedures. Factors that specifically influence reluctant individuals to participate include the following:


Interviewers’ ability to obtain cooperation—Westat will use as many experienced interviewers as possible. New interviewers will be thoroughly trained in general interviewing techniques prior to the project-specific training. All interviewers will be monitored, evaluated, and provided with instant feedback on their performance to eliminate interaction patterns or telephone demeanor that might be detrimental to achieving cooperation. For the national surveys of OAA Participants, Westat used all experienced interviewers. Most of the interviewers for the Third National Study had worked on the prior two studies.


Flexibility in scheduling interviews—Being available to speak with people when it is most convenient for them is sometimes overlooked as a factor that can tip the balance in favor of cooperation for an individual who has doubts about participating. Interviewing activities for the survey will be scheduled to coincide with the hours people are most likely to be at home. Westat normally calls from 9:00AM to 9:00PM, Monday through Friday, respondent time. For the first two surveys, however, Westat called from 9:00AM to 8:30PM local time, Monday through Friday, and for the Third National Study, based on conversations interviewers had with respondents, Westat began calling at 8:30AM. Westat also normally calls from 10:00AM to 6:00PM on Saturday and from 2:00PM to 9:00PM on Sunday, respondent time. For the Fourth National Study, Westat will call between 8:30AM and 8:30PM local time Monday through Friday, 8:30AM to 6:00PM local time on Saturdays, and 2:00PM to 8:30PM local time on Sundays. Interviewers can also schedule exact or general appointment times to accommodate respondents' schedules.


Procedures to encourage participation—Perhaps the most significant technique for persuading reluctant individuals to participate is the interviewer training segment that encourages customer participation. Nearly as important is a well-planned and concerted effort to convert each refusal to final cooperation.


Refusal conversion--For each case in which the respondent refuses to participate, the interviewer will complete a Non-Interview Response Form (NIRF). The form will capture information about key characteristics of the refusing respondent and the stated reason(s) for refusing to participate.


Special interviewer training sessions led by highly experienced supervisors will be held for a select group of interviewers. The sessions will include participating in the analysis of survey-specific and generic reasons for refusal, preparing answers and statements that are responsive to the objections, effective use of voice and manner on the telephone, and role-playing of different situations. This team of customer cooperation interviewers will re-contact the reluctant respondents. For the 2002, 2004 and 2005 surveys, the refusal conversion rate was about 40 percent.


Use of proxies and interpreters--Although Westat does not anticipate that many respondents will be unable to complete the questionnaire by themselves, some interviews may require interpreters or proxies. (In the first three surveys, less than one percent of the interviews were proxy interviews. The surveys have always been offered in Spanish, which reduces the need for interpreters.) The respondent's own responses are always preferable to those of a proxy. Therefore, Westat first attempts to determine whether someone in the respondent's household can act as an interpreter. If that is not possible, then a proxy can be interviewed. Westat will allow the use of proxies when the sampled persons cannot or will not respond for themselves. Interviewers will be trained to recognize situations where proxies are appropriate; however, the final decision on using a proxy to complete the interview will be made only by supervisory personnel. The AAA may also indicate the need for a proxy for a particular respondent on the selected participant list sent to Westat; in that case, Westat also requests contact information for the proxy or interpreter.


Quality Control--This survey will be conducted using Computer Assisted Telephone Interviewing (CATI) software. The CATI system is programmed to follow skip patterns, and interviewers are trained to probe to obtain complete answers to all survey questions. If a respondent does not know how to answer a question, or refuses to answer a particular question, those response options are allowed on the questionnaire as well; however, no question can be skipped.


For the survey, Westat will implement procedures to review and edit questionnaire responses. Westat maintains a large in-house data preparation staff experienced in performing tasks for all study types conducted at Westat. During a CATI study, data preparation staff check the CATI responses for consistency and continuously monitor the data. Interviewer comments and problem sheets are reviewed daily, and updates are made as necessary. Frequencies of responses to all data items are reviewed to ensure that appropriate skip patterns are followed by the CATI system, and each item is checked to make sure that the correct number of responses is represented. When a discrepancy is discovered, the problem cases are identified and reviewed.
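As one concrete illustration of these frequency and skip-pattern checks, the sketch below counts responses to an item and flags cases that answered it despite the gate question routing them past it; the item names and routing rule are hypothetical, not taken from the actual CATI instrument:

```python
from collections import Counter

def check_skip_pattern(records, gate_item, gate_value, dependent_item):
    """Count responses to dependent_item and flag cases that answered it
    even though their answer to gate_item should have skipped them."""
    violations = [r["case_id"] for r in records
                  if r.get(gate_item) != gate_value and dependent_item in r]
    freqs = Counter(r[dependent_item] for r in records if dependent_item in r)
    return freqs, violations

# Hypothetical items: Q10 ("receives home-delivered meals?") gates Q11.
records = [
    {"case_id": 1, "Q10": "yes", "Q11": "daily"},
    {"case_id": 2, "Q10": "no"},
    {"case_id": 3, "Q10": "no", "Q11": "weekly"},  # should have been skipped
]
freqs, bad = check_skip_pattern(records, "Q10", "yes", "Q11")
print(freqs)  # Counter({'daily': 1, 'weekly': 1})
print(bad)    # [3]
```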


Frequencies of responses to open-ended and other/specify responses are also run. These responses are reviewed and are either up-coded into existing response categories (for other/specify responses) or categories are developed (for both open-ended and other/specify responses) for analysis.


Cognitive issues related to telephone survey administration--The minimal use of proxies and the fact that the questionnaires have been well tested contributed to few problems with respondents understanding the questions. Westat also conducted interviewer debriefings at the end of data collection for all three surveys. Interviewers were asked about problems and concerns they had, based on their experiences interviewing respondents, and revisions were made to the questionnaire based on the debriefings. For instance, interviewers said that caregiver respondents in the first national survey sometimes were not sure they qualified as caregivers and asked for a better way to help define who was a caregiver. For the Third National Study of Title III Service Recipients, only caregivers served by the National Family Caregiver Support Program were interviewed; the same will be true for the fourth national survey. Interviewers also indicated the need for additional response categories for certain questions. For example, some respondents said they do not eat certain types of food (e.g., fruit) every day, and response options have been added to capture that information.


B.4 Tests of Data Collection Instruments and Procedures

The sampling procedures and previous data collection instruments were successfully used in the earlier studies. The fourth national study instruments were developed and pre-tested by the POMP VI AAAs. The selection of respondents from numbered lists was successfully used for the previous three AoA National Surveys and for a recent OMB-approved Westat study conducted for the U.S. Department of Education's Rehabilitation Services Administration. These performance measures were used for the previous three national surveys, and revised versions will also be used for the Fourth National Study of Title III Service Recipients modules. The surveys have been revised to 1) allow respondents to assess the services they receive and to evaluate the combination of services received, and 2) make the questionnaires less customer-satisfaction oriented and more assessment/outcome based.


B.5 Use of Statistical Survey Methodology

The use of statistical sampling methods is critical to this study. Westat has developed the sampling plan for this survey, as described in Section B.2.3, using standard statistical methods. Westat and the Administration on Aging are also responsible for selecting the sample and carrying out the analyses. AoA has consulted with Dwight Brock, a Westat statistician, on developing the sampling plan for the selection of the agencies and the participants, as well as on the survey methodology.



1 Centers for Disease Control and Prevention (n.d.). How does CDC measure population health-related quality of life? Retrieved April 17, 2006, from the National Center for Chronic Disease Prevention and Health Promotion, Health-Related Quality of Life: Methods and Measures Web site: http://www.cdc.gov/hrqol/methods.htm

2 Ware JE, Kosinski M, and Keller SD. A 12-Item Short-Form Health Survey: Construction of scales and preliminary tests of reliability and validity. Medical Care, 1996;34(3):220-233. Retrieved April 17, 2006, from http://www.sf-36.org/demos/SF-12v2.html

3 This percent range refers to the client response patterns that may occur; for example, in a yes/no question, it refers to the expected percent of respondents who will answer yes versus no.

4 National Alliance for Caregiving and AARP (2004, April). Caregiving in the U.S., Appendix C, pp. 16-17. Retrieved July 27, 2004, from the AARP Web site: http://research.aarp.org/il/us_caregiving.pdf

