
Plan for Evaluation of the Trade Adjustment Assistance Program

Nonsubstantive Change Justification

OMB: 1205-0460


REVISED – 8/28/08

Supporting Statement/Justification

Increasing the Monetary Incentive for Respondents

In the Baseline Survey of the TAA Evaluation

OMB Control No. 1205-0460


Introduction


The National Evaluation of the Trade Adjustment Assistance (TAA) Program is a $10.4 million impact study that includes two treatment and two comparison groups. It is being conducted by Social Policy Research Associates, with Mathematica Policy Research (MPR) as the subcontractor responsible for the survey data collection and the statistical aspects of the design. The evaluation, which began in 2004, is now scheduled to end in March 2011, due to delays associated with the initial Office of Management and Budget (OMB) clearance process1 and subsequent problems with recruiting a number of states to participate in the evaluation.


The evaluation was undertaken in response both to OMB’s Program Assessment Rating Tool review and to the passage of new legislation in 2002 reauthorizing and amending TAA. The TAA evaluation is thus intended to provide information on areas of OMB concern and to generate high-quality information that will be of use in the development of legislation, budget proposals, regulations, administrative guidance, and technical assistance.


Baseline Survey Design and Role in the Evaluation


The evaluation relies on statistically matched comparison groups rather than randomly selected control groups (because of the nature of the TAA program under which benefits cannot be arbitrarily denied to eligible individuals). The study involves a baseline survey, as well as follow-up surveys. Data collection for the baseline survey began in early March 2008 and the total number of individuals in the sample is currently anticipated to be 9,990.


The baseline survey for the TAA evaluation has involved incentive payments to sample members to complete the telephone interview. Sample members have been randomly assigned to one of three incentive groups: 1) a $25 post-payment only group (60 percent of cases), 2) a $2 pre-payment and $25 post-payment group (20 percent of cases), and 3) a $5 pre-payment and $20 post-payment group (20 percent of cases). Sample members are sent an advance letter (from MPR) describing the study and the incentive payment, and inviting their participation. Sample members are then contacted by telephone. Non-respondents are contacted on average 11 times, at different times of day, in order to secure their response. However, depending on the type of non-response, there can be from 15 to 20 attempts.
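For concreteness, the random assignment above amounts to a simple weighted draw. The following is a minimal sketch of how it might be implemented; the group labels and function are hypothetical illustrations, not MPR's actual production system.

```python
import random

# Hypothetical sketch of the 60/20/20 incentive-group assignment described above.
INCENTIVE_GROUPS = [
    ("post_25", 0.60),      # $25 post-payment only
    ("pre2_post25", 0.20),  # $2 pre-payment + $25 post-payment
    ("pre5_post20", 0.20),  # $5 pre-payment + $20 post-payment
]

def assign_incentive_group(rng: random.Random) -> str:
    """Draw one incentive-group label with the specified probabilities."""
    labels, weights = zip(*INCENTIVE_GROUPS)
    return rng.choices(labels, weights=weights, k=1)[0]

rng = random.Random(20080828)  # fixed seed so the assignment is reproducible
sample_ids = range(1, 9991)    # the 9,990 sample members
assignments = {sid: assign_incentive_group(rng) for sid in sample_ids}
```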

At the time of this writing (8/28/08), baseline survey interviews are being conducted with respondents in 15 states; data collection will also occur in 11 additional states during the next several months, assuming that all remaining states provide the full Unemployment Insurance (UI) data, as they have agreed, in a timely fashion. The baseline survey data collection is due to end in late January 2009.


The baseline survey will provide critical information on service interventions for TAA participants, eligible workers who chose not to accept the TAA offer (the “nonparticipants”), and UI claimants. Information from the survey will be used for a second round of comparison group matching, as well as in generating estimates of the impact of the lengthy training interventions possible under TAA and of potential correlations with spousal employment and earnings, previous educational level, and attributes of later jobs. Estimates of the impact of TAA benefits and services on different subpopulations (identified via the survey) will also be possible, but only if there are sufficient numbers of respondents in the treatment and comparison groups.
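As background on the matching step, the sketch below shows one standard technique, nearest-neighbor matching on an estimated propensity score, applied to synthetic data. The evaluation's actual matching model and covariates are not described here, so every name and variable in the sketch is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins: X = baseline covariates (e.g., age, prior earnings),
# t = 1 for the TAA group, 0 for the UI-claimant pool.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
t = rng.integers(0, 2, size=500)

# Step 1: estimate each person's probability of being in the TAA group.
pscore = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

# Step 2: match each TAA case to the comparison case with the closest score.
treated = np.where(t == 1)[0]
comparison = np.where(t == 0)[0]
matches = {
    i: comparison[np.argmin(np.abs(pscore[comparison] - pscore[i]))]
    for i in treated
}
```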


Response Rates Lower Than Anticipated


High response rates in general are necessary to increase the chances that survey respondents are representative of all sample members and that impact estimates are generalizable nationally. However, equal response rates among all treatment and comparison groups are also vital, since they are necessary for obtaining unbiased impact estimates (average treatment-to-comparison-group differences) of TAA services on key participant outcomes. Further, response rates among subpopulations with different characteristics must also be similar to avoid other potential sources of bias in impact estimates.
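To make the bias concern concrete, the decomposition below (our own formalization, not notation from the study design) shows how the respondent-based difference in mean outcomes splits into the true impact plus a nonresponse term; the latter vanishes only if response is unrelated to outcomes, or related to them in the same way, in both groups.

```latex
% Y = outcome, R = 1 if a sample member responds, subscripts T/C denote the
% treatment and comparison groups. The respondent-based estimate is
\begin{align*}
\hat{\Delta}
  &= \mathbb{E}[Y_T \mid R_T = 1] - \mathbb{E}[Y_C \mid R_C = 1] \\
  &= \underbrace{\mathbb{E}[Y_T] - \mathbb{E}[Y_C]}_{\text{true impact}}
   + \underbrace{\bigl(\mathbb{E}[Y_T \mid R_T = 1] - \mathbb{E}[Y_T]\bigr)
   - \bigl(\mathbb{E}[Y_C \mid R_C = 1] - \mathbb{E}[Y_C]\bigr)}_{\text{nonresponse bias}}
\end{align*}
```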


At this time, response rates to the telephone interview have been lower than expected. For sample members in the seven states where the survey has been conducted the longest, the overall response rate is about 46 percent, with averages of 60 percent among those who received TAA benefits and services, 48 percent among TAA nonparticipants, and 40 percent among the comparison group of UI claimants, as noted in the attached tables.2 For the state that has been worked the longest (since March 13, 2008), the response rate is only 47 percent overall: 66 percent among TAA participants, 44 percent among TAA nonparticipants, and about 40 percent among comparison group members.


In addition, response rates vary considerably among sample members with different characteristics, as shown in the following table of gender and age groups for all 15 states in which sample has been released.

Response Rates as of 8/16/08

                       Sample Size    Response Rate
  By Gender
    Male                     3,282              35%
    Female                   2,952              42%
  By Age
    Under age 30               518              30%
    Age 30 to 39             1,130              31%
    Age 40 to 49             1,827              36%
    Age 50 or more           2,759              44%

Reasons for Low Response Rates


Three factors appear responsible for the low response rates: 1) contact information is no longer correct for a large number of sample members, 2) securing new contact information for incorrect phone numbers and addresses has been difficult, and 3) a number of potential respondents appear to lack motivation to respond, even when contact information is correct.


Initial contact information (name, address, and telephone number) for the TAA sample and their matched comparison group is obtained from UI claims records. Most of the sample consists of workers who became unemployed in 2005 or 2006, and the contact information in the UI files is therefore out of date for many sample members. Efforts to locate current contact information have been undertaken for more than a third of the sample so far. MPR’s trained locating staff are using all available means3 to find new phone and address information.


This brings us to the second key factor in nonresponse: the difficulty of obtaining correct contact information. Some sample members do not use a landline phone, pay rent, or make the kinds of financial transactions that would trigger identification through database searches, and thus no additional contact information can be found for them.


Many of the advance and locating letters for non-respondents have not been returned as undeliverable, suggesting these individuals may be living with family or friends at the available addresses (though it is difficult to know this for certain). If sample members are at these addresses but not responding to repeated contact attempts, the third factor, lack of motivation to respond, is the likely cause. Lack of motivation has also been evident after contact has been made: many sample members asked to be called back but then did not answer the phone when the subsequent call was made. Given that many phones and answering machines now display the caller’s number, avoiding these calls is simple. The large number of “soft” refusals, such as “I'm really too busy at the moment,” is another manifestation of this type of non-response. Lack of motivation thus appears to be the biggest challenge in reaching an acceptable response rate.


Proposed Approach to Increasing Response Rates


A new approach is needed to increase response rates so that the evaluation can obtain nationally representative and unbiased impact estimates of the TAA program. With approximately four and one-half months left for data collection, we propose a two-pronged strategy: 1) changes in procedures and 2) testing various incentive payments. Since the most effective, and most cost-effective, incentive amount is unknown, we request permission to conduct a short-term experiment over the next two months to test different incentive amounts with the different status groups in the evaluation.


If approved, the experimental results will be analyzed to determine the efficacy of different incentive amounts in increasing response rates overall, for each status group, and for different subpopulations of respondents. We will obtain a clearer picture of which level or combination of incentives is most likely to reduce differences in response rates among status groups and among subgroups with different characteristics, thus reducing potential bias in the final estimates. We are hopeful that groups with lower response rates (e.g., younger people, who often have less savings and lower household income) may be particularly responsive to a modest increase in financial incentives.
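To illustrate the kind of comparison such an analysis involves, the sketch below applies a standard two-proportion z-test to the response rates of two incentive cells. The counts are hypothetical, and the actual analysis plan may use different methods.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(completes1: int, n1: int,
                         completes2: int, n2: int) -> tuple[float, float]:
    """Pooled two-proportion z-test comparing response rates of two cells."""
    p1, p2 = completes1 / n1, completes2 / n2
    pooled = (completes1 + completes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed
    return z, p_value

# Hypothetical counts: 140 of 350 ($25 cell) vs. 168 of 350 ($50 cell) respond.
z, p = two_proportion_ztest(140, 350, 168, 350)
print(f"z = {z:.2f}, p = {p:.3f}")  # a significant difference at the .05 level
```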


We will also conduct an analysis of the costs of the different options, accounting not only for higher incentive payments but also for savings associated with reduced contact and locating efforts. The analysis, available in late October, will be shared with OMB and used to develop options for the incentive structure during the last months of data collection.


Our proposal includes the following:


1. Changes in Data Collection Procedures:


Changes in data collection procedures can be important “tools” for garnering higher response rates. Survey participation depends both on effectively making contact with sample members and on persuading them to be interviewed. With the changes in procedures described below, we hope to increase contact rates and respondents’ willingness to participate. We propose the following:

  • Sending all correspondence to sample members (e.g., the advance, refusal conversion, and locating letters) on U.S. Department of Labor (DOL) letterhead, over a DOL official’s signature, and with a DOL contact number, rather than on MPR letterhead with an MPR manager’s signature, as is done now. This more “official” correspondence, it is hoped, will receive greater attention from respondents and lend greater legitimacy to the request than a letter from MPR;


  • Using priority mail4, with its visually prominent red, white, and blue envelope, to send refusal conversion letters to the groups with the lowest response rates (non-respondents in Groups B through D), based on MPR’s past successful use of priority mail. In addition, we will send all existing non-respondents a follow-up postcard with the amount of the incentive prominently displayed, to alert potential respondents and their families;


  • Reviewing current procedures for using social security numbers to locate addresses and phone numbers, and ensuring that the most productive methods are applied systematically across all cases, thereby taking full advantage of the availability of social security numbers in the TAA study;


  • Reviewing the CATI production records to determine the most productive interview completion times and, as needed, increasing the number of interviewers for these time periods;


  • Selecting a core group of MPR’s most effective refusal-conversion interviewers and increasing their work hours on this project;


  • Conducting a CATI interviewer debriefing to identify the approaches most successful in making contact with TAA households, and ensuring that all voice-mail messages left by interviewers clearly identify the incentive amounts; and


  • Conducting additional refusal conversion training as needed.


2. Incentive Experiment


This revised survey design proposes a short-term experiment that recognizes the need for different approaches with different types of sample members: (1) TAA participants (Group A) versus other sample members, including TAA nonparticipants and the two comparison groups (Groups B, C, and D), and (2) existing non-respondents (those who have already been released for interviewing) versus new cases that will be released for interviewing within the next month or so. The description below begins with the approaches to be used with TAA participants (Group A) and then discusses the approaches to be used with all other sample members (Groups B-D).

a. Experimental Design for TAA Participants (Group A)

Existing Non-respondents (N=700). The response rate for Group A is about 60 percent in the seven states that were released for interviewing the earliest, significantly higher than the 43 percent average among the other sample groups. We therefore propose an experiment to modestly increase the response rate among Group A non-respondents: sending refusal conversion and locating letters, via regular mail, printed on DOL letterhead and signed by a DOL official, and testing a $25 incentive against a $50 one.

  • This approach will allow us to determine a) whether the new letterhead and the other survey operations changes discussed above are sufficient to increase responses overall and in states with lower rates, and b) whether the $50 payment also helps achieve modestly higher response rates.

  • The two incentive treatments will be evenly divided among all cases. The minimum detectable difference (MDD) in response rates between the two groups would be 9.3 percentage points5.


New Cases (N=1,500). We propose to use advance letters on DOL letterhead, signed by a DOL official, and to test the $25 incentive against a $50 incentive. This will allow us to test the effect of the USDOL versus the MPR advance letter with the same incentive as in the initial data collection, and to determine the effect of a $50 incentive when it is offered initially rather than after an original offer of $25 (as with the existing non-respondents). The MDD between the two evenly divided groups of these cases would be 6.3 percentage points.


b. Experimental Design for Comparison Groups and TAA Nonparticipants (Groups B through D)


Existing Non-respondents (N=3,000). The combined survey response rate for Groups B through D is only 43 percent in the seven earliest states. To obtain credible information about the effects of the alternate procedures and about which incentive amount works best, we propose the following:


  • Sending the refusal conversion and locating letters on DOL stationery over a DOL official’s signature, and also using priority mail (as discussed above) to get the attention of sample members. In addition, a postcard will be sent a few days after the letter to give sample members and their families another opportunity to learn about the new incentive, in case they do not open the letter.


  • Conducting an incentive experiment with three groups of non-respondents, who would receive a $25, $50, or $75 incentive upon completion of the interview. Because the $25 amount has already been tried, and has produced low response rates in all states thus far, we propose to test that amount with only 20 percent of the non-respondents in Groups B-D. We will test the $50 and $75 amounts with two equal-sized subgroups of the remaining 80 percent (i.e., 40 percent of the total for each of the higher incentive amounts).


  • The MDD for comparisons between the 20 and 40 percent subgroups is 6.1 percentage points; between the two 40 percent subgroups, it is 5.0 percentage points.


  • We will track the effects of the different incentive payments both for sample members with good telephone numbers and for those who appear to have good addresses but no telephone numbers.


New Cases (N=4,500). For this group of new sample members, from states about to be released, we propose to test the effect of the U.S. DOL letter, sent via regular mail, with the original $25 incentive amount, and also to test incentive levels of $50 and $75, all with the new letter. As in the incentive experiment with the existing cases, we propose to test the $25 amount with 20 percent of these cases, and the $50 and $75 amounts with 40 percent each.


  • The MDD for comparisons between the 20 and 40 percent subgroups is 5.0 percentage points; between the two 40 percent subgroups, it is 4.1 percentage points.


The proposed incentive experiment is summarized in the following table:


Survey Group   New or Existing   Expected      Incentive Payment            MDD for Pairwise Comparisons
               Cases             Sample Size   Amounts/Groups               (Percentage Points)

Group A        Existing            700         $25, $50 split into two      9.3
                                               50 percent groups
Group A        New                1,500        $25, $50 split into two      6.3
                                               50 percent groups
Groups B-D     Existing           3,000        $25, $50, $75 split into     5.0 for 40/40 comparisons;
                                               20/40/40 percent groups      6.1 for 20/40 comparisons
Groups B-D     New                4,500        $25, $50, $75 split into     4.1 for 40/40 comparisons;
                                               20/40/40 percent groups      5.0 for 20/40 comparisons


Costs


The total cost of these changes in procedures and incentive payments is difficult to calculate in the absence of data on the number of interviews completed at the differing incentive amounts. There are likely to be offsetting, and possibly significant, cost savings associated with the higher incentives, since they should result in fewer interviewer contacts and reduced locating costs. As noted above, the costs and experimental results will be analyzed and used, in consultation with OMB, to determine the course of action for the final months of data collection. A revised budget will then be prepared using estimated data collection costs for the baseline survey and identifying other changes that may be necessary in the activities or design of the evaluation.6

Other Alternatives Considered


Several other survey design options for increasing response rates were also considered, including an across-the-board increase in the incentive payment to $100, which was rejected in favor of the experiments proposed above testing amounts up to $75. The other options considered, but rejected as too expensive or infeasible, are discussed below.


One option considered was the use of local field interviewers to find sample members. However, the cost of in-person locating and interviewing would be substantial: $300 to $400 per completed interview (three to four times the cost of a telephone interview), and possibly higher if the “find rate” is low. Furthermore, this design would take too much time to implement at this juncture: setting up the field procedures, securing OMB approval, and training would take two to three months, all time lost to the actual data collection. Perhaps most importantly, this approach is unlikely to yield better response rates than the proposed higher incentive payments.


Also considered were mail, fax, and web versions of the survey. However, the length of the questionnaire and the need to insert respondent-specific information into its body render mail and fax versions impractical. Moreover, a self-administered questionnaire could confuse many respondents, given the complicated job, education, and training history grids that must be completed, greatly increasing the chance of respondent error even when an individual decides to participate. A web version was also considered, but limited computer access in the target population suggested that a web option would have limited success and that the considerable programming costs involved could not be justified.


Other Changes Proposed


The current design includes a prepay-postpay incentive experiment, which we propose to end. The new experimental design, with its test of multiple incentive amounts and procedural changes, is already complex administratively; continuing the prior experiment on top of it would add further difficulty, and cell sizes for some subgroups would be too small to yield statistically significant information. Moreover, we have already conducted the experiment with nearly 6,000 cases and found that response rate differences across the three payment incentive groups were modest and not statistically significant.


Finally, if none of the proposed options shows a likelihood of producing the desired number of completed interviews, we will consider drawing additional sample. The original sample size was 9,990 cases; with an expected 80 percent response rate, there would be 7,965 completed interviews. Depending on the results of the incentive experiment and procedural changes, we will consult with OMB on an acceptable number of cases to retain the statistical power needed for subgroup estimates. The decision on drawing additional sample would be made in mid-October, at the same time as the determination of the incentive structure for the final two and a half months of data collection.

Timeline

Given the short period remaining for data collection and for conducting the experiments, we estimate that the following timetable is reasonable for the proposals advanced in this statement:

  • 9/9/08 - MPR begins mailing advance letters to sample members in new states and re-contact letters to existing nonrespondents from old states

  • 9/11/08 - MPR begins calling sample members with new incentive amounts programmed into CATI

  • 10/16/08 - MPR delivers analysis of response rates from the experiment to USDOL

  • 10/31/08 - Consultation with OMB on experiments

  • 1/31/09 - End of baseline survey data collection

Conclusion

In light of the low response rates overall, for three of the four status groups, and for different age and gender subgroups in the TAA evaluation baseline survey, DOL proposes to make a variety of changes in data collection procedures and to test different incentive amounts with different segments of the respondent pool. Should this be approved, an analysis of response rates and of all associated costs will be conducted and results shared with OMB to determine the incentive structure for the final months of data collection. Because of the short timeframe for increasing response rates, and thus protecting DOL’s investment in this evaluation, we respectfully request an expedited response to this proposal.

1 OMB provided approval of the information collection request in November 2006. The approval expires November 30, 2009, and an extension will be sought well before the expiration.

2 The tables are in Acrobat format and are submitted with this statement.

3 These efforts include directory assistance searches on the sample member’s name and address; reverse searches on the original address and phone number; similar efforts on all forwarding addresses; web searches on Google, Facebook, and other directory systems; emails (with an 800 number for a call-in) when electronic addresses are found; use of Accurint to search for second-degree relatives and for cell phone listings; sending three locating letters to the original and updated forwarding addresses; and holding exhausted cases open for possible new Accurint information or forwarding addresses from the post office.



4 MPR is currently exploring the literature on mailing modes. So far, it has found that priority mail has been used successfully to increase response rates, particularly with non-respondents without a known phone number. Mail modes that require a signature (such as certified mail or Federal Express overnight letters) have been found successful with physicians and would probably work well with a highly educated sample. However, we are concerned that requiring sample members to sign for a letter or to pick it up at the post office might impose a burden, unduly alarm, or generate hostility among some potential respondents, and would thus be counterproductive.



5 This and all other MDD calculations assume (1) a 95 percent confidence level; (2) a two-tailed test (due to uncertainty about the direction of effects); (3) an 80 percent level of power; (4) an 80 percent response rate to the baseline interview for the base group; (5) a reduction in variance of 20 percent owing to the use of regression models (that is, an R2 value of .20); and (6) no clustering effects, because the focus of the experiment is to draw inferences about the effect of each payment type for our sample, rather than to generalize to the sample universe as a whole.
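As a rough cross-check on these figures, the sketch below applies the standard two-proportion MDD formula under assumptions (1)-(3) and (5), with a conservative p = 0.5 variance term (our simplification, since the exact variance assumption is not stated). It reproduces the reported MDDs only approximately.

```python
from math import sqrt
from statistics import NormalDist

def mdd_two_groups(n1: int, n2: int, alpha: float = 0.05,
                   power: float = 0.80, r_squared: float = 0.20,
                   p: float = 0.50) -> float:
    """Approximate MDD, in percentage points, between two response rates."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    se = sqrt((1 - r_squared) * p * (1 - p) * (1 / n1 + 1 / n2))
    return 100 * z * se

# Group A existing non-respondents: 700 cases split into two 350-case cells.
print(round(mdd_two_groups(350, 350), 1))    # ~9.5, vs. the 9.3 reported
# Groups B-D existing: 3,000 cases split 20/40/40; compare the two 40% cells.
print(round(mdd_two_groups(1200, 1200), 1))  # ~5.1, vs. the 5.0 reported
```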




6 Any significant changes to the design, if necessary, will be included in a subsequent information collection request to extend OMB approval for the current methodology.

