Supporting Statement Part B - 0785


Benefit Offset National Demonstration (BOND) Project

OMB: 0960-0785



Part B: Collection of Information Employing Statistical Methods

B.1 Statistical Methodology

B.1.1 Sample Recruitment and Random Assignment

Within each of the ten demonstration sites, we will recruit SSDI beneficiaries into the sample over an eighteen-month period and randomly assign them to one of three treatment conditions or two current law control groups. The recruitment and random assignment process occurs in two stages. Exhibit B1 depicts this process.


Stage 1. In the first stage, we randomly assign all eligible SSDI-only and concurrent beneficiaries in each demonstration site to one of three experimental conditions: a current law control group (C1); a treatment group that receives the $1 for $2 offset (T1); and a group of SSDI-only beneficiaries whom we solicit to volunteer for Stage 2 of the study, the Solicitation Pool (SP).1 We complete Stage 1 random assignment based entirely on SSA administrative records. Thus, it does not involve direct recruitment of beneficiaries, nor does it require informed consent. However, the beneficiaries assigned to the T1 group receive a letter from SSA explaining that they are subject to a new benefit schedule and providing them with information about the $1 for $2 benefit offset. SSA sent these letters to approximately one-sixth of the 80,000 Stage 1 treatment group members every two weeks over a three-month period, to prevent a possible overrun of calls to BOND call centers and site offices. The final mailing, sent in late August 2011, contained approximately 12,000 letters. We selected a random sample of 1,000 Stage 1 treatment beneficiaries assigned to the final mailing for the initial letter survey.

Because the SSDI population is by definition disabled, we established data collection procedures to accommodate various types of disabilities. If a respondent was cognitively unable to respond to the survey, the interviewer identified a proxy respondent so as not to exclude these sample members from the data collection effort. If a respondent was hearing impaired, the interviewer used a text telephone (TTY). In addition, we translated the survey instrument into Spanish; if a respondent spoke only Spanish, the interviewer conducted the survey in Spanish.


SSA follows all required notification procedures to inform beneficiaries assigned to T1 about the change in their benefit schedule. We do not contact C1 group members.


Stage 2. We sent a letter inviting all eligible beneficiaries in the Stage 1 SP group to consider participation in the Stage 2 component of the study, with the possibility that we would select them to receive an experimental benefit that would allow them to maintain SSDI eligibility and still receive partial SSDI benefits if they earn above SGA. We also selected some beneficiaries to receive, in combination with the benefit offset, enhanced work incentives counseling services that are not currently available to other beneficiaries.


Exhibit B1


The Stage 2 recruitment letter asks eligible beneficiaries to call a toll-free number to arrange an initial phone interview, or to return a postcard indicating that they are not interested in participating in the study. Site office staff contact all SP beneficiaries who express interest in the demonstration to verify that they meet demonstration eligibility criteria (for example, beneficiaries must not reach retirement age before the end of the demonstration2). The caller terminates the call if the individual does not pass this screen. If the beneficiary passes the screen, the caller provides additional information about the study. The additional information includes:


  • A summary description of the potential benefits that the individual might receive if they enter the study;3

  • A statement that some beneficiaries who enroll in the study may not receive any additional benefits; and

  • A statement that, to be included, the beneficiary will have to agree to cooperate with the study’s data collection activities, responding to the surveys and granting the evaluators permission to access administrative data for evaluation purposes only.


To conclude the call, the caller invites the beneficiary to attend an in-person enrollment session. At that session, staff provide further information about current and demonstration benefits, confirm demonstration eligibility, obtain informed consent (described in A10.2) if the beneficiary is still interested, and enroll him or her in the demonstration. Specifically, the informed consent explains that:


“….Once you have agreed to participate in BOND, a professional interviewer from Abt Associates will meet with you. The interviewer will ask you questions about your work experiences, your health, your ability to do certain activities, and health insurance coverage. The interview session will take about 60 minutes. You will also be asked about any benefits you may receive, your income, and the people that live with you.

“….If you agree to participate in BOND, you are also agreeing to participate in the long-term research study to find out whether the new program works. This will involve responding to periodic surveys from Abt Associates…. Agreeing to participate also means that you give the program staff and researchers permission to access certain types of information about you. As a condition of, and for the duration of your participation in BOND, in any of the three groups, you are giving permission for local BOND staff, researchers from the Abt Associates project staff, and SSA to obtain the following information from the date of your enrollment until [DATE]:


  • identifying information, including your name, address, Social Security number, and date of birth

  • the dates of your participation in the new program

  • U.S. Department of Education’s Rehabilitation Services Administration (RSA) Vocational Rehabilitation administrative records

  • SSA administrative records

  • Centers for Medicare & Medicaid Services (CMS) administrative records

  • Self-reported employment and earnings data…..”


Where we expect non-English-speaking potential subjects to attend recruitment events, we provide translators at each site. At the session, beneficiaries complete the baseline survey. They then go through Stage 2 random assignment, and we assign them to one of the treatment groups or the control group.


We draw the BOND sample from SSA administrative data on the universe of eligible beneficiaries using Abt Associates’ random assignment software running in a secure database environment. The random assignment software generates a permanent record of eligibility status for each case in each site. As random assignment proceeds, the software generates permanent variables recording the random assignment date(s) and group assignment(s) of each case.4
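The assignment-and-recordkeeping logic described above can be sketched in miniature as follows. This is an illustrative sketch only: the group labels and per-site assignment counts come from Exhibit B2, but the function name, fixed seed, and record layout are assumptions, not features of the production random assignment software.

```python
import random
from datetime import date

# Stage 1 groups and per-site assignment counts from Exhibit B2
# (90,200 C1 : 8,000 T1 : 21,800 SP), used here as sampling weights.
STAGE1_GROUPS = ["C1", "T1", "SP"]
STAGE1_WEIGHTS = [90200, 8000, 21800]

def assign_stage1(beneficiary_ids, seed=20110401):
    """Randomly assign eligible beneficiaries to Stage 1 groups and
    return a permanent record of each assignment (date and group),
    mirroring the permanent variables the software generates."""
    rng = random.Random(seed)  # fixed seed makes the draw reproducible
    records = {}
    for bid in beneficiary_ids:
        group = rng.choices(STAGE1_GROUPS, weights=STAGE1_WEIGHTS, k=1)[0]
        records[bid] = {
            "group": group,
            "assignment_date": date.today().isoformat(),
        }
    return records

records = assign_stage1(["A001", "A002", "A003"])  # hypothetical IDs
```

Storing the assignment date and group alongside each case ID is what makes the record "permanent" in the sense the text describes: the same seed and ID list always reproduce the same assignments.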


B.1.2 Universe of Households and Survey Samples

The demonstration sample comprises two major program groups: SSDI-only beneficiaries and concurrent beneficiaries (those receiving both SSDI and SSI benefits). Both groups include beneficiaries who are receiving benefits at the time we draw the Stage 1 sample. We refresh the sample periodically during the random assignment period to allow for the inclusion of newly approved SSDI beneficiaries. From a subsample of the SSDI-only group, we will solicit volunteers for the Stage 2 treatments.


Exhibit B2 summarizes the definition and sample sizes for all of the random assignment groups. We believe that the sample sizes assumed here, though large, are realistic, both in terms of the likely number of volunteers in the demonstration sites and in terms of the ability of the demonstration to provide services.


Exhibit B2. Definition & Size of Randomly Assigned Groups in the Benefit Offset National Demonstration

Eligible SSDI-Only and Concurrent Beneficiaries Randomly Assigned in Stage 1

Group | Treatment | RA Stage | # Assigned per Site | Total # Assigned
C1 | Current law control group | 1 | 90,200 | 902,000
T1 | 50% offset and regular work incentives counseling | 1 | 8,000 | 80,000
SP | Pool to be solicited for Stage 2 (SSDI-only) | 1 | 21,800 | 218,000
All | SSDI beneficiaries, Stage 1, all groups | 1 | 120,000 | 1,200,000

SSDI-Only Volunteers from Solicitation Pool Randomly Assigned in Stage 2

Group | Treatment | RA Stage | # Assigned per Site | Total # Assigned
T21 | 50% offset and regular work incentives counseling | 2 | 494 | 4,935
T22 | 50% offset with enhanced work incentives counseling (EWIC) | 2 | 309 | 3,089
C2 | Current law control group | 2 | 493 | 4,930
All | Stage 2 volunteers, all groups | 2 | 1,295 | 12,954

NUMBER RECEIVING:
Offset (T1, T21, T22): 88,024
EWIC (T22): 3,089

KEY (shading in the original exhibit):
  • Control group for all SSDI-only and concurrent beneficiaries (C1)
  • Control group for SSDI-only volunteers (C2)
  • 50% offset treatment groups (T1, T21, T22)


B.2 Procedures for Collecting the Information

B.2.1 Sample Design

Data to analyze the impacts of the Stage 1 treatment come primarily from SSA administrative records on all T1 and C1 sample members. However, we will administer a follow-up survey to a subsample of Stage 1 treatment and control group members (approximately 36 months after random assignment) to collect more detailed information on employment outcomes than is available from administrative records. The added data concern the wages, occupations, benefits, and hours worked of beneficiaries who return to work. We will select a random sample of 10,000 beneficiaries assigned to the Stage 1 treatment and control groups for the Stage 1 36-month survey (5,000 from T1 and 5,000 from C1). For this data collection, we will over-sample beneficiaries predicted to be likely to find employment.5
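The over-sampling described above can be sketched as a stratified draw with design weights. This is a hypothetical illustration: the field name `predicted_worker`, the stratum allocations, and the weighting scheme are assumptions for exposition, not the evaluation's actual sampling specification.

```python
import random

def draw_followup_sample(frame, n_workers, n_others, seed=42):
    """Stratified draw for a follow-up survey that over-samples the
    stratum predicted likely to work by allocating it more slots than
    its population share. `frame` is a list of dicts with an 'id' and a
    boolean 'predicted_worker' flag (illustrative field names). Each
    sampled case carries a design weight = stratum size / stratum sample
    size, so weighted estimates remain representative of the frame."""
    rng = random.Random(seed)
    workers = [p for p in frame if p["predicted_worker"]]
    others = [p for p in frame if not p["predicted_worker"]]
    sample = []
    for stratum, n in ((workers, n_workers), (others, n_others)):
        weight = len(stratum) / n  # inverse of the selection probability
        for p in rng.sample(stratum, n):
            sample.append({**p, "design_weight": weight})
    return sample

# Toy frame: 30 predicted workers among 100 beneficiaries; drawing 10
# from each stratum over-samples predicted workers (10/30 vs. 10/70).
frame = [{"id": i, "predicted_worker": i < 30} for i in range(100)]
sample = draw_followup_sample(frame, n_workers=10, n_others=10)
```

Carrying the inverse-probability design weight on each case is what lets later impact estimates correct for the deliberate over-representation of likely workers.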


Researchers enrolled 12,954 SSDI-only beneficiaries as volunteers for Stage 2 of the demonstration. At the end of the intake session, staff randomly assigned these volunteers to one of the two Stage 2 treatment groups or a control group. All of these participants completed the baseline and interim surveys, and will complete the Stage 2 36-month surveys. Therefore, we require no sampling for these surveys.


B.2.2 Estimation Procedures

As described in Section A.16 above, the data we collect for the BOND evaluation will allow researchers to estimate impacts of the demonstration on a wide range of outcomes in several behavioral domains. With properly designed and implemented random assignment, treatment-control comparisons of raw means provide unbiased estimates of impact. Use of regression analysis to control for baseline characteristics that affect the outcome improves the precision of the estimates while preserving their unbiased character. The estimates of precision presented in the next section assume such regression adjustments, with precision gains based on those obtained in the earlier SSA Project NetWork evaluation.
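The precision gain from regression adjustment can be illustrated with a small synthetic simulation (the data, effect sizes, and covariate structure below are invented for illustration and are not BOND data): the raw treatment-control contrast and the covariate-adjusted contrast estimate the same impact, but the adjusted estimate has a smaller standard error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic experiment: true treatment impact is $100, and the outcome
# depends strongly on a baseline covariate x.
n = 5000
t = rng.integers(0, 2, n)          # random assignment indicator (0/1)
x = rng.normal(0, 1, n)            # baseline covariate
y = 1000 + 100 * t + 400 * x + rng.normal(0, 200, n)

def ols(y, X):
    """OLS coefficients and conventional standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * XtX_inv))
    return beta, se

# Raw treatment-control contrast (difference in means)
b0, se0 = ols(y, np.column_stack([np.ones(n), t]))

# Regression-adjusted contrast: same estimand, smaller standard error,
# because the covariate absorbs much of the outcome variance.
b1, se1 = ols(y, np.column_stack([np.ones(n), t, x]))
```

Both `b0[1]` and `b1[1]` are unbiased for the $100 impact; `se1[1]` is markedly smaller than `se0[1]`, which is the precision gain the text attributes to regression adjustment.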


Exhibits B3-B5 provide sample table shells for presenting the impact results. Exhibit B3 shows the simplest contrast, for the Stage 1 random assignment of eligible beneficiaries. Based on the comparison of the T1 group (which receives the offset only) with the C1 group, this table presents the impact estimates for three key outcomes: current monthly SSDI benefit, cumulative benefits since random assignment, and the proportion of the group currently on the SSDI rolls. For each of the outcomes, the table shows estimated impacts for all members of the groups and for subgroups defined based on whether or not the sample member’s primary disability is a mental health condition. The first data column shows the mean values for the controls. The next column gives the intent-to-treat effect estimate, which is the difference in mean value for T1 in contrast to C1. We provide standard errors and statistical significance level markings (below and to the right of the estimates, respectively) to make it clear whether the impact is likely to be due to the program or simply random variation. We will compute these standard errors by adjusting for the weighted and clustered nature of the data. The last column gives the treatment-on-treated effect estimates, where applicable.
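Where a treatment-on-treated (TOT) column applies, one standard way to derive it from the intent-to-treat (ITT) estimate is the Bloom no-show adjustment, which divides the ITT estimate (and its standard error) by the treatment take-up rate. The figures below are purely illustrative; the source does not specify which estimator the evaluation uses.

```python
def bloom_tot(itt_estimate, itt_se, takeup_rate):
    """Treatment-on-treated effect under the Bloom no-show adjustment:
    scale the intent-to-treat estimate and its standard error by the
    inverse of the share of the treatment group actually treated."""
    if not 0 < takeup_rate <= 1:
        raise ValueError("take-up rate must be in (0, 1]")
    return itt_estimate / takeup_rate, itt_se / takeup_rate

# Hypothetical numbers: a $50 ITT impact with 25% take-up implies a
# $200 impact on those actually subject to the offset.
tot, tot_se = bloom_tot(50.0, 10.0, 0.25)
print(tot)  # 200.0
```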


Exhibit B3. Impacts on Key BOND Outcomes from Administrative Data, Stage 1


ELIGIBLE BENEFICIARIES | Control Mean (C1) | Offset Only (T1) vs. Control (C1): ITTa | TOTa

Current Monthly Benefit (n= )
All | $nnn | $nnn (nnn) | $nnn (nnn)
Primary disabling condition is a mental health condition | $nnn | $nnn (nnn) | $nnn (nnn)
Primary disabling condition is not a mental health condition | $nnn | $nnn (nnn) | $nnn (nnn)

Cumulative Benefit Payments Since Random Assignment (n= )
All | $nnn | $nnn (nnn) | $nnn (nnn)
Primary disabling condition is a mental health condition | $nnn | $nnn (nnn) | $nnn (nnn)
Primary disabling condition is not a mental health condition | $nnn | $nnn (nnn) | $nnn (nnn)

Current Benefit Status (n= )
All | 0.nn | 0.nn (nnn) | 0.nn (nnn)
Primary disabling condition is a mental health condition | 0.nn | 0.nn (nnn) | 0.nn (nnn)
Primary disabling condition is not a mental health condition | 0.nn | 0.nn (nnn) | 0.nn (nnn)

* = p<.05 on t-test. We show robust standard errors in parentheses.
Sources:
Sample:
Notes:
a) ITT = Intent-to-Treat; TOT = Treatment-on-Treated.
b) Control means and impact estimates are regression-adjusted.



As the demonstration continues and we conduct Stage 2 random assignment, we begin to accumulate data on the Stage 2 volunteers. With volunteers randomly assigned to three groups that receive, respectively, the $1 for $2 benefit offset, the offset plus enhanced work incentives counseling, or current SSDI program provisions (the control group), we will be able to contrast outcomes between pairs of these groups to measure impacts, as shown in Exhibit B4. The exhibit shows impact estimates for the same three outcome measures as Exhibit B3, but it differs from the prior shell in making multiple impact comparisons.


Exhibit B4. Impacts on Key BOND Outcomes from Administrative Data, Stage 2


STAGE 2 VOLUNTEERS | Control Mean (C2) | T21 vs. C2: ITTa | TOTa | T22 vs. C2: ITTa | TOTa
(T21 = Offset Only; T22 = Offset + Enhanced Counseling; C2 = Control)

Current Monthly Benefit (n= )
All | $nnn | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn)
Employed at baseline | $nnn | $nnn (nnn) | $nnn (nnn) | -$nnn (nnn) | $nnn (nnn)
Not employed at baseline | $nnn | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn)

Cumulative Benefit Payments Since Random Assignment (n= )
All | $nnn | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn)
Employed at baseline | $nnn | $nnn (nnn) | $nnn (nnn) | -$nnn (nnn) | $nnn (nnn)
Not employed at baseline | $nnn | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn)

Current Beneficiary Status (n= )
All | 0.nn | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn)
Employed at baseline | 0.nn | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn)
Not employed at baseline | 0.nn | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn)

* = p<.05 on t-test. We show robust standard errors in parentheses.
Sources:
Sample:
Notes:
a) ITT = Intent-to-Treat; TOT = Treatment-on-Treated.
b) Control means and impact estimates are regression-adjusted.



As the demonstration continues and we collect data in the interim and follow-up surveys, we can test other outcome measures for impacts. Exhibit B5 illustrates the presentation of impact estimates for selected key outcomes measured using survey data. The results compare the Stage 2 treatment groups to the control group and to each other, and we would generate them from pooled impact regressions. Again, we show the control mean in the first data column and then the ITT and TOT impact estimates for the different contrasts: Offset Only vs. control, Offset plus EWIC vs. control, and Offset Only vs. Offset plus EWIC.


Exhibit B5. Impacts on Employment Outcomes from Survey Data, Stage 2


STAGE 2 VOLUNTEERS | Control Mean (C2) | T21 vs. C2: ITTa | TOTa | T22 vs. C2: ITTa | TOTa | T21 vs. T22: ITTa | TOTa
(T21 = Offset Only; T22 = Offset + Enhanced Counseling; C2 = Control)

Total Earnings Last Month (n= )
All | $nnn | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn)
Employed at baseline | $nnn | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn)
Not employed at baseline | $nnn | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn)

Total Income Last Month (n= )
All | $nnn | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn)
Employed at baseline | $nnn | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn)
Not employed at baseline | $nnn | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn) | $nnn (nnn)

Current Employment Status (n= )
All | 0.nn | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn)
Employed at baseline | 0.nn | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn)
Not employed at baseline | 0.nn | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn) | 0.nn (nnn)

* = p<.05 on t-test. We show robust standard errors in parentheses.
Sources:
Sample:
Notes:
a) ITT = Intent-to-Treat; TOT = Treatment-on-Treated.
b) Control means and impact estimates are regression-adjusted.



We will likely also use other means of presenting the impact findings in the evaluation reports to SSA, such as simplified impact summaries (covering a large number of contrasts but just showing statistical significance) and graphics. However, we will provide the details of all impact findings through table shells like the ones discussed, whether in text or in report appendices.


B.2.3 Degree of Accuracy Required

It is important to consider the precision with which the evaluation will be able to measure the impacts of BOND, given the sample sizes available. The standard way to assess the precision of estimates derived from an experimental design is to examine the minimum detectable effects (MDEs) obtainable under that design. The minimum detectable effect is the smallest true program impact that we have a good chance of identifying with data from a given sample. The smaller the MDE, the more precise the estimate. Specifically, we define the MDE as the smallest true impact that has an 80 percent chance of being statistically significant, using a two-tailed hypothesis test at the .05 level.
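Under the stated parameters (80 percent power, two-tailed .05 test), the MDE works out to roughly 2.8 times the standard error of the impact estimate. The sketch below applies that multiplier to a simple difference in means; it ignores the regression-adjustment and clustering corrections described elsewhere in this section, and the outcome standard deviation is a placeholder, so the result is illustrative rather than a reproduction of Exhibit B6.

```python
from math import sqrt
from statistics import NormalDist

def mde(sigma, n_t, n_c, alpha=0.05, power=0.80):
    """Minimum detectable effect for a treatment-control difference in
    means: (z_{1-alpha/2} + z_{power}) times the standard error of the
    estimated impact. With alpha=.05 and power=.80 the multiplier is
    about 1.96 + 0.84 = 2.80."""
    z = NormalDist()
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    se = sigma * sqrt(1 / n_t + 1 / n_c)
    return multiplier * se

# Stage 1 sample sizes from the document (80,000 T1 vs. 902,000 C1);
# sigma = $10,000 is a placeholder outcome standard deviation.
m = mde(sigma=10000, n_t=80000, n_c=902000)
print(round(m))
```

Note how the very large control group makes the `1/n_c` term nearly negligible, so precision is driven mostly by the 80,000-case treatment group.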


Exhibit B6 shows MDEs for program impacts, under the following assumptions about sample sizes:


  • We will implement the demonstration in 10 SSA area offices.

  • There are 1,200,000 beneficiaries in the 10 offices combined.

  • At Stage 1 random assignment, we randomly assigned 982,000 beneficiaries to T1 and C1, at a random assignment ratio of approximately 1:11 (80,000 to T1 and 902,000 to C1).

  • We solicited the remaining SSDI-only beneficiaries in the demonstration sites, 218,000 strong, to volunteer for Stage 2 random assignment.6 We also solicited an additional 22,000 newly-enrolled beneficiaries for Stage 2. Of those we invited to participate, 5.4 percent volunteered.7

  • We assign volunteers to the four treatment and control groups involved at Stage 2 random assignment.


Exhibit B6. Minimum Detectable Effects and Sample Sizesa


Treatment Group | Treatment Group Sample Size (all sites) | Impact Relative to: | Control Group Sample Size (all sites) | ITT MDE, Annual Earnings, Mos. 1-24 | ITT MDE, Annual SSDI Benefits, Mos. 1-24

Eligible beneficiaries:
Offset only (T1) | 80,000 | Current law (C1) | 902,000 | $317 | $88

Stage 2 volunteers:
Offset only (T21) | 4,935 | Current law (C2) | 4,930 | $551 | $109
Offset with Enhanced Work Incentives Counseling (T22) | 3,089 | Current law (C2) | 4,930 | $609 | $117

a Minimum detectable effects based on 80 percent power, .05 significance level (2-tailed test).



These assumptions imply the following total sample sizes, across all sites:


  • 80,000 eligible beneficiaries randomly assigned to the 50 percent offset in Stage 1;

  • 902,000 eligible beneficiaries randomly assigned to the current law control group in Stage 1;

  • 240,000 eligible SSDI-only beneficiaries solicited for participation in Stage 2; and

  • 12,954 volunteers for Stage 2 assigned to three groups: 4,935 receive the benefit offset alone (T21), 3,089 receive the benefit offset plus enhanced work incentives counseling (T22), and 4,930 receive neither, forming the Stage 2 control group (C2).


Because one of the three Stage 2 conditions is a current law control group (C2), 8,024 volunteers (12,954 minus 4,930) will receive some demonstration treatment. All 8,024 will receive the 50 percent offset (T21 and T22), and 3,089 of them will, in addition, receive enhanced work incentives counseling (T22).


The MDE estimates shown in Exhibit B6 are for national estimates, based on 80 percent power and a two-tailed test of statistical significance at the .05 level. They take into account the effect of the likely cross-site variation in impacts on the precision of the national estimates. MDEs are shown for the two outcomes most central to the demonstration’s objectives: earnings and SSDI benefits. The specific measures analyzed here are annual earnings and SSDI benefits over the first 24 months after random assignment.


As shown, for the very large sample of eligible beneficiaries who receive the benefit offset only at Stage 1, we will be able to detect impacts on annual earnings as small as $317, or about 10 percent of the control mean, and impacts on SSDI benefits of $88, or about 1 percent of the control mean. This level of precision, of course, applies to the whole treatment group, not just to the subset of beneficiaries responding to the treatment. We expect a large majority of the Stage 1 treatment group will be unaffected and continue to have zero earnings or earnings below SGA, while a relatively small proportion of beneficiaries will increase their earnings to a level high enough to take advantage of the benefit offset (i.e., above SGA). For the effects to be detectable, the effect on this group must be proportionately larger. For example, if only 10 percent of those exposed to the offset respond by expanding their earnings, their increase in earnings must be 10 times as large as the MDE shown here to be detectable, because a $10 impact on this subset would raise the overall treatment group mean by only $1.8
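The dilution arithmetic in this example can be written out directly: an impact confined to a responding subset is scaled down by the responding share when averaged over the whole treatment group.

```python
def diluted_impact(impact_on_responders, responder_share):
    """Average impact over the whole treatment group when only a share
    of the group responds to the offset."""
    return impact_on_responders * responder_share

# The text's example: a $10 impact confined to 10 percent of the group
# moves the overall treatment group mean by only $1.
print(diluted_impact(10.0, 0.10))  # 1.0
```

Equivalently, to register an overall MDE of $317, a 10-percent responding subset would need an average earnings gain of $3,170.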


Among the volunteers for the Stage 2 treatments, we will have the greatest precision for the benefit offset, taken by itself, because this is the largest treatment group (T21). This will allow us to be confident of detecting impacts of $551 on annual earnings (21 percent of the control mean) and $109 on annual SSDI benefits (about 1 percent of the control mean).


For the contrast of the other treatments (T22) with the current law control group, we will be able to detect with confidence impacts of $609 on annual earnings, or about 23 percent of the control mean. As always, we will have much better precision for estimating impacts on SSDI benefits—we will be confident of detecting impacts of $117 on annual benefits, or about 1 percent of the control mean. The greater precision for impacts on benefits reflects the fact that the variance of benefits, controlling for baseline characteristics, is much lower than the corresponding variance of earnings, and the expected mean of benefits is higher.9


It is important to note that the MDEs presented here are for the estimated impact of the “intent to treat”—i.e., comparisons of outcomes for all individuals assigned to a particular treatment or policy parameter with those of the entire control group, whether or not all those assigned to treatment actually received it. To the extent that there is nonparticipation in the demonstration treatments, the impact of the treatment on those who do participate must be proportionally larger for these “intent to treat” effects to be realized.


Finally, it is important to note that in deriving these MDEs we have used two-tailed statistical tests. For some estimates, one-tailed tests would be more appropriate. One-tailed tests would imply somewhat smaller MDEs.


B.2.4 Procedures with Special Populations

We are targeting BOND to SSDI beneficiaries, individuals who have disabling conditions. It is important that volunteers in Stage 2 understand the requirements of the demonstration and the changes to their benefit structure. Therefore, prior to reviewing the participation agreement, BOND Specialists administer a short, three-question cognitive screener to potential volunteers. Beneficiaries deemed cognitively able to provide informed consent are also able to respond to the baseline survey. We terminate the intake session for potential volunteers who are unable to respond to all three screening questions; those who pass the cognitive screener continue with the intake session.


Because the SSDI population is by definition disabled, we established data collection procedures to accommodate various types of disabilities. If BOND participants do not meet the cognitive requirements, we identify proxy respondents so as not to exclude these sample members from the data collection effort. The same cognitive screeners we use for the demonstration participants apply to proxy respondents. In other instances, the participant may be cognitively capable of responding to the survey but may need assistive technology to do so. Examples of assistive technology include relay services and Braille show cards (for in-person interviews only). We may also encounter respondents whose first language is Spanish; we translated each of the survey instruments and modules into Spanish, for administration in the language most comfortable for the respondent. All preliminary contacting materials and consent forms have a Spanish version.


In addition to Spanish translation, we budgeted for ASL interpreters for respondents who require ASL translation.10 We have not budgeted to translate the instrument into other languages, as we expect most respondents will be able to respond in English, Spanish, or ASL. We anticipate that the number of respondents requiring another language is small; if needed, we may be able to work with professional interpreters in these cases. In such situations, a professional interpreter relays the questions and responses between the interviewer and the respondent via telephone. We have successfully used this approach in other studies. In addition to English and Spanish versions, the contact and advance letters both provide a TTY number for the hearing-impaired. We note any calls requesting materials in other languages, so that we can schedule appointments with those respondents with an interpreter included.


B.3 Methods to Maximize Response Rates

All Stage 2 beneficiaries completed the baseline survey during their enrollment into the study. However, the target response rate for the Stage 2 BOND interim and follow-up surveys and for the Stage 1 36-month survey is 80 percent. To achieve this response rate, the Abt team developed a comprehensive plan to minimize sample attrition and maximize response rates. This plan involves preliminary tracking and locating of all sample members, incentive payments, and sample control during the data collection period.


In addition, the target response rate for the Stage 1 First Contact Letter survey was 80%. We collected contact information from the SSA administrative database. We expected this information to be 95% accurate; when it was not, the call center used other databases to update it. We fielded the survey on a sample of the beneficiaries who received letters in the final mailing. Based on the results of other SSA surveys, this approach maximized response rates and ensured the most accurate information.


Participant Tracking and Locating

As described in Section A.3.2, the Abt Associates team developed a comprehensive participant tracking system to maximize response to the BOND 12-month interim and 36-month surveys. This multi-stage locating effort blends active locating efforts (which involve direct participant contact) with passive locating efforts (which rely on various consumer database searches). At each point of contact with a participant, interviewers collect updated name, address, telephone, and email information. They also collect contact data for up to two people who do not live with the participant but likely know how to reach the participant. Interviewers use secondary contact data only if our primary contact information proves invalid (for example, a disconnected phone or a letter returned as undeliverable).


In addition to the direct contact with participants, we conduct several database searches to obtain additional contact information. Passive tracking resources are comparatively inexpensive and generally available, although some sources require special arrangements for access.

Use of Incentive Payments

As described in Section A.9, the use of incentive payments for the BOND surveys helps ensure a high response rate, which is necessary for unbiased impact estimates. Exhibit B7 summarizes the proposed incentive payment structure and tracking strategy for the Stage 2 and Stage 1 samples.


Exhibit B7. Methods to Maximize Response Rates

Survey Sample | Proposed Respondent Incentive

Stage Two:
Baseline Survey Respondents | $40
Baseline Monetary Aid for Child Care or Transportation Assistance (75% of baseline respondents) | $10
Interim Survey | $25
36-month Survey | $45
Inter-wave tracking mailing | $5

Stage Two tracking strategy:
  1. Establishing rapport with respondents through the rigorous outreach and intake efforts conducted under Task 12;
  2. Passive data (NCOA and phone updates);
  3. SSA administrative data updates;
  4. NUMIDENT searches every 6 months;
  5. Inter-wave tracking mailings at 6-month intervals, beginning in Month 19;
  6. Advance letter mailings one month before the 12- and 36-month surveys.

Stage One:
36-month Survey | $25

Stage One tracking strategy:
  1. SSA administrative data updates;
  2. NUMIDENT searches every 6 months;
  3. Advance letter mailings one month before the 12- and 36-month surveys.


In addition to the surveys, Stage 2 participants periodically receive letters requesting that they update their contact information. Participants who return updated contact information to Abt receive $5 in appreciation for their time.


B.3.1 Sample Control During the Data Collection Period

During the data collection period, the contractor minimizes non-response levels and the risk of non-response bias in the following ways:


  • The Contractor recruits interviewers skilled at working with this population. Interviewers receive additional training in working with special populations and assistive technologies.

  • The Contractor uses trained interviewers who are skilled at maintaining rapport with respondents, to minimize the number of break-offs and the incidence of item non-response.

  • Respondents have a choice of time for the data collection.

  • We will take additional field tracking and locating steps, as needed, when we do not find sample members at the phone numbers or addresses previously collected.

  • The use of the Abt Associates Field Management System and Mathematica’s Survey Management System permits interactive sample management and electronic searches of historical tracking and locating data.

  • Both contractors require their survey director and field supervisors to manage the sample to ensure that we achieve (or approach) the target response rates evenly for treatment and control groups in each BOND site.


By these methods, the Contractor anticipates being able to achieve the targeted 80 percent response rate for the interim and follow-up surveys.


B.4 Tests of Procedures

Abt Associates conducted a pretest of the baseline survey instrument with a sample of six SSDI beneficiaries. The pretest allowed the Contractor to test the appropriateness of the language level and word usage in the questionnaire and to confirm the estimates of interview length. Experienced interviewers conducted the pretests. Based on the results, we prepared a pretest report that described the problems encountered and recommended solutions: shortening the instrument to conform to the planned length, simplifying the language to ensure that respondents understand the questions, and modifying question order and skip patterns so that items flow smoothly and logically for respondents.


We did not pretest the other three instruments (the interim survey and the Stage 1 and Stage 2 36-month follow-up surveys) because these surveys reference the BOND program; we had not yet formally introduced the demonstration to the public before enrollment began, so respondents would not have known of it. Aside from the questions about the demonstration, the Stage 1 and Stage 2 36-month surveys closely mirror the baseline survey, and the design of the baseline and follow-up surveys took into consideration the findings of the baseline pretest.


We also did not pretest the Stage 1 Contact Letter survey. The SSA communications office developed it using survey questions already validated in other SSA surveys.


B.5 Statistical Agency Contact for Statistical Information

The individuals shown in Exhibit B4 assisted SSA in the statistical design of the BOND evaluation.


Exhibit B4. Individuals Consulted on the Study Design

  Name                  Telephone Number   Role in Study
  Dr. Howard Rolston    301-634-1820       Principal Investigator
  Dr. Larry Orr         301-467-1234       Project Quality Advisor, BOND Implementation
  Dr. Jacob Klerman     617-520-2613       Project Quality Advisor, BOND Evaluation
  Dr. Stephen Bell      301-634-1721       Co-Director, BOND Evaluation
  Dr. Dave Stapleton    202-484-4224       Co-Director, BOND Evaluation
  Dr. Stephen Kennedy   617-349-2396       Technical Reviewer


Direct inquiries regarding the statistical aspects of the study's planned analysis to:


Dr. Howard Rolston, Principal Investigator, Telephone: (301) 634-1820

Dr. Stephen Bell, Co-Director, Evaluation, Telephone: (301) 634-1821

Dr. Dave Stapleton, Co-Director, Evaluation, Telephone: (202) 484-4224

1 We will exclude beneficiaries who will reach retirement age before the end of the demonstration.

2 The demonstration will exclude two much smaller groups: beneficiaries under age 20, because most receive benefits under a parent's entitlement and are classified as students, and beneficiaries who have participated in another SSA demonstration project. The latter exclusion avoids confounding the impacts of the BOND treatments with those of the other demonstrations.

3 Intake workers will inform individuals that they may receive a $1 for $2 benefit offset only, a $1 for $2 benefit offset with enhanced work incentives counseling, or assignment to the control group.

4 Volunteers from the SP group have two random assignment dates and two group assignments, due to the staged intake process. The Stage 2 date and assignment governs BOND treatment for the volunteers.

5 We developed a predictive model of employment using pre-demonstration data. The model identifies the background characteristics of beneficiaries (from among characteristics measured in SSA administrative data) that are most strongly associated with later employment (as measured by annual earnings records at SSA). We will apply this model to the corresponding background characteristics of demonstration sample members to identify the “most likely to work” portion of the potential survey sample, which we will oversample.
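As a rough illustration of how such a model might be applied at the scoring stage, the sketch below ranks beneficiaries with a hypothetical logistic model and keeps the top-scoring fraction for oversampling. The characteristics, coefficients, and cutoff fraction are invented for illustration; they are not the actual BOND model or its predictors.

```python
import math

# Hypothetical coefficients for illustration only; the actual BOND model
# and its administrative-data predictors are not specified here.
COEFFICIENTS = {"intercept": -2.0, "recent_earnings": 0.8, "age_under_40": 0.5}

def employment_score(beneficiary):
    """Predicted probability of later employment from a logistic model."""
    z = (COEFFICIENTS["intercept"]
         + COEFFICIENTS["recent_earnings"] * beneficiary["recent_earnings"]
         + COEFFICIENTS["age_under_40"] * beneficiary["age_under_40"])
    return 1.0 / (1.0 + math.exp(-z))

def most_likely_to_work(beneficiaries, top_fraction=0.25):
    """Rank beneficiaries by predicted employment and return the
    top-scoring fraction, i.e., the group to be oversampled."""
    ranked = sorted(beneficiaries, key=employment_score, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return ranked[:cutoff]
```

In practice the fitted coefficients would come from a regression on pre-demonstration records, and the selected group would be oversampled rather than sampled exclusively.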

6 Solicitation proceeds in waves, with a random sample of beneficiaries solicited in each wave, until a sufficient number of beneficiaries have volunteered to fill the Stage 2 experimental design cells (i.e., a total of 12,600 is reached).

7 The actual volunteer rate of 5.4 percent was higher than the anticipated rate of 4 percent. This allowed us to reduce the number of beneficiaries who we solicited for Stage 2 from an anticipated 315,000 to 240,000.

8 Although the beneficiaries affected by the offset are not an identifiable group, we can still estimate the impact for those who experience increases in earnings above SGA and are, therefore, exposed to the offset. The procedure, proposed by Bloom (1984), is valid if the impact on treatment group members who do not use the benefit offset is zero. We divide the mean impact on the entire treatment group by the proportion that ever used the benefit offset (i.e., whose benefits were partially reduced due to earnings above SGA in at least one month) to obtain the mean impact on offset users. We obtain the standard error by dividing the standard error for the whole group by the same proportion.
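The Bloom (1984) adjustment described above is a simple rescaling; a minimal sketch, with invented numbers rather than BOND results, is:

```python
def bloom_adjustment(itt_impact, itt_se, offset_use_rate):
    """Rescale a full-treatment-group (intent-to-treat) impact estimate and
    its standard error to the subgroup that ever used the benefit offset,
    under the Bloom (1984) assumption of zero impact on non-users."""
    if not 0.0 < offset_use_rate <= 1.0:
        raise ValueError("offset use rate must be in (0, 1]")
    return itt_impact / offset_use_rate, itt_se / offset_use_rate

# Illustrative numbers only: a $150 mean impact on the whole treatment
# group with a 10 percent offset-use rate implies a $1,500 mean impact
# on offset users, with a proportionally larger standard error.
impact, se = bloom_adjustment(150.0, 60.0, 0.10)  # → 1500.0, 600.0
```

Note that the adjustment changes the point estimate and standard error by the same factor, so it does not change the statistical significance of the impact.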

9 We derived the estimated within-site variance of the outcome and cross-site variance of the impacts used here from Project NetWork data. Project NetWork was an SSA demonstration testing return-to-work services for SSDI and SSI beneficiaries in the 1990s, evaluated using a random assignment design (see Kornfeld, Robert and Kalman Rupp. 2000. “The Net Effects of the Project NetWork Return-to-Work Case Management Experiment on Participant Earnings, Benefit Receipt, and Other Outcomes.” Social Security Bulletin 63(1): 12-33). Special analyses of the Project NetWork data provided outcome variances and impact estimates for eight sites. For both earnings and SSDI benefits, baseline values of the outcome, as well as other baseline characteristics of the beneficiary, are included as covariates in the impact regression. This set of covariates explains 75-80 percent of the variance of SSDI benefits, largely because most sample members’ benefits change very little over time. Baseline earnings have much less predictive power, because earnings are much less stable over time. Mean annual earnings and SSDI benefits for this sample, in 2006 dollars, were $2,675 and $21,063, respectively.

10 We estimated 3% of the BOND population will require ASL translation.

