





SUPPORTING STATEMENT B FOR INFORMATION COLLECTION FOR THE

ASSETS FOR INDEPENDENCE (AFI)

PROGRAM EVALUATION


OMB No. 0970-0414









Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services

370 L'Enfant Promenade, SW

Washington, DC 20447




August 2015












B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS


This document presents Part B of the Supporting Statement for follow-up data collection activities for the Assets for Independence (AFI) Program Evaluation (hereafter, AFI Evaluation). This submission seeks OMB approval for one data collection instrument: a survey of the enrolled study sample 36 months following baseline.


OMB previously approved data collection plans for the baseline survey, 12-Month Follow-Up Survey, and Implementation Interview guide in a prior package (August 2012). All previously approved information collections will be complete prior to review of this request.


1. Respondent Universe and Sampling Methods



Site Description. Two AFI project sites were selected to participate in the evaluation: one in Albuquerque (NM) and one in Los Angeles (CA). The Albuquerque site is operated by Central New Mexico (CNM) Community College, a subgrantee of Prosperity Works (PW). The Los Angeles site is operated by RISE Financial Pathways, an AFI grantee formerly known as the Community Financial Resource Center. These sites were selected based on the following criteria:


  • Received their first grant in 2006 or earlier (meaning that they have completed a full five-year grant period for at least one grant);

  • Had a grant that was active during FY 2011;

  • Had at least 600 IDAs opened across all of their grants; and

  • Showed some indication of potential capacity to participate in the evaluation through meeting one of the following three criteria:

(1) had a new grant awarded in FY 2011 of at least $300,000;

(2) had a grant expiring in FY 2011-FY 2013 of at least $300,000 (possibly indicating the capacity to apply for a new grant of that size); or

(3) had at least one grant under which 400 or more IDAs were opened.


The final evaluation sites were selected based on the criteria noted above, as well as their capacity to recruit a sufficient sample and their willingness to participate in the random assignment experiment.



Respondent Universe. The respondent universe for the Assets for Independence (AFI) Program Evaluation consists of persons aged 18 years or older who reside within the selected site areas and meet the site-specific eligibility criteria to take part in the evaluation. Study participants are low-income workers applying to receive Individual Development Accounts (IDAs) from AFI grantee institutions. IDAs are savings accounts that can be used to fund small-business development, higher education, or the purchase of a first home.


Study Eligibility. The main study eligibility criteria are:


  • Annual household income must be below 200 percent of the federal poverty level ($21,780 for an individual, $29,420 for a couple) or below the eligibility level for the federal Earned Income Tax Credit; and

  • Household assets must be less than $10,000, excluding a residence and one vehicle.





2. Procedures for Collection of Information


Sample Design

The sample design called for the evaluation to be conducted in two sites. From January 2013 through July 2014, each site randomly assigned approximately 400 AFI-eligible cases to one of two groups: a control group and a treatment group receiving conventional AFI services.
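
The sketch below (Python) illustrates the balanced 1:1 assignment described above. It is a minimal illustration only; the case identifiers and seed are hypothetical, and the evaluation's actual assignment procedures are those described in the previously approved package.

```python
import random

def assign_sample(case_ids, seed=2013):
    """Randomly assign eligible cases 1:1 to treatment and control.

    Minimal illustration of balanced random assignment; not the
    evaluation's production procedure.
    """
    rng = random.Random(seed)      # fixed seed makes the split reproducible
    ids = list(case_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

groups = assign_sample(range(400))   # ~400 AFI-eligible cases per site
print(len(groups["treatment"]), len(groups["control"]))   # 200 200
```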


Estimation Procedures

In voluntary programs such as AFI, random assignment takes on special importance for evaluation because under normal, nonexperimental settings people who enter an AFI project may differ in unobservable ways from AFI nonparticipants. Multivariate statistical techniques can account for observable differences (and correlated unobservables), but have difficulty dealing with such unobservables as a client’s work ethic, propensity to save, or self-motivation. If participants are similar to nonparticipants in observed characteristics but are more motivated and thus apply for AFI, the better outcomes for AFI participants may result from the more favorable unobserved characteristics of participants, not from the project itself. With random assignment, the sample of individuals allowed to participate is likely very similar on both observable and unobservable characteristics to the control group sample not allowed to participate.


Random assignment is an effective tool for estimating the effects of alternative AFI project features as well. Although AFI grantees vary naturally in their match rates, required hours of financial education, and other program components, it is risky to use such variation as the basis for natural experiments in program design. One reason is that program variations may reflect other fundamental differences in program settings, such as differences in the financial capacities of the administering program agencies. In addition, the client subpopulations attracted to these differing program models are likely to vary in important ways, confounding any attempt to isolate the effects of the program features from the underlying heterogeneity of the clientele.


Degree of Accuracy Required

A key issue is whether the design is strong enough to detect the expected effects. Per group sample sizes must be sufficient to detect reasonable differences between the treatment condition (A) and the control condition (B), while accounting for loss of sample due to nonparticipation, dropping out, and survey nonresponse and attrition. The statistical power of a study design is the probability of detecting a real difference between two groups on outcomes of interest. Sample size is the primary determinant of the power of the experiment.


Equal allocation of the sample across groups maximizes the efficiency of the sample; unequal allocation requires larger total sample sizes to achieve the same level of power. Some experiments allocate less sample to control groups to gain the participation of sites that might object to a large percentage of applicants being denied the service. For this evaluation, we plan to utilize equal allocation of the sample at each site between the treatment and control groups.
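
The efficiency claim follows directly from the variance of a two-group difference in means. Assuming a common outcome variance \(\sigma^2\) and a fixed total sample \(N = n_A + n_B\),

\[
\operatorname{Var}\left(\bar{Y}_A - \bar{Y}_B\right) \;=\; \sigma^2 \left(\frac{1}{n_A} + \frac{1}{n_B}\right),
\]

which is minimized when \(n_A = n_B = N/2\). Any unequal split inflates this variance and therefore requires a larger total sample to achieve the same power.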


Consider a two-group design in two sites, with total samples of 600 and 500. For survey-measured participant outcomes, assuming a follow-up survey response rate of 85 percent, the expected per-group sample sizes will be 255 (0.85 x 300) for the first site and 213 (0.85 x 250) for the second site. For pooled analysis of survey outcomes across both sites, the per-group sample will be 468 (255 + 213).
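
The arithmetic behind these figures is straightforward; the short sketch below (Python) reproduces it using the response-rate assumption stated in the text.

```python
RESPONSE_RATE = 0.85                            # assumed follow-up survey response rate

site_totals = {"site_1": 600, "site_2": 500}    # total randomized per site

# Split each site's total equally between treatment and control, then
# apply the response rate; round half up to match the figures in the text.
per_group = {site: int(RESPONSE_RATE * total / 2 + 0.5)
             for site, total in site_totals.items()}

print(per_group)                    # {'site_1': 255, 'site_2': 213}
print(sum(per_group.values()))      # 468 per group, pooled across sites
```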


One common way to measure power is to calculate the minimum detectable effect (MDE) as a fraction of the standard deviation of a given variable in the sample. The MDE is “the smallest effect that, if true, has an X percent chance of producing an impact estimate that is statistically significant at the Y percent level,” where X is the desired statistical power and Y is the desired significance level (Bloom 1995).


Our MDE calculations are based on standard statistical assumptions: a desired power of 80 percent, a significance level (two-sided) of 5 percent, and a control-group mean outcome value of 0.50 for a survey-measured short-term outcome such as the incidence of material hardship.
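
For a binary outcome with control-group mean \(p\), a standard form of this calculation (following Bloom, 1995) is

\[
\mathrm{MDE} \;=\; \left(z_{1-\alpha/2} + z_{1-\beta}\right)\sqrt{p\,(1-p)\left(\frac{1}{n_A} + \frac{1}{n_B}\right)},
\]

where \(z_{1-\alpha/2} \approx 1.96\) for a two-sided 5 percent significance level and \(z_{1-\beta} \approx 0.84\) for 80 percent power. This simplified form is shown for orientation only; the figures reported in Attachment C reflect the study's full set of design assumptions.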


As shown in Attachment C, the minimum detectable effects under balanced two-group designs in each site (equal numbers randomly assigned to groups A and B) are 0.136 in the first site and 0.177 in the second site for the treatment-to-control comparisons. These numbers represent proportional effects of 27 to 35 percent. As also shown in Attachment C, pooled-site estimates provide greater precision, resulting in MDEs of 0.109 (proportionally, 22 percent) for the treatment-to-control comparisons under the balanced two-group designs. The MDEs are conservative to the extent that they do not account for the intended use of multivariate models in estimating treatment effects; the inclusion in these models of explanatory variables measured in the baseline survey is expected to improve the precision of impact estimates.


In the context of this study, pooled estimates are not subject to the degree of cross-site variance that might normally be present, as the two sites (both in southwestern U.S. metropolitan areas) have similar client demographic characteristics.


Data Collection Procedures


All previously approved information collections (baseline survey, 12-month follow-up survey, and implementation interviews) will be complete by the time this package is reviewed. For information about data collection procedures for approved and completed information collections, see the previously approved justification package.


Informed consent: Before administering the baseline survey, site administrators reviewed the informed consent online with participants. The informed consent form provided participants with enough information to decide about participation, including information about the experiment’s purpose, the procedures used, and the benefits and risks of participation. Site administrators acknowledged receiving informed consent in the programmed survey. Attachment E contains a copy of the informed consent language. The 36-month follow-up survey uses the same instrument as the 12-month follow-up, so the multi-year consent obtained at baseline covers this collection. The consent form allowed for the possibility of the evaluation being extended beyond the 12-month follow-up period without requiring any re-consent of the enrolled sample members. Annual re-consent would likely result in unacceptably high sample attrition among control cases, as these cases would have little reason to re-consent. Prior to random assignment, the incentive to provide consent comes from one’s understanding that the only way to enter the AFI project is via random assignment, accepting a 50 percent chance of becoming a control case. Once assigned to the control group, individuals have no further incentive to accept this restriction on their access to IDA services. If the evaluation is extended, the proposed additional information collection will be submitted to OMB.



36-Month Follow-up Survey:

Tracing. Sample mobility and panel attrition are familiar challenges in longitudinal studies. To rigorously determine the effects of AFI participation, study participants will be tracked so they can be interviewed for the 36-month follow-up survey. We will implement panel maintenance activities prior to and during the follow-up survey, focused on keeping an up-to-date database of sample members’ contact information to minimize attrition and nonresponse due to incomplete or incorrect contact information. One round of sample maintenance, as well as locating activities, will be conducted during the administration of the follow-up survey. The combination of the two will produce accurate contact information on sample members at reasonable cost. Attachment F contains panel maintenance materials.


A round of sample maintenance will be conducted in fall 2015, prior to the launch of the follow-up survey. Sample members’ names will be submitted to batch tracing; they will also be sent a letter asking them to update their information by mail. We expect approximately 15 percent of the sample to return update cards based on results of previous efforts.


Any sample members not located for the 36-month telephone interview will be traced by interactive tracing experts who have access to a variety of databases to locate and verify current addresses and telephone numbers. Interactive tracing specialists contact friends and relatives, use crisscross directories to identify neighbors, contact directory assistance for possible updates, and use a case management system to keep a history of calls and contacts with each subject.


If interactive tracing does not yield good contact information, trained field representatives will attempt to visit the last known address. Trained, experienced staff will investigate physical locations to verify or disprove the subject’s reported location. Field tracers are trained to establish trust and elicit information from a subject's relatives, neighbors, schools, business associates, and government agencies. If the sample member is no longer at the address, the field tracer will attempt to locate the individual or someone who knows the sample member, following procedures proven to be effective in other studies. For example, at apartment buildings, the field representative will try to get information from the manager; at abandoned residences, the field representative will visit neighbors. When found, sample members will be asked to call and complete the survey, provide a telephone number, or schedule an interview. Sample members will receive a $5 token of appreciation as a part of the tracing efforts. Attachment G contains field locating materials.


Web-based site management system: As described in the previously approved justification, a site management system provides on-demand access to information and facilitates reporting and communication among the project management team. The system is a central hub that designated site administrators and project staff members can access through an Internet browser. The system also allows site administrators and project enrollees from the grantee organizations to complete the baseline and follow-up questionnaires. Screen shots of the system and selected instrument questions can be found in Attachment D.


Web site security: As described in the previously approved justification, because the information contained in this web site is sensitive, security is important. The web site’s membership comprises the RTI, Urban Institute, MEF Associates, and ACF project teams, as well as site administrators designated by ACF. The web site uses Secure Sockets Layer (SSL) to create a secure, gated community in which access is restricted: only authorized users are allowed entry into specific areas and granted specific functional privileges.


Lead Letters. As approved previously, the evaluation team used a lead letter prior to administration of the 12-month follow-up survey. A lead letter is also planned prior to the 36-month survey. Using a letter to inform households about a forthcoming telephone call and give them a general description of the survey has been shown to increase survey response rates (DeLeeuw, 2007). The letter: 1) informs sample members of the purpose of the AFI Evaluation; 2) provides useful information regarding the 36-month follow-up survey; 3) includes a toll-free telephone number that respondents can call if they have questions; and 4) includes information regarding the incentive that will be offered to respondents who agree to participate. The content of the letter is the same as for the 12-month follow-up survey but has been updated to reflect the time period. Attachment H contains a copy of the lead letter.


Data Collection. Households will be contacted by telephone approximately one week after the lead letter has been sent. Interviewers will introduce themselves, ask to speak to the selected respondent, and (when applicable) state, “You may have received a letter from us.” They will then inform the potential participant about the study and proceed with the introductory script and informed consent (Attachment I).



Procedures with Special Populations

As with previously approved data collections for the AFI Evaluation, two versions of the follow-up instrument will be prepared: an English version and an other-language version. The other language will likely be Spanish but will depend on the site and the populations served by that grantee. Both versions will have the same essential content.


3. Methods to Maximize Response Rates and Deal with Nonresponse


Current Response Rates. Overall, we expect response rates to be sufficiently high in this study to produce valid and reliable results that can be generalized to the study universe. We are targeting an 80 percent response rate, based on experience in other studies with similar populations and follow-up intervals. Data collection for the 12-month follow-up survey is expected to be completed in September 2015. Based on response rates to date (August 2015), the AFI team expects to achieve an 80 percent response rate on the 12-month survey. The following table shows response rates for cohorts released as of August 2015.


Random Assignment Month | Number of Cases | Percent of Total Sample | Response Rate to Date
----------------------- | --------------- | ----------------------- | ----------------------
January 2013            |               7 |                      1% | 43%
February 2013           |              11 |                      1% | 73%
March 2013              |              13 |                      2% | 54%
April 2013              |              21 |                      3% | 86%
May 2013                |               7 |                      1% | 86%
June 2013               |              18 |                      2% | 89%
July 2013               |               4 |                      0% | 75%
August 2013             |              27 |                      3% | 81%
September 2013          |              12 |                      1% | 83%
October 2013            |              48 |                      6% | 85%
November 2013           |              66 |                      8% | 82%
December 2013           |              19 |                      2% | 79%
January 2014            |              35 |                      4% | 89%
February 2014           |             104 |                     13% | 79%
March 2014              |             125 |                     15% | 79%
April 2014              |              63 |                      8% | 67%
May 2014                |              50 |                      6% | 70%
June 2014               |             138 |                     17% | 63%
July 2014               |              46 |                      6% | 74%
Total                   |             814 |                    100% | 75%
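
The overall figure in the bottom row can be reproduced (to rounding) as the case-weighted average of the cohort response rates. The sketch below (Python) uses the counts and rates from the table above; because the published rates are themselves rounded, the weighted average approximates the rate computed from raw counts.

```python
# (cases, response rate to date) per random-assignment cohort, from the table
cohorts = [(7, .43), (11, .73), (13, .54), (21, .86), (7, .86), (18, .89),
           (4, .75), (27, .81), (12, .83), (48, .85), (66, .82), (19, .79),
           (35, .89), (104, .79), (125, .79), (63, .67), (50, .70),
           (138, .63), (46, .74)]

total_cases = sum(n for n, _ in cohorts)
overall_rate = sum(n * r for n, r in cohorts) / total_cases

print(total_cases)              # 814
print(round(overall_rate, 2))   # 0.75
```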


36-Month Follow-up Survey. To maximize interview response rates, the proactive tracing strategies described above (panel maintenance letters and batch tracing) will be implemented before 36-month follow-up data collection begins. Response rate outcomes will be routinely reviewed during the data collection period to identify the root causes of nonresponse and to develop strategies to increase response. For instance, our goal for many longitudinal studies is to achieve at least an 85 percent response rate. We will review cases to determine whether nonresponse results from difficulties contacting respondents, difficulties gaining cooperation from respondents, or interviewers working fewer hours than expected. When the major causes of nonresponse are identified, tailored strategies are put into place; such strategies may include increasing calling effort during specific call windows or developing different scripts to address specific respondent concerns.

Respondent Tokens of Appreciation. Sample members who complete the follow-up survey within two weeks of their study anniversary date will receive $50 for their participation. After two weeks, ACF will reduce this amount to $40 for those who complete the survey within one month. For those who still have not responded to the survey one month after their anniversary date, ACF will offer an increased amount of $50 to a subsample of non-respondents. This differential payment structure has proven effective at increasing response rates for long-term follow-up studies. This token of appreciation will be mentioned in the lead letter (Attachment J) sent to sample members prior to the survey launch and is intended to encourage, but not obligate, participation. This token will be mailed to them within 2 to 4 weeks after survey completion.


As discussed in Supporting Statement A, a wide variety of research has shown that tokens of appreciation or incentives improve response rates in telephone surveys (Singer, 2002; Cantor, O’Hare, and O’Connor, 2007). Incentives can help gain cooperation through fewer calls, which can help make their use cost effective. Additionally, studies have shown that modest incentives are not coercive (Singer and Bossarte, 2006). Thus, implementing an incentive plan can be a cost-effective way for surveys to improve response rates and lower refusal rates, and could, over the course of data collection, actually reduce costs and burden to respondents by reducing the need for additional calls to potential respondents.


The project team reviewed many designs for this study to maximize participation in the follow-up survey where panel attrition is expected. One consideration was whether to provide tokens of appreciation before the interview (prepaid) or after the interview (promised). Many studies in the survey literature find prepaid incentives to be more effective than promised incentives (e.g., Linsky, 1975 and Armstrong, 1975 for an overview; Church, 1993). However, this has not been demonstrated in the context of a program evaluation with random assignment. As noted in Supporting Statement A, sample members assigned to the control group will be less motivated to complete the follow-up survey. Furthermore, prepaid tokens of appreciation may have differential responses from respondents in the treatment group who maintain an ongoing relationship with the program compared with respondents in the control group who do not. Lacking evidence that a prepaid token will result in less differential nonresponse, we opt to provide more traditional promised tokens of appreciation.


Various studies have demonstrated significant effects of promised incentives compared to a no incentive condition. For example, Cantor et al. (2003) found an almost 10 percent increase in response rate when promising $20 (vs. no incentive) in an RDD survey. In a meta-analysis of 39 controlled experiments, Singer et al. (1998) found that the effect of prepaid incentives on response rates did not differ significantly from the effect of promised incentives. Other studies (e.g., Yu and Cooper, 1983) also found promised tokens of appreciation significantly improved response rates.



Survey Data Collector Training. Response rates vary greatly across interviewers (e.g., O’Muircheartaigh and Campanelli, 1999). Improving training for individuals who collect survey data has been found effective in increasing response rates, particularly among interviewers with lower response rates (Groves and McGonagle, 2001). The following procedures will be used to maximize response rates:

  1. Data collectors will be briefed on the potential challenges of administering a survey on financial experiences with low-income families. Well-defined conversion procedures will be established.

  2. If a respondent initially declines to participate, a member of the conversion staff will re-contact the respondent to explain the importance of participation. Conversion staff are highly experienced telephone interviewers who have demonstrated success in eliciting cooperation. Conversion staff will be able to provide a reluctant respondent with the name and telephone number of the contractor’s project manager who can provide respondents with additional information regarding the importance of their participation.

  3. A toll-free number, dedicated to the project, will be established so potential respondents may call to confirm the study’s legitimacy.


Refusal avoidance training will take place approximately two to four weeks after data collection begins. During the early period of fielding the survey, supervisors, monitors, and project staff will observe data collectors to evaluate their effectiveness in dealing with respondent objections and overcoming barriers to participation. They will select a team of refusal avoidance specialists from among the data collectors who demonstrate special talents for obtaining cooperation and avoiding initial refusals. These data collectors will be given additional training in specific techniques tailored to the survey, with an emphasis on gaining cooperation, overcoming objections, addressing concerns of gatekeepers, and encouraging participation. If a respondent does refuse to complete the survey, data collectors will attempt to determine the reason(s) for refusing to participate by asking the following question: “Could you please tell me why you do not wish to participate in the study?” The survey data collector will then code the response and any other additional relevant information. Particular categories of interest include “Don’t have the time,” “Inconvenient now,” “Not interested,” “Don’t participate in any surveys,” and “Opposed to government intrusiveness into my privacy.”


Quality Circle Meetings. The contractor will hold weekly QC meetings with data collectors and supervisors to discuss data collection progress and issues. Our experience has shown that these sessions build rapport and enthusiasm among data collectors and project staff, allow project staff to identify important refusal conversion strategies, assist in the refinement of the instrument, and provide ongoing training for staff. Such meetings have identified previously unrecognized problems with a CATI instrument, such as questions that the respondent does not understand, questions that are difficult to administer, and software problems. These sessions also provide feedback on the data collection procedures and systems.


Data Review. We will periodically review data frequencies from the CATI survey to ensure that the program is working as intended and also to identify areas for interviewer feedback. We will review for high item-level nonresponse rates, recording of complete verbatim responses and contact information, and questions that may be unclear or confusing to interviewers and sample members.
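
As an illustration of this kind of review, the sketch below (Python, using pandas) flags survey items with high missingness in a CATI extract. The column names and threshold are hypothetical and do not correspond to actual AFI instrument items.

```python
import pandas as pd

# Hypothetical extract of CATI responses; item names are illustrative only.
responses = pd.DataFrame({
    "q1_employment_status": ["employed", None, "unemployed", "employed", None],
    "q2_monthly_savings":   [150.0, 80.0, None, None, None],
})

# Item-level nonresponse rate = share of missing values per question
item_nonresponse = responses.isna().mean()

# Flag items exceeding a review threshold for interviewer feedback
FLAG_THRESHOLD = 0.10   # hypothetical 10 percent cutoff
print(item_nonresponse[item_nonresponse > FLAG_THRESHOLD])
```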


4. Tests of Procedures or Methods to be Undertaken


A preliminary cognitive assessment of the instrument content and format has informed refinements to the follow-up survey instrument. The instrument is the same as that used at the 12-month follow-up data collection.



5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data


The basic sample design for the AFI program evaluation was reviewed by senior professional staff at the Urban Institute and RTI International. These staff included Dr. Douglas Wissoker, one of the internal consultants who comprise the Urban Institute’s Statistical Methods Group.


The AFI Evaluation contract was awarded to Urban Institute on September 27, 2011. The AFI Follow-Up Evaluation contract was also awarded to Urban Institute in August 2015. Under the current evaluation, contractor personnel implemented the field assessment, recruited and selected AFI sites, developed the survey instruments, conducted initial data collection and random assignment, implemented participant tracking and conducted the 12-month follow-up survey, conducted the implementation study, conducted data analysis and developed statistical reports. Data collection was conducted during the 2012-2015 calendar years by RTI International, an independent, nonprofit research institute located in North Carolina.


