Adolescent Family Life Program: Prevention Demonstration Projects

OMB: 0990-0291

B. Collection of Information Employing Statistical Methods

Statistical methods are not used in the collection of information by all AFL demonstration projects using the revised core evaluation instruments; responses in this section therefore apply only to the methods used for the cross-site evaluation of the AFL program.

1. Respondent Universe and Sampling Methods

The cross-site evaluation, which will draw on a subset of the projects and survey respondents, will include approximately 2,661 adolescents receiving abstinence education. Adolescents served by Title XX Prevention projects and those selected to serve as comparison groups will participate in the cross-site evaluation.

A total of 36 Prevention projects serve adolescents. From these, 7 Prevention projects involving 30 schools or after-school sites have been selected to obtain the sample of 2,661 participants for the cross-site evaluation. Prevention projects were selected for participation based on the rigor of their evaluation designs, namely those that have equivalent treatment and comparison groups and that avoid contamination of comparison group respondents by the intervention. We also prioritized projects located in different geographic regions, to maximize regional diversity, and projects that employ implementation strategies conducive to rigorous evaluation (including appropriate timing of program delivery). Information about evaluation design rigor, implementation strategies, and project characteristics was obtained by reviewing end-of-year reports submitted to OAPP and through discussions with OAPP project officers. Within each project, adolescents will be assigned by AFL project staff to treatment and comparison groups.

We conducted power analyses to determine the optimal sample size for detecting statistically significant differences between treatment and comparison groups. The frequency with which adolescents report they have engaged in communication with their parents about abstinence and related topics serves as the primary outcome measure for the power calculations; responses will be averaged across 15 items rated on a scale ranging from 0 (no talk) to 4 (four times or more in the previous 3 months; Miller et al., 1993). Power calculations were based on the comparison between treatment and comparison groups. [Three other outcome measures (attitudes about abstinence, intentions to have sex, and sexual activity) will not be considered in final power analyses because some projects may obtain waivers to omit these questions among very young respondents (aged 9 to 13) or among respondents targeted through organizations, such as schools, that will not allow collection of such sensitive information.]

Several assumptions were made concerning population parameters for power analyses of the parent-child communication outcome. First, we assumed a 0.5 correlation between outcomes measured at baseline and at 18-month follow-up for the same respondent. Although there is little definitive information about the true correlation over 2 years, there is some evidence from 1-year follow-up studies that the correlation is no stronger than we assume here (Sales et al., 2006). Second, we assumed that outcomes for different respondents will be uncorrelated. (Siblings or adolescents living in the same household as an enrolled study participant will be excluded.) The exception is that, because adolescents in Prevention projects are clustered within schools, neighborhoods, or communities, we assumed a school- or community-level intraclass correlation coefficient of 0.10, based on pilot data analyses and prior RTI school-based data about adolescent risk behavior. Third, we assumed that adolescents will report a mean score of 1.2 at baseline and that treatment adolescents will report a mean score of 1.7 at the end of the second school year, as reported by Miller (1993). Each of these assumptions is conservative, resulting in increased sample sizes for our evaluation. Notably, Miller (1993) produced similar effects at 3 months using an extremely low-intensity intervention (a videotape viewed by adolescents and their parents). Our conservative assumptions allow us to include enough subjects to detect small effects; a less conservative assumption would create the possibility that the Prevention project interventions are efficacious but that our sample is too small to detect this.
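
As a rough illustration of how these assumptions translate into a required sample size, the sketch below computes a per-arm n for a two-group comparison, shrinking the residual variance for the assumed 0.5 baseline correlation and inflating for clustering with the assumed intraclass correlation of 0.10. The standard deviation of the communication score is not reported in this section, so the value below is a hypothetical placeholder; the sketch shows the mechanics of the calculation rather than reproducing the 2,661 figure.

```python
from math import ceil, sqrt

from scipy.stats import norm

# Assumptions stated in the text
mean_treatment, mean_comparison = 1.7, 1.2  # follow-up means (Miller, 1993)
rho_baseline = 0.5                          # baseline/follow-up correlation
icc = 0.10                                  # school/community-level ICC
alpha, power = 0.05, 0.80

# Hypothetical values not given in the text
sd = 1.0            # placeholder standard deviation of the outcome score
cluster_size = 110  # roughly 2,661 respondents across 24 sites

# Baseline-adjusted effect size: controlling for the baseline measure
# reduces residual variance by a factor of (1 - rho^2)
d = (mean_treatment - mean_comparison) / (sd * sqrt(1 - rho_baseline**2))

# Per-arm n for a two-sided, two-sample z-approximation
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_per_arm = 2 * (z / d) ** 2

# Inflate for clustering using the design effect
design_effect = 1 + (cluster_size - 1) * icc
print("per-arm n, clustered:", ceil(n_per_arm * design_effect))
```

Even this simple version shows why the clustering assumption dominates the calculation: with roughly 110 adolescents per site, the design effect is about 12, an order-of-magnitude inflation of the unclustered sample size.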

Analyses indicate that a total of 2,661 adolescents from 24 schools or after-school sites will need to complete the baseline survey to achieve 0.80 power. The numbers of adolescents in the respondent universe and in each sample are shown in Exhibit 11. The expected response rate at the second school year follow-up includes all adolescents who participate at baseline, including those who may refuse to participate in the first school year follow-up data collection.

All decisions about the assumptions that guided our power analysis were intended to err in favor of a larger sample size, as a safeguard against a worst-case scenario in which effects are difficult to detect. These assumptions increased our confidence that effects smaller than those found by previous programs would still be reasonably detectable with the sample sizes we identified.

As noted, our sample design is based on conservative assumptions about survey response. Thus, the longitudinal retention rates shown in Exhibit 11 should be viewed as "worst case" scenarios that, even if they hold true, would still ensure sample sizes sufficient to detect small program effects. For Prevention, we estimate that at least 96% of adolescents who are contacted and for whom parent consent is obtained will complete the baseline survey, that at least 85% of adolescents will be retained between the baseline and first follow-up surveys, and that at least 80% of treatment adolescents and 70% of comparison adolescents will be retained between the baseline and second follow-up surveys.

Exhibit 11. Longitudinal Response Rates and Numbers of Adolescents

| Numbers and Response Rates | Treatment Adolescents | Comparison Adolescents | Total |
|---|---|---|---|
| Number of subjects to be contacted at baseline | 1,768 | 1,786 | 3,554 |
| Expected parent consent rate | 81% | 75% | |
| Number of subjects with parent consent at baseline | 1,432 | 1,340 | 2,772 |
| Expected response rate at baseline | 96% | 96% | |
| Number of completed baseline surveys | 1,375 | 1,286 | 2,661 |
| Expected response rate at end of first school year | 85% | 85% | |
| Number of completed first follow-up surveys | 1,169 | 1,093 | 2,262* |
| Expected response rate at end of second school year | 80% | 70% | |
| Number of completed second follow-up surveys | 1,100 | 900 | 2,000* |

*A subset of the original 2,661 baseline respondents.
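
The counts in Exhibit 11 follow from applying each expected rate to the preceding stage, with second-year retention applied to baseline completers rather than to first follow-up completers. A minimal check of that arithmetic (in Python, for illustration):

```python
def flow(contacted, consent_rate, baseline_rate, fu1_rate, fu2_rate):
    """Carry a contacted cohort through the stages shown in Exhibit 11."""
    consented = round(contacted * consent_rate)
    baseline = round(consented * baseline_rate)
    first_followup = round(baseline * fu1_rate)
    second_followup = round(baseline * fu2_rate)  # retention measured from baseline
    return consented, baseline, first_followup, second_followup

print(flow(1768, 0.81, 0.96, 0.85, 0.80))  # treatment:  (1432, 1375, 1169, 1100)
print(flow(1786, 0.75, 0.96, 0.85, 0.70))  # comparison: (1340, 1286, 1093, 900)
```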

Exhibit 12 shows longitudinal retention rates for prior studies of various lengths.

Exhibit 12. Longitudinal Completion and Retention Rates for Prior Studies

| Project | Institution/Client | Sample | Survey | Time from Baseline | Follow-up Survey Completion Rate | Baseline to Follow-up Retention Rate |
|---|---|---|---|---|---|---|
| Evaluation of abstinence-based pregnancy prevention program (Project IMPPACT) | Inwood House/U.S. Department of Health and Human Services | 7th and 8th grade students | Paper and pencil questionnaire | 2 years | 75% | 59% |
| Child and Family Well-being Study (The Three Cities Study) | Johns Hopkins University/National Institute for Child Health and Human Development | Focal children of poor households | Physical measurements and a CAPI/ACASI questionnaire | Wave 2: 1.5 years; Wave 3: 6 years | 82% | 80% |

It should be noted that while attrition will inevitably occur in this study, as it does in any longitudinal study, we do not expect attrition to bias the study's main findings. In sample surveys, there will almost always be missing data due to the attrition (or initial nonresponse) of selected respondents. In longitudinal surveys, this problem is typically exacerbated over time because further attrition may occur at each wave of the survey. Three distinct mechanisms causing missing data can be identified, and the cause of missingness determines the extent to which bias may be introduced into the study estimates. These mechanisms are the following:

Data are said to be missing completely at random (MCAR) if the probability of attrition is unrelated to the study outcome variables or to the value of any other explanatory variables, including the exposure conditions. Under MCAR, no additional bias is introduced into estimates based on the incomplete data; however, the reduced data set will typically result in larger standard errors.

Data are said to be missing at random (MAR) if the probability of attrition is unrelated to the study outcome variables after controlling for other explanatory variables. That is, attrition may vary by demographic characteristics. For example, adolescents of lower income may be more likely to drop out of the survey than adolescents of higher income, so bias would be introduced into an overall outcome estimate for adolescents but not into income-specific estimates. Under MAR, therefore, the potential bias due to missingness can be eliminated (or significantly reduced) if the appropriate explanatory variables, such as income, are controlled for.

Data are said to be missing not at random (MNAR) if the probability of attrition is related to the study outcome variable itself. For example, suppose that adolescents who indicate lower parent-child communication about sex at baseline are more likely to drop out of the survey than adolescents who report more parent-child communication. In this case, the overall estimate of parent-child communication among all adolescents will be biased upward by attrition.

In practice, all three missingness mechanisms may be at work (i.e., different attriters may drop out according to different mechanisms). If MNAR is not dominant, then reasonably unbiased estimates of study outcomes can be constructed through appropriate modeling. In the case of this study, we do not expect MNAR to be present.
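
As an illustration of the "appropriate modeling" referenced above, the sketch below applies inverse-probability weighting under a MAR assumption: retention is modeled from a baseline covariate, and respondents are weighted by the inverse of their predicted retention probability. The data, covariate, and coefficients are simulated for illustration only; this is not the study's specified analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data: one baseline covariate and a follow-up response indicator
n = 2661
income = rng.normal(size=n)                       # stand-in baseline covariate
p_stay = 1 / (1 + np.exp(-(1.0 + 0.8 * income)))  # dropout depends on income (MAR)
responded = rng.random(n) < p_stay
outcome = 1.2 + 0.3 * income + rng.normal(scale=0.5, size=n)

# Model retention from the baseline covariate, then weight respondents
# by the inverse of their predicted retention probability
X = sm.add_constant(income)
fit = sm.Logit(responded.astype(int), X).fit(disp=0)
weights = 1 / fit.predict(X)[responded]

naive_mean = outcome[responded].mean()                      # biased under MAR
ipw_mean = np.average(outcome[responded], weights=weights)  # approximately unbiased
print(naive_mean, ipw_mean, outcome.mean())
```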

2. Procedures for the Collection of Information

To gather sensitive and complex data for the cross-site impact evaluation, AFL demonstration project evaluation staff will administer paper-and-pencil Teleform surveys to treatment and comparison adolescents.

In order for adolescents aged 17 or younger to be included in the cross-site evaluation sample, their parents must be able to read English or Spanish to provide active consent for their adolescent’s participation (either in writing or by telephone with mailed documentation), and all adolescents must be able to read English or Spanish to provide written consent or assent for their own participation in the study. Consent forms and assent forms are included in Appendix E.

All AFL sites will submit the survey instruments to their site IRB prior to initiating data collection. Copies of local site IRB approvals will be submitted to RTI's IRB. The questionnaire data will be treated as private and maintained in a manner that satisfies the privacy requirements set forth by the site IRB. Any transmission of individual- or case-level data will also be done in accordance with the privacy requirements set forth by the site IRB.

Data collection training, monitoring, and ongoing technical assistance will be provided for projects participating in the cross-site evaluation to ensure high-quality data collection procedures. All AFL project staff administering core evaluation instruments will be trained in survey administration, including consent and assent procedures, privacy guidelines, and identifying respondent distress. In addition, the training will emphasize the importance of following the data collection procedures, including mailing procedures, so that those responsible for data collection fully understand the rationale behind them.

Where possible, data collection staff will be encouraged to avoid reading the questions aloud to groups of respondents, to discourage adolescents from looking at each other's survey responses. Completed instruments will be sealed in envelopes, and project staff will not unseal envelopes containing completed surveys in the presence of respondents. AFL Prevention project staff with access to identifying information will never view responses about respondents' sexual activity, in order to avoid triggering any mandatory reporting requirements in their state. The lists of identifiers and identification numbers will be sent to RTI for safekeeping during the cross-site evaluation. Standard procedures will be developed for identification number assignment and linking for the cross-site impact evaluation, with exceptions made if necessary (a sketch of one possible scheme follows).
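
The site codes, file name, ID format, and participant entries in the sketch below are hypothetical; the standard procedures referenced above will define the actual scheme. The point illustrated is the separation of identifiers from survey responses, linked only by a study ID.

```python
import csv
import secrets

def assign_study_id(site_code: str) -> str:
    # Hypothetical format: site prefix plus a random hex suffix, e.g. "07-A3F29B1C"
    return f"{site_code}-{secrets.token_hex(4).upper()}"

# The linking file (identifiers plus study IDs) is stored apart from survey
# responses; per the procedures above, identifier lists go to RTI for safekeeping.
participants = [("Jane Doe", "07"), ("John Roe", "07")]  # hypothetical entries
with open("linking_file.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "study_id"])
    for name, site in participants:
        writer.writerow([name, assign_study_id(site)])
```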

Cross-site evaluation baseline data will be collected by Prevention grantees from October 2008 through November 2009.

For the cross-site evaluation, small incentives for returning the parent consent form (such as arm bands, pencils, or mirrors) will be provided even if the parent refuses to allow the adolescent to participate. Adolescents will also receive a $10 gift card incentive for baseline data collection, because adolescents are a difficult cohort to recruit for a 20-minute survey without a small incentive. The decision to use incentives in this study is based on previous findings in the literature (Abreu & Winters, 1999; Shettle & Mooney, 1999; Singer et al., 1999) showing that incentives can significantly increase response rates among adolescents. Although these studies differ in other respects that could account for some variability in response rates, incentives of at least $10 were generally associated with higher response rates than no incentive. We expect these modest incentives to enhance survey response rates without biasing responses or coercing respondents to participate, and to improve data validity as adolescents become more engaged in the survey process. Because appropriate incentives are geographically and culturally specific, this standardized value will be offered, but individual grantees will determine what is actually provided. A protocol for standardized incentives for the cross-site impact evaluation will be suggested. Additional explanation regarding the use of incentives in this study is provided in Section A9.

Treatment and comparison group adolescents who complete baseline surveys will be surveyed again approximately 1 and 2 years after baseline (from March 2009 through November 2011). A potential threat to the external validity of the proposed longitudinal design is loss to follow-up, or attrition (Biglan et al., 1991): the evaluation results observed among subjects who remain in the study after baseline may differ from what would have been observed among those who drop out. Potential attrition is an important consideration in the selection of adolescents, particularly because grantees frequently recruit clients located in areas with high levels of transience and hard-to-reach populations (such as low-income families without telephones). RTI's experience suggests that by using mail surveys and tracing and locating services, and by obtaining extensive locating information from participants at baseline (e.g., cell phone, e-mail, and contact information for family or friends), it is feasible to successfully survey at follow-up 80% of respondents who completed baseline interviews.

All questionnaire hard copies and electronic data will be stored in a secure area designated by the site IRB. AFL project staff will store completed parent consent and adolescent consent/assent forms in separate locked filing cabinets. Completed Prevention instruments for the cross-site evaluation will be sent via Federal Express to the RTI project director, marked as confidential, within 1 business day of survey administration, at no expense to participating demonstration projects. No respondent names will be included in the Federal Express package of completed instruments. Consent/assent forms and completed surveys must be shipped to RTI separately and on different days. RTI will be notified and provided a tracking number for each shipment. If a shipment does not arrive as scheduled, tracing will immediately be initiated through Federal Express. This process will be monitored, and feedback will be provided to AFL project staff throughout the data collection period. If needed, AFL project staff may be re-trained on mailing procedures.

3. Methods to Maximize Response Rates and Deal with Nonresponse

The following procedures will be used to maximize cooperation and to achieve the desired high response rates for the cross-site evaluation:

A $10 gift card will be offered to participants who complete the baseline survey. An additional $10 gift card will be offered for each completed follow-up survey, at the end of the first school year and at the end of the second school year.

An attempt will be made to locate participants who leave the study before the end of the cross-site evaluation. Location efforts will include mailing refusal conversion materials designed to persuade participants to complete the study. In addition to mailed materials, RTI may conduct telephone-based refusal conversion, contacting each attriting participant by telephone.

RTI and AFL grantees will provide a toll-free telephone number to all sampled individuals and invite them to call with any questions or concerns about any aspect of the study.

AFL grantee data collection staff will work with RTI project staff to address concerns that may arise.

4. Tests of Procedures or Methods to be Undertaken

RTI implemented pilot tests of the core evaluation instruments previously approved by OMB (OMB No. 0990-0291) with 145 youths in North Carolina. The purpose of the pilot tests was twofold: (1) to assess technical aspects and functionality of the survey instrument and (2) to identify areas of the survey that were unclear or difficult to understand.


Pilot test data collection was conducted from October through December 2007. Eligible participants came from a convenience sample of students aged 9 to 18 in North Carolina who attended schools with low performance in reading and English and who lived in low-income communities. Low reading performance was measured by end-of-grade testing; schools were eligible if 70% or fewer of their students were at grade level for reading. Parents were recruited to give permission for their children to participate in the pilot study through Parent/Teacher Association (PTA) meeting presentations, principal/school involvement, tabling at school events, flyers at libraries, attendance at fall festivals, and word of mouth from parents who had already agreed for their children to participate. To obtain 145 completed surveys, RTI obtained contact information for 188 parents. Parents who expressed interest in their child(ren)'s participation received a lead letter from RTI. A screener conducted with parents, or with students aged 18 and older, was used to determine study eligibility. Students self-administered either the baseline or the follow-up instrument at local libraries, community facilities, or schools under the supervision of RTI survey administration staff. A total of 72 baseline and 73 follow-up survey instruments were completed, including questions regarding parent-child communication, attitudes and beliefs about abstinence and sexual risks, involvement in positive activities, beliefs about the future, and demographic characteristics. Three participants completed survey instruments in Spanish. Nine participants aged 14 or older also self-administered new items, including questions regarding sexual activity and contraception.


Of the 188 parents contacted by RTI, 3 refused participation, 20 students whose parents agreed to their participation did not attend survey administration, and 2 students were found to be ineligible. An additional 18 parents could not be reached by phone to schedule survey administration. A total of 145 student surveys were completed, for a 77% response rate. Analyses of the pilot test data indicated few significant technical problems with the survey instrument. Many respondents put check marks in the response boxes instead of filling them in; RTI has replaced the boxes with circles to increase the likelihood that responses will be scanned accurately. Many respondents were younger than 13, and some said they skipped questions that referred to "teens" because they did not think such questions applied to them; RTI has changed the term "teens" to "young people" to apply to all youths. Some respondents were unsure what to answer for their race; RTI has added a response option of "other (describe ______________)" for race. Lastly, a few respondents wrote their names on the front of the surveys even though RTI asked them not to; RTI has added a note to the front of the survey clearly asking respondents not to do this.


There were no outlier values, and all response options were labeled correctly. All skip patterns appeared to function correctly except for questions referring to parents: some students responded that they did not have a mother (or father) and then answered questions about that person. RTI has changed the language in the relevant questions to make clear that having a mother (or father) does not necessarily mean living with that person, and that not having a mother (or father) means not having one at all. Our findings suggest that there were no logic problems with the survey and that the data were accurately recorded. There were no nonresponse problems except for a substantial amount of missing data on the question regarding extracurricular activities; RTI has changed this question to an item assessing the overall frequency of participation in extracurricular activities. The average length of the survey was 22 minutes, with a range of 10 to 50 minutes.
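
Skip-pattern reviews like the one described above can be automated. The short sketch below flags respondents who report having no mother yet answer a mother-related item; the file and column names are hypothetical, not the instrument's actual variable names.

```python
import pandas as pd

# Hypothetical file and variable names for illustration
df = pd.read_csv("pilot_responses.csv")

# Respondents who said they have no mother (coded 0) should skip mother items
no_mother = df["has_mother"] == 0
violations = df[no_mother & df["mother_talk_freq"].notna()]
print(f"{len(violations)} respondents answered a mother item despite reporting no mother")
```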


Based on the findings of the pilot test, the survey appears to function as intended and is not overly burdensome, sensitive, or difficult to understand. Therefore, few substantive revisions were made to the survey instrument as a result of pilot testing.


5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The agency official responsible for receiving and approving contract deliverables is:

Johanna Nestor
240-453-2808
[email protected]
Office of Population Affairs/DHHS
1101 Wotton Parkway, Suite 700
Rockville, MD 20852

The person who designed the data collection is:

Olivia S. Ashley, Dr.P.H.
919-541-6427
[email protected]
RTI International
3040 Cornwallis Road
Research Triangle Park, NC 27709

The person who will collect the data is:

Karen Morgan, Ph.D.
919-485-7779
[email protected]
RTI International
3040 Cornwallis Road
Research Triangle Park, NC 27709

The persons who will analyze the data are:

Georgiy Bobashev, Ph.D.
919-541-6167
[email protected]
RTI International
3040 Cornwallis Road
Research Triangle Park, NC 27709

Michael Penne, M.S.
919-541-5988
[email protected]
RTI International
3040 Cornwallis Road
Research Triangle Park, NC 27709

Marni Kan, Ph.D.
919-485-2756
[email protected]
RTI International
3040 Cornwallis Road
Research Triangle Park, NC 27709
