Evaluation of the Mentoring Children of Prisoners Program

10-30-07 memo

OMB: 0970-0333
1. Please explain how this is a “quasi-experimental” study design.


The study will assess changes in behavior for children of prisoners enrolled in the MCP program over a twelve-month period and compare those changes to changes for a comparison group of youth of similar ages who did not receive mentoring. This comparison group is drawn from another, recent large-scale study of mentoring: Making a Difference in Schools (Public/Private Ventures, 2007). The proposed MCP Program Evaluation is a quasi-experimental study design (and not a true experimental study) because it does not use a true control group; youths in the comparison group (1) are not necessarily children of prisoners and (2) answered survey questions in 2004 and 2005. The study will control, to the extent possible, for baseline characteristics associated with having an incarcerated parent that may be linked to outcomes of interest. The study will form hypotheses about the potential for MCP program impacts based on a comparison between the behavioral and attitudinal growth trajectories of MCP program youth in the study and the growth trajectories of age-appropriate comparison group youth for outcome variables of interest.


Please explain further how the baseline characteristics of having an incarcerated parent will be “controlled for.” Also, since this mentoring program is meant specifically for kids with incarcerated parents, will you be “cancelling out” the effects of the program if you control for these characteristics?


Unfortunately, due to the nature of the extant data we are able to use for our comparison group, it is not possible for us to control statistically for the characteristic of having an incarcerated parent. We will, however, assess the extent to which the comparison group and youth in our study differ at baseline in terms of a number of factors that are likely associated with having an incarcerated parent, particularly participation in risk behaviors. We will form hypotheses about the potential for program impacts based not only on changes in behavior for youth in the MCP Program but also by comparing those changes to changes in the behaviors of youth in the comparison group. Although we recognize that youth in both groups likely will start out at different points in terms of participation in various risk behaviors, we will be able to develop meaningful assessments of the MCP Program’s potential for impact by comparing the trajectories for both groups over time.
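For illustration only, the following is a minimal sketch of the kind of baseline comparison described above (written in Python; the data frames and column names, e.g. risk_behavior_score, are hypothetical placeholders rather than the study’s actual variables):

    # Sketch of a baseline-equivalence check between the MCP sample and the
    # comparison sample. DataFrame layout and column names are hypothetical.
    import pandas as pd
    from scipy import stats

    def baseline_equivalence(mcp, comparison, covariates):
        """Compare group means on baseline covariates with Welch's t-tests."""
        rows = []
        for var in covariates:
            a = mcp[var].dropna()
            b = comparison[var].dropna()
            t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
            rows.append({"covariate": var, "mcp_mean": a.mean(),
                         "comparison_mean": b.mean(), "t": t, "p": p})
        return pd.DataFrame(rows)

    # Hypothetical usage:
    # print(baseline_equivalence(mcp_df, comp_df, ["age", "risk_behavior_score"]))

Covariates showing large baseline differences would then be flagged when interpreting differences in the two groups’ trajectories.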


2. Please provide a list of study limitations that will be disclosed in the report to Congress regarding this study. For example, this study appears to employ a convenience sampling technique. And the “control group” is not really a “control group” in the sense of RCTs.


Relative to a randomized control trial, our quasi-experimental study has several important limitations. First, as mentioned, we do not have a true control group created through random assignment. For this reason, we do not know the natural trend for children of prisoners for outcomes of interest absent the intervention being studied, i.e. the “control condition.” Second, we do not know whether and to what degree observed differences in MCP participant and comparison group outcomes are the result of mentoring, internal psychological health, or other factors.


Despite these limitations, if we find statistically significant improvements in MCP participant outcomes relative to expected growth trajectories, and particularly if they are of a similar scale to the proven impacts of other mentoring programs, we can make a strong presumption that MCP contributed to the improvements. In addition, we would argue that such findings justify an investment in a true experimental study.


Given the nature of this study, OMB would not be comfortable with reporting to Congress that the results allow for “a strong presumption that MCP contributed to the improvements” since it is really unclear given all the other confounders. OMB would, however, be comfortable with reporting that there is a “strong suggestion that MCP contributed to the improvements” and that these results should be confirmed or explored further in a true experimental study, should you find the statistically significant improvements you discuss.


Is ACF amenable to this?


We are amenable to this change in language and will characterize the study accordingly.


Please also confirm that it is not ACF’s intent to provide point estimates for this program and generalize it to the entire universe of kids (e.g. to report that these kinds of programs would result in x% improvement).


That is correct. The purpose of this study is to detect the potential for positive impacts of MCP programs on children of prisoners. We will not be claiming that point estimates of the difference-in-differences necessarily indicate promising results for the entire universe.


3. Please provide more detail on the “other study” from which you will be drawing your comparison sample. What are their characteristics? What was the study about? Did they provide “baseline” and “12 month follow up” data on the sample?


The comparison sample is the control group from an experimental study of school-based mentoring: Making a Difference in Schools (Public/Private Ventures, 2007). Youths in that study were in grades 4 through 9 at baseline. Follow-up surveys were administered at two time points: the first approximately 9 months after baseline and the second approximately 14 months after baseline. The study team will use the 14-month follow-up, and therefore will administer the MCP follow-up survey approximately 14 months after baseline as well.


  • Please provide more information on the characteristics of this comparison group. If we understand ACF correctly, this comparison group is supposed to represent “normal kids” and how they would be absent any intervention. How did ACF come to conclude that this group of kids represents “the norm”? Are they, for example, a nationally representative group of kids?


The comparison group comprises youths who were randomly assigned to the control group for a study of school-based mentoring programs. Therefore, they should share some characteristics with youths who do enroll in mentoring programs, such as motivation to participate in mentoring. This sample was selected because we anticipate that it will share at least some characteristics with the youths enrolled in MCP programs, rather than because we expect it to represent the “norm.” For example, the comparison sample includes a large proportion of youths who are economically disadvantaged, and the percentage of youths living in single-parent homes is above the national average. Baseline analyses will determine the extent to which the comparison group is similar to the youths in MCP programs on demographic, attitudinal, and behavioral characteristics.


  • Relative to the “normal” kids in the comparison group, aren’t the kids in this study more likely to be exposed independently to other interventions that could also improve their outcomes (e.g. they are probably more likely to be seeing a counselor or therapist)? Isn’t this an important confounder? How will ACF assess for this?


Again, we expect that youths in the comparison group will be similar to the youths enrolled in MCP programs on a number of characteristics, and that their likelihood of receiving other interventions is on par with that of youths in the MCP sample. While we will not be collecting information about other services or interventions that either group receives, we anticipate that if the samples exhibit similar characteristics at baseline, receipt of such services will be randomly distributed across samples and therefore should not confound the study’s findings.


  • Also, the age ranges are a bit off (the comparison group is grades 4-9 while this study group goes up to age 17). How will this affect the interpretation of results?


While our sample will include youths up to 17 years old, we do not expect our sample to be significantly older, on average, than the comparison group because there are typically fewer first-time mentees on the older end of the range than on the younger end. We will have youths’ ages for both samples, and we can run subgroup analyses by age to determine if there are differences in the potential effects of the program for different age groups.
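As a sketch of the subgroup analysis described above (Python; the group labels, age bands, and score column names are assumptions made for illustration):

    # Sketch of an age-subgroup analysis: the MCP-vs-comparison difference
    # in mean outcome change, computed within each age band. Column names
    # ("group", "age", "baseline_score", "followup_score") are hypothetical.
    import pandas as pd

    def did_by_age_band(df, bands=((9, 12), (13, 17))):
        df = df.assign(change=df["followup_score"] - df["baseline_score"])
        out = {}
        for lo, hi in bands:
            sub = df[df["age"].between(lo, hi)]
            means = sub.groupby("group")["change"].mean()
            out["ages %d-%d" % (lo, hi)] = (means.get("mcp", float("nan"))
                                            - means.get("comparison", float("nan")))
        return pd.Series(out, name="relative_improvement")

Diverging values across age bands would suggest that the program’s potential effects differ by age group.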


  • Please clarify which impact(s) ACF will be measuring and reporting. Is it the difference between study kids and the comparison group kids, or is it the change between baseline and follow-up? For example, let’s take hypothetical average test scores. Is ACF interested in result 1 (i.e. the margin of difference came down by 10%) or result 2 (zero difference between improvement in test scores for comparison and study group)? Or is ACF interested in result 3 (e.g. average score of 80% for study group is still lower than 85% for comparison group)?



                              Baseline    14-month follow-up            Change
                                          [Result 3: study group        [Result 2: no difference
                                          underperforms]                between groups]
Comparison group              70%         85%                           +15%
Study group                   65%         80%                           +15%
Result 1 (10% improvement)    -15%        -5%



We are measuring the change in the difference between the MCP sample and the comparison sample from baseline to follow-up; for this example, we would focus on the relative improvement of the MCP youth (Result 1).
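As a worked version of this arithmetic, using the hypothetical gaps from the table’s Result 1 row (Python, illustrative only):

    # The measure of interest is the change in the study/comparison gap
    # between baseline and follow-up (Result 1 above). The gap values are
    # the hypothetical ones from the table.
    gap_baseline = -0.15  # study group 15 points behind at baseline
    gap_followup = -0.05  # study group 5 points behind at follow-up

    relative_improvement = gap_followup - gap_baseline
    print("Relative improvement of MCP youth: %+.0f points"
          % (relative_improvement * 100))
    # -> +10 points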


4. It sounds like there is a lot of variation between the grantees and how they implement the “mentoring” program. How will this study account for this variation in how the “intervention” is administered?


The administrative survey of grantees will collect information about program operations, including mentoring services provided to children of prisoners. We will assess ways in which program operations differ between study sites and sites not included in the study. Should programs vary substantially in terms of services for children of prisoners, the study team and FYSB together will identify variables that may affect the “intervention” youths receive—and thereby affect changes in their attitudes and behaviors—and run regression analyses to see if those program characteristics are in fact correlated with changes in outcomes of interest.
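For illustration, a minimal sketch of such a regression (Python with statsmodels; the site-level characteristics named here are placeholders, not variables from the actual grantee survey):

    # Sketch: regress youths' change in an outcome on site-level program
    # characteristics to test whether program variation is correlated with
    # changes in outcomes. All variable names are hypothetical.
    import statsmodels.formula.api as smf

    def fit_program_variation_model(df):
        model = smf.ols(
            "outcome_change ~ mentor_training_hours + meetings_per_month + age",
            data=df,
        )
        return model.fit()

    # Hypothetical usage:
    # print(fit_program_variation_model(youth_site_df).summary())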


Will this survey collect information from the universe of all grantees?


In our submission, we had planned to collect administrative information from 72 grantees that are funded at least through FY2009. Additional grants were awarded in FY2008 (after the OMB package was submitted), which brings the total number of grantees receiving MCP funds through at least FY2009 to 223.


While this survey will be able to yield results that can then be used to adjust for differences in the grantees and the services they provide, how will ACF adjust for differences between the children at the sites who participate in this study and the children at sites who do not participate? In other words, how will ACF demonstrate that the children in this study are representative of the universe of children enrolled in these programs?


We do not know the extent to which we will be able to assert that the children included in the study represent the full universe of youth enrolled in the MCP Program. We will assess the extent to which results of the grantee survey for study sites differ from results for sites not in the study and report on any significant differences in program operations. Based on prior studies that employed similar sampling strategies, we hypothesize that the study will include sites that are larger and more fully operational (i.e. sufficiently active to recruit 10 or more youth during the baseline survey administration period) than sites not in the study. If this is the case, we will report that our study represents the MCP Program at fully implemented, larger-scale sites.


5. When ACF says the questions have been tested, have they been formally tested (e.g. validated, etc.)? Please provide results of the testing.


For the MCP Program Evaluation to be able to legitimately use a comparison group from Making a Difference in Schools (Public/Private Ventures, 2007), it was necessary to use questions identical to those used in that study for all outcomes of interest. For information about the validity and reliability of items from this survey, please see:


Herrera, C., Grossman, J.B., Kauh, T.J., Feldman, A.F., McMaken, J. & Jucovy, L.Z. (2007). Making a difference in schools: The Big Brothers Big Sisters school-based mentoring impact study. Philadelphia: Public/Private Ventures.


6. Has the study obtained the certificate of confidentiality? What are the limitations of the certificate and how will these be disclosed?


Yes, the study has obtained the certificate of confidentiality. The certificate does not prevent the study team from reporting abuse to appropriate authorities, and this is noted on the parental permission form.


7. Where are the consent/assent forms that the study subjects will need to sign?


These were included as Appendix B, and are attached here.


8. Please provide a justification for the incentive amounts being proposed. Why $20 for offering contact information? That seems excessive.


We believe there may have been a miscommunication here. We suggested providing $15 for updated contact information. We suggest this amount based on our understanding that it is sufficiently high to encourage participating youth to remain in touch with the study team and that it sets a good precedent for working with youth to get follow-up surveys completed.


Now that the universe of potential study sites is more clearly defined (at the time of OMB submission, we had less information about potential sites), we are reconsidering the use of in-person follow-up in instances in which telephone contact does not work. If we rely solely on phone follow-up, we will eliminate the additional incentive payments for parents, whom we initially expected to facilitate in-person follow-up interviews. In addition, bolstering the telephone follow-up efforts removes any potential concern about using more than one method to administer the questionnaire.


Therefore, the incentives provided would be $15 for youths when they provide their contact information for the follow up survey, and a $20 gift certificate after they complete the second questionnaire. Abt has submitted revised permission forms to its IRB which reflect this change.


So ACF has decided to, in fact, do away with the in-person follow-ups? This is fine, but we would like clarification.


We have done away with in-person follow-ups.


OMB still finds the $15 incentive amount rather high since the participant is only being asked to provide contact information (which is not a particularly burdensome task). How did ACF decide to provide $15? Why not $2-$5, which is what other agencies have used for similar kinds of studies?


It is our belief that providing a high incentive will increase our likelihood of achieving a high response rate for this study. We also are aware that children of prisoners are likely to be a particularly mobile population and so, if we are to achieve a relatively high response rate, it will be critical for us to obtain contact information. If OMB objects strongly, however, we are happy to reduce this payment.


9. How will you collect this contact information for the follow-up study? Is there a separate instrument for this?


Parents will be asked to provide contact information on study permission forms (part of baseline data collection), as well as names and contact information for friends/family who may help the study team obtain contact information in the event that the family moves or discontinues services with the mentoring program.


How will the children be providing the contact information (and thereby receiving the $15 incentive)? Is there a form they will have to submit?


Our subcontractor, Moore & Associates, will telephone youth, using the contact information youth provide at baseline. There will not be a mailed form for this information.


10. What methods will be used to recruit subjects? How do you know you will receive an 80% response rate? What will you do if you don’t get an 80% response rate?


Grantees that will be funded at least through 2009 comprised the pool of potential sites. Sites were selected to participate in the study based on their size and capacity to administer the baseline youth survey. Sites will be recruited by Abt staff, who will call them to determine their projections for the number of new youth enrolled during the period in which the study team expects to facilitate baseline survey administration. During these calls, Abt will explain the study and ask program contacts to facilitate eligible youths’ participation.


We encourage staff involved to recruit subjects for the study by informing targeted youth of the incentives they will receive for participating and reminding them that all survey data will be kept confidential. We also ask them to inform targeted youth that the study will help mentoring programs such as theirs learn more about how well they work. We do not expect too much resistance to study participation on the part of targeted youth because there is no control group and youth targeted for the study will not be denied program services.


The selected sites will administer baseline surveys to all children of prisoners who enroll in their programs during the survey administration period, until the sample reaches 625. We expect to achieve an 80% response rate for the follow-up survey by (1) offering study participants incentives for providing up-to-date contact information and for completing the follow-up survey and by (2) using an experienced sub-contractor, Moore and Associates, to conduct the follow-up survey. Moore and Associates has worked with Abt Associates on a number of related engagements (including the Student Mentoring Study for the U.S. Department of Education) and has had success in achieving high response rates for similar populations of youth.


If we do not achieve an 80% response rate, we will perform statistical analyses to determine the representativeness of the follow-up sample. In all reporting, we also will describe clearly the ways in which a low response rate compromises study results.
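One common form of that representativeness analysis, sketched with hypothetical variable names (Python):

    # Sketch of a follow-up nonresponse check: do follow-up respondents
    # differ from non-respondents on baseline characteristics? The boolean
    # "responded" flag and the covariate names are hypothetical.
    import pandas as pd
    from scipy import stats

    def nonresponse_check(baseline, covariates):
        rows = []
        for var in covariates:
            resp = baseline.loc[baseline["responded"], var].dropna()
            nonresp = baseline.loc[~baseline["responded"], var].dropna()
            t, p = stats.ttest_ind(resp, nonresp, equal_var=False)
            rows.append({"covariate": var, "respondent_mean": resp.mean(),
                         "nonrespondent_mean": nonresp.mean(), "p": p})
        return pd.DataFrame(rows)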


11. Please clarify how the questionnaire will be administered. At times, the supporting statement says online surveys. At other times, there are references made to putting surveys in an envelope (suggesting a paper-based survey), interviews by phone, and in-person interviews.


To reduce burden to MCP grantees, the Agency Survey to collect information about agency characteristics will be administered electronically. The youth baseline survey is a paper questionnaire that will be administered in-person by MCP program staff. Survey administrators will read the instructions, survey items, and response categories aloud to each child or group of children (individual or group administrations are permitted), and children will circle their responses on their own copy of the questionnaire. When they are finished, children will seal their surveys in an envelope before returning them to the survey administrator.


Follow-up surveys will be administered to youths over the telephone by Abt Associates’ subcontractor, Moore and Associates. Youths will say their responses aloud, and the administrator will mark them on a paper questionnaire, which Abt Associates staff will then enter into an electronic database.


12. If 20 grantees are each recruiting 5 youth for the study, that would seem to be a sample of 100. Where does the 625 figure come from?


The language in the supporting statement was meant to convey that at least five youth from each of at least 20 sites would be included in the sample. The current expectation is that approximately 25 sites will be included in the study and that approximately 20-100 youths from each site will complete the baseline questionnaire.


We’re still a bit confused about this. If there are 25 sites and each will have 20-100 youths participating, isn’t the total sample size 500-2500?


We anticipate that only a few sites will enroll more than 50 children during the survey administration period; we expect that at least one or two will survey up to 100 youths and that most will be closer to 20. The adequate sample size is 625, so we need to survey at least that many youths; as such, we are targeting slightly more.


13. How will these 5 youth be selected? What are the criteria? Is it random selection?


At grantee sites involved in the study, all youths who are children of prisoners between the ages of 9 and 17 who enroll in MCP programs during the survey administration period will be asked to complete the baseline questionnaire.


This seems to contradict what was said earlier about stopping enrollment at 625 children. Please clarify.


We will ask sites to administer the baseline surveys as described above; however, if the adequate sample size is reached before the baseline survey period is over, we will ask grantees to cease administering the survey. We do not want grantees selecting particular youths to survey, and we do not want some to stop surveying youths before others. Therefore, the initial instructions will be to survey all children of prisoners in our age range during the defined survey period.


14. The race question on the survey does not comply with OMB standards. Please revise.


The question will be revised to the following:



What is your race? (Please check one or more.)


1 American Indian or Alaska Native

2 Asian

3 Black or African American

4 Native Hawaiian or other Pacific Islander

5 White


15. Why are the questions about the relationship between the child and the parent relevant to assessing the outcomes of a mentorship program? (question 8 on the baseline survey)


Improving relationships and promoting strong families are among ACF’s objectives for the MCP program (http://www.acf.hhs.gov/programs/fysb/content/youthdivision/programs/mcpfactsheet.htm).


16. Since the children in this program will have mentors, won’t all respondents say “yes” to the question about whether they have a special adult in their lives?


The baseline youth survey will be administered to youths before they begin meeting one-on-one with their mentors, and it is possible that matches will have terminated by the time of follow-up survey administration. In addition, the study team does not presume that all mentors will live up to the description of “special adult” as provided in the survey, and therefore included this question as an outcome measure of the strength of mentor/mentee relationships.


17. What is the relevance of questions 1 and 2 on the follow-up survey?


These questions were included to determine if there were differences in responses from baseline to follow-up (specifically, did youths who responded “No” at baseline respond “Yes” at follow up, indicating that the parent had been absent during the preceding year). This allows the study team to obtain information regarding extended parent absences for the entire duration of the study.
