
Supporting Healthy Marriage (SHM) Evaluation 12 month follow-up data collection


OMB: 0970-0339


Date: March 5, 2008



To: Brenda Aguilar

Office of Management and Budget



From: Naomi Goldstein, Director

Office of Planning, Research and Evaluation

Administration for Children and Families



Subject: Response to Questions on ACF Healthy Marriage Information Collection Request: 12 Month Follow-Up and Implementation Research Data Collection Instruments for the Supporting Healthy Marriage Project


Thank you for the opportunity to provide additional information and clarification on the proposed information collection. Below we address each of the questions presented in the email sent to us on February 21. We have not yet received a response to our proposed dates for an oral discussion of these questions and responses. If we need to discuss these responses further, we will arrange a conference call as quickly as possible.


Question/Comment #1


While we admire the comprehensiveness of the study’s approach, we wonder if the burden placed on respondents could be reduced by bringing more focus to the study. Towards this end, can you tell us what specific hypotheses you are intending to test through this study? What types of causal linkages are you planning to evaluate as well?


Response


We have aimed for as much parsimony as possible in the survey instrument, working with several national experts to streamline the data collection. At the same time, estimating the impacts of the Supporting Healthy Marriage program requires the comprehensive effort proposed, for three major reasons. First, the survey and observational study are the main data sources for our impact study; there are no administrative records that can provide information on the relevant outcomes. Second, as outlined further below, based on the prior literature we hypothesize that marriage education will affect several important domains of family life. Third, within the domains of marital expectations and attitudes and relationship quality, which are the most direct targets of the intervention, it is difficult to narrow down the items to be included in the survey. A number of constructs make up each of these important domains, yet the science of measuring these constructs for low-income couples provides little guidance on which items will capture them most effectively. For all of these reasons, the range of domains and items included in the survey and observational study is important.


Primary Hypotheses


We hypothesize that compared to their counterparts in the control group:


  • The program group will have higher rates of participation in marriage education services.


  • The program group will be more likely to report knowledge, attitudes, and expectations associated with healthy marital relationships:


    • greater willingness to sacrifice and work at their marriages,

    • greater understanding that all married couples have some disagreements,

    • stronger belief that two-way communication is important in marital relationships,

    • less acceptance of violence as a way of handling disagreements, and

    • less favorable views toward how divorce may affect children.


  • The program group will demonstrate increased relationship quality:


    • more positive interactions (clearer and more empathetic communication, more effective conflict resolution skills, higher levels of emotional and physical intimacy, more time spent together);

    • fewer negative interactions (fewer antagonistic, hostile, or abusive behaviors during disagreements, lower levels of sexual and emotional infidelity);

    • greater marital satisfaction; and

    • greater marital commitment.

  • The program group will spend more time living together with their children:


    • lower rates of separation and divorce, and

    • delay in separation or divorce for families who do experience them.


  • The program group will show improvement in both co-parenting and individual parenting:


    • higher levels of mutual support, cooperation, and problem-solving in shared childrearing duties and activities;

    • more involvement by fathers with raising children, whether resident or non-resident at time of follow-up; and

    • improved parenting for both parents (higher levels of warmth, involvement, and engagement; lower levels of harsh parenting behaviors; greater regularity in family routines; lower levels of parental stress and aggravation).


  • The children of program group members will show fewer social, emotional and behavioral problems.


  • Adults in the program group will exhibit improved mental, behavioral, and physical health outcomes.


  • The program group will have higher family incomes than their counterparts in the control group (primarily due to reduced rates of family disruption).


  • The program group will report higher levels of instrumental and emotional support, and larger social networks.


With respect to causal linkages, the primary analyses will link the SHM program as a whole to the outcomes measured. In relation to individual domains or measures, this is an area in which the SHM project can make an important contribution to the current knowledge base about how the quality of couple relationships affects other outcomes, such as marital stability and child well-being. We consider these analyses to be an exploratory aspect of our impact analysis rather than our primary impact question, because the study has not been designed to definitively isolate the relative importance of the different pathways by which marital stability and quality, or children, may be affected by these interventions. However, we are likely to see variation in program impacts on the key outcomes and potential mediators, such as program participation, family structure, couple relationships, parenting, and child well-being, across sites and across subgroups of the sample. As a result, we are likely to observe patterns that are consistent with particular hypotheses and inconsistent with others. Such patterns will provide important new information about how participation rates are associated with program impacts, and about how interventions to improve marital stability and quality may affect not only couple relationships but also parents’ mental health, parenting, co-parenting, and children’s well-being.


Question/Comment #2


Are there any plans to adjust for multiple comparison biases in the analyses?


Response


We agree that multiple comparison bias is an important issue, and have participated in many recent efforts among social scientists to develop strategies for addressing this issue. In the SHM project, we have not proposed such an adjustment at this time because our reading of the literature and the debate is that there is not yet a clear consensus on how to handle this issue. However, we will apply such adjustments as appropriate based on the state of the art at the time that we conduct our impact analysis.


In addition, we plan to address the underlying issue of multiple comparisons by limiting the number of comparisons that we make. To accomplish this while still contributing as much new knowledge as possible from the project, we will divide the impact analysis into two phases.


The first phase of analysis will focus on a small number of outcomes and a small number of subgroups for which we have firm hypotheses about program impacts, as stated above. For each hypothesis, we will aim to summarize results using a small number of scales representing the relevant domains and constructs of interest. Conclusions about the effectiveness of the intervention will be based on this relatively small number of comparisons.


The second phase of the analysis will go beyond the core analysis to explore other aspects of the program’s effects. For example, this phase will break down scales to investigate whether SHM programs appear to have larger effects on some specific items than on others. We would also explore effects of the program for additional subgroups whose hypothesized impacts are more speculative than those for the core subgroups. This exploratory phase may also include examining links between outcomes of interest to understand the mediating pathways by which SHM interventions, and parents’ relationships, might affect child well-being. Given the early stage of research on interventions for low-income married couples, this more exploratory phase of the analysis has the potential to make an important contribution to research on low-income families.


As noted, we will continue to investigate methods for adjusting for multiple comparisons. There are varying opinions in the statistics and evaluation fields about how to make these adjustments. Many different methods have been proposed (e.g., Bonferroni corrections; Benjamini-Hochberg corrections [1995]1; the methods of Tukey, Fisher, Scheffé, Dunnett, Duncan, and others as reviewed by Darlington [1990]2; and resampling methods such as those described by Westfall). Moreover, there currently is no consensus on whether corrections should be made across all outcomes or within certain domains, or on whether the purpose of adjusting for multiple comparisons is to avoid bias in conclusions about whether the intervention had any effects at all or in conclusions about particular effects. We will keep abreast of the various techniques for adjusting for multiple comparison biases and apply such adjustments based on any consensus that has emerged at the time of the analysis.
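For illustration only, the sketch below (in Python, with hypothetical variable names and example p-values) shows how one of the candidate procedures, the Benjamini-Hochberg step-up correction, could be applied to a set of impact-estimate p-values. It is not a commitment to a particular adjustment method; the procedure actually used will depend on the consensus in the field at the time of the impact analysis.

    import numpy as np

    def benjamini_hochberg(p_values, alpha=0.05):
        """Flag which hypotheses are rejected while controlling the
        false discovery rate at level alpha (Benjamini & Hochberg, 1995)."""
        p = np.asarray(p_values, dtype=float)
        m = p.size
        order = np.argsort(p)                          # rank p-values, smallest first
        thresholds = alpha * np.arange(1, m + 1) / m   # k/m * alpha for k = 1..m
        below = p[order] <= thresholds                 # step-up comparison p_(k) <= k/m * alpha
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.max(np.nonzero(below)[0])           # largest rank meeting the criterion
            reject[order[:k + 1]] = True               # reject all hypotheses ranked 1..k
        return reject

    # Hypothetical p-values from a small set of confirmatory impact estimates
    print(benjamini_hochberg([0.001, 0.012, 0.030, 0.044, 0.210]))
    # -> [ True  True  True False False]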


Question/Comment #3


To what extent have the survey measures been validated? It is also not clear to us whether the questions that have been validated would still be valid if they are taken out of the context in which they were validated.


Response


Wherever available, we have proposed measures drawn from scales that have been shown to have strong psychometric properties. Many of these measures have also been used in other government-sponsored studies with similar populations addressing similar outcomes, including the Enhanced Services for the Hard-to-Employ project, the Building Strong Families 15-month follow-up survey, the National Study of Families and Households, the National Evaluation of Early Head Start, and the Early Childhood Longitudinal Study – Birth Cohort, as well as in related non-government-sponsored studies such as the Iowa Youth and Families Project and the Fragile Families and Child Well-Being Study.


Thus, for domains that have been the subject of prior surveys of low-income families, such as program participation, marital stability, family income, adult mental health and behavioral health outcomes, parenting outcomes, and child well-being, the survey measures that we have proposed have been validated for populations similar to those being surveyed. For many measures of marital relationship quality we are proposing what we believe to be the best measures available to address key outcomes relevant to the focus of the intervention. We acknowledge that there is more work needed on the psychometric properties of such measures with low-income and racially and ethnically diverse couples. The SHM study will contribute to the knowledge in this area. The research team has conducted a series of one-on-one interviews with low-income and racially and ethnically diverse couples drawn from Oklahoma, Texas and Washington, DC to explicitly assess the face and content validity of the proposed measures in the draft of the survey instrument. After we have collected the survey data, we will also conduct psychometric analyses to determine which relationship quality measures are correlated with each other, as hypothesized, and can be used together in scales.


Question/Comment #4


What happens when one partner consents to participate, but the other does not?


Response


Collecting information from both spouses has implications for both fielding and analysis. In terms of fielding, both members of the couple will have already consented to participate in the study at the time they volunteered at baseline. However, there is still the possibility that either spouse will decline to respond to the 12-month follow-up. The individuals in the couple will be treated as independent sample members; thus, if one partner refuses to participate in the 12-month follow-up data collection, the survey firm will still attempt to contact the other partner and administer the survey.


For most outcome measures, we consider the couple to be the unit of analysis. While the analysis will benefit when we can construct these measures using responses from both members of the couple, we will have some ability to create impact measures at the couple level even when one spouse does not respond. For example, in relationship domains such as "marital satisfaction," "positive interactions," or "negative interactions," the items that we are collecting will enable us to construct outcome measures even for couples in which only one member has responded. (Our analysis will then include a control variable for whether the measure was constructed using one or both spouses' responses.) Even for domains that focus primarily on one of the parents, such as "fathers' involvement," we ask both spouses about their perceptions as often as possible, so that if one of them does not respond, we can use the other spouse's response rather than lose that couple entirely from the analysis. For a minority of constructs, and for some of the more exploratory analyses described above, the type of impact being considered may require us to rely solely on one spouse’s response, so that response rates and sample sizes will be limited to those for wives or husbands alone.
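As a purely illustrative sketch (in Python, using hypothetical variable names and data rather than actual SHM measures), the following shows how a couple-level score could be constructed from whichever spouse responses are available, along with the indicator for whether one or both spouses contributed to the measure:

    import numpy as np
    import pandas as pd

    # Hypothetical spouse-level scale scores; NaN indicates a non-responding spouse
    df = pd.DataFrame({
        "couple_id":     [101, 102, 103],
        "husband_score": [3.5, np.nan, 4.0],
        "wife_score":    [3.0, 2.5, np.nan],
    })

    # Couple-level outcome: average of whichever spouse responses are available
    df["couple_score"] = df[["husband_score", "wife_score"]].mean(axis=1)

    # Control variable flagging whether both spouses contributed to the measure
    df["both_responded"] = (
        df[["husband_score", "wife_score"]].notna().all(axis=1).astype(int)
    )

    print(df)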


Question/Comment #5


How will the privacy of study participants’ responses be ensured from their spouses/partners? If the other spouse/partner is potentially listening and in the same room, how will you ensure that the couples are answering truthfully and providing valid/reliable responses? What are implications in situations when there is domestic violence present? What is the potential impact on the response rate?


Response


The following procedures will be implemented to ensure the privacy of study participants during the administration of the survey instrument:


  • Interviewers will ask the study respondents to schedule the interview at a time and place when they expect to be alone. The privacy status of the respondent will again be confirmed at the start of the interview.


  • Self-administered questionnaire techniques and computer-assisted interviewing tools allowing for non-verbal responses will be used to administer sections of the survey that are likely to be most sensitive, so that if anyone else is present they will not be able to hear the respondent’s answers.


  • Respondents are reminded at several points during the process that they do not have to respond to any question, so that if they feel at risk for any reason, they can choose not to respond.


  • If it turns out that others are present at the time of the interview, despite our scheduling efforts, the interviewer will provide study participants with the option of rescheduling their interview or calling a toll-free call-in number to complete the survey at a time that is more convenient for them.


Many of the questions in this data collection, including the sensitive questions, were drawn from existing instruments used successfully with both members of a couple as respondents. One example is the Building Strong Families (BSF) Project 15-month survey (OMB Control # 0970-0304), currently being administered for ACF by Mathematica Policy Research to low-income parents. The BSF survey is being administered in much the same way as is proposed for the SHM 12-month survey: interviewers ask both mothers and fathers in the study to schedule the survey interview at a time when they will have privacy. The BSF 15-month survey is still being fielded at this time; however, Mathematica staff report that they are on target to achieve an 80 percent response rate and have had minimal problems with sensitive questions. To date, the BSF survey team reports non-response of less than 1 percent on sensitive questions such as whether the respondent or partner/spouse has cheated, the perceived likelihood of cheating in the future, whether the respondent has been physically or sexually assaulted by a partner, spouse, or other person, alcohol or drug use, and symptoms of depression.


With regard to truthfulness in responses, we do not expect that the degree of truthfulness will vary between the program and control groups. Therefore, if there is some amount of underreporting in response to specific sensitive questions (e.g., drug use), it will not affect the evaluation of program impacts.
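To make this reasoning concrete, a simple illustration in our own notation (not drawn from the study documents): if both research groups underreport a behavior by roughly the same amount, the underreporting cancels out of the experimental contrast.

    % If reported outcomes understate true outcomes by a common amount \delta
    % in both the program (P) and control (C) groups, the impact estimate is unchanged:
    \hat{\Delta} = \bar{Y}^{reported}_{P} - \bar{Y}^{reported}_{C}
                 = (\bar{Y}^{true}_{P} - \delta) - (\bar{Y}^{true}_{C} - \delta)
                 = \bar{Y}^{true}_{P} - \bar{Y}^{true}_{C}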


Below, we provide additional details about each of the procedures for ensuring the privacy of study participants during the administration of the survey instrument.


Scheduling the Interview with Study Participants. When scheduling the interview, Abt Associates’ interviewers will be instructed to work with study participants to identify a time and place when the participants are most likely to be alone, that is, away from spouses, other household members, relatives, and children, to the extent possible. Administering the survey in private, either over the telephone or in person, helps protect the privacy and accuracy of study participants’ responses.


If, despite our efforts, a telephone or in-person survey is held at a time that someone else is present, interviewers are trained to ask the respondent if it is possible to separate themselves from others in order to complete the survey in private. They are also trained to listen for cues that indicate that it might not be a good time to conduct the interview and that the respondent may not be comfortable answering questions freely. In this situation, the interviewer is trained to offer the respondent the option of rescheduling the interview for a different time or having the respondent call a toll-free number to complete the interview at their convenience.


Survey Administration. The SHM research team, including both MDRC and the survey firm Abt Associates, has for several decades conducted surveys that include sensitive questions on topics such as HIV risk behaviors, illicit drug use, sexual behaviors and activities, and forms of criminal victimization including domestic violence and child abuse. The most important foundation for this type of sensitive data collection is that participants are fully informed about the purpose of the survey and its topics, understand that responding to the survey and to every individual question is completely voluntary, and are assured that their responses will be kept private. Providing this combination of information helps us to achieve high response rates while assuring that respondents never feel pressured to answer any question that they feel places them at risk for any reason.


Sensitive Questions Administered Over the Telephone. Survey interviewers will use computer-assisted interviewing tools to administer the sections of the survey that are likely to be most sensitive. During telephone data collection, we will use technology that allows study participants to select an answer using the telephone keypad instead of communicating their choice verbally. This method has been found to make study participants feel more comfortable answering these questions, without fear that others might overhear their responses, and to increase response rates. These techniques have been used successfully by Abt Associates in past survey efforts to collect data on sensitive topics, such as sexual history information.3 We propose to use this technique for the sections of the survey that deal with physical, verbal, and emotional abuse, as well as the respondents’ and spouses’ infidelity experiences.


Sensitive Questions Administered In Person. We also have a protocol to protect the privacy of the study participants in an in-person interview, though we expect that most of the study participants will be able to complete the survey by telephone. If others are present in the home at the time of an in-person interview, study participants will be given the option to answer the most sensitive questions themselves directly on the interviewer’s computer. Using this method eliminates the possibility of anyone in the house overhearing the responses to these questions and should make the study participant comfortable in answering the sensitive questions accurately.


We have found that many study participants are comfortable using a computer and experience little difficulty using the computer to complete the self-administered portion of the survey. However, if a study participant continues to be uncomfortable with the questions, or has difficulty using the computer, the interviewer will offer to complete the interview at another time, as mentioned above.


Additional Implications if Domestic Violence is Present. As part of the study design, study participants were screened for domestic violence prior to study entry, and couples experiencing serious domestic violence are not enrolled in the study. Therefore, we expect the presence of domestic violence in the household to be rare. We believe the use of non-verbal responses (i.e., entering responses via telephone keypad or computer) and the permission to skip any question that makes someone uncomfortable will encourage participation while ensuring privacy.


Question/Comment #6


We would like to know more about what the 80% expected response rate is based on. That seems a bit high given the burden involved. For example, the CATI/CAPI surveys can take almost two hours if the husband and wife are interviewed back-to-back.


Response


The estimate of an 80% response rate is based primarily on the firms’ prior experience. Both MDRC and Abt Associates have achieved 80% response rates in numerous mixed-mode surveys of low-income populations that are similar in duration to the survey proposed here. Most recently, in MDRC’s Enhanced Services for the Hard-to-Employ project (OMB No: 233-01-0012), two sites that have a parent-child focus are currently fielding a 15-month survey with low-income parents. We are successfully meeting our 80% target for the parent surveys; in cohorts that have finished fielding to date, we have achieved an 82% response rate in the Kansas/Missouri site, and an 87% response rate in the Rhode Island site.


We are also encouraged by our experience as we currently field the Control Group Services survey for the SHM project (OMB control # 0970-0330). Respondents’ reactions to this survey have been quite positive so far, and we are on track to achieve high response rates in that survey.

In addition, as mentioned above, Mathematica Policy Research is conducting a 15-month follow-up survey of similar duration in the Building Strong Families project, which is on target to achieve an 80 percent response rate. Thus, our experience to date suggests that the couples in our sample are likely to respond at the rates predicted.


For the SHM 12 month survey, we plan to treat each husband and wife as separate sample members for data collection purposes. Husbands and wives will most often be interviewed over the phone and on separate occasions rather than back to back. Thus, the fact that we are interviewing both husbands and wives should not affect our response rate negatively. Instead, since this is a sample of married couples, we are likely to have an easier time finding and interviewing the second spouse once we have spoken with the first.


Question/Comment #7


Some parts of the study also seem to presume the presence/availability of a caretaker to look after the children while their parents are participating in the study. Is one being provided or is this being presumed? If the latter, what impact will this have on the response rates?


Response


We assume you are referring to the in-home observational component of the study. In MDRC’s and Abt Associates’ prior experience fielding direct child assessments and parent-child interaction protocols in single-parent households, the presence of other children in the household has not caused problems with response rates. The plan for administering the observational study protocol calls for two interviewers working as a team to be present during the observational study interactions. One interviewer will be responsible for monitoring other children and will provide them with game-like activities (e.g., puzzles, coloring books), while the other interviewer takes the lead in overseeing the administration of the observational study interactions.

 

Question/Comment #8


Please describe for us what the pilot is actually assessing. Please also provide results from the pilot and let us know how the baseline component of the study has been going so far.


Response


The purpose of the pilot phase is to ensure that the sites can fully implement the program and appropriately conduct all required evaluation tasks (baseline data collection, informed consent, etc.) before they move into full-scale program operations and recruit their impact samples. This is particularly important because most of the sites have not operated marriage education programs for low-income couples or participated in a rigorous evaluation prior to this study. They therefore need to demonstrate several aspects of program operation before they move into the “full evaluation” phase of the project. During the pilot period we are assessing the extent to which:

  • they can recruit sufficient numbers of couples to achieve the sample sizes needed for the study

  • they operate all of the components of the program at satisfactory quality

  • they are able to collect the baseline data needed for the study

  • they have instituted the management and supervisory systems needed to operate the program

  • they have instituted fiscal controls appropriate to the level of funding that they are managing


To date, three of the eight sites have moved from the pilot phase into the full evaluation phase of the project. We are continuing to monitor the remaining five sites. The staff in these sites are working to meet expectations and we anticipate that all of them will move into the full evaluation phase by this summer.


The baseline data collection (OMB#0970-0299) is going very well. The SHM baseline survey consists of two primary components: the baseline information form (BIF) and the self-administered questionnaire (SAQ). The BIF contains basic demographic and socio-economic questions that intake staff members complete with each member of the couple. The SAQ asks about marital quality, attitudes toward marriage, stressors, mental health, substance abuse, and childhood abuse and neglect. Both the BIF and the SAQ have high response rates; the BIF has a 100% response rate and the SAQ has a 98% response rate. 



1 Benjamini, Yoav, and Yosef Hochberg, “Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing.” Journal of the Royal Statistical Society, Series B, 57(1): 289-300, 1995.


2 Darlington, Richard B., Regression and Linear Models. New York: McGraw-Hill Publishing Company, 1990.

3 Blumberg, S.J., et al., “The Impact of Touch-Tone Data Entry on Reports of HIV/STD Risk Behaviors in Telephone Interviews.” The Journal of Sex Research, 2003.


