ICARIS-2 Phase-2 Cover Note
The ICARIS-2 Phase-2 survey is methodologically the same as the ICARIS-2 survey (OMB No. 0920-0513) with two exceptions: (1) rather than enumerating the number of adult males, adult females, and children in the household, as was done in ICARIS-2, we ask for the total number of persons in the household, how many are adults, and how many of these adults are men; and (2) we replaced respondent selection using the nearest-birthday method with selection by a random number generator, which randomly selects an adult respondent of the selected gender in the household. The new approach to household enumeration is considered less sensitive because it does not specifically ask about women and children in the household, while the nearest-birthday approach to respondent selection was replaced with one that is considered to be truly random. In addition, with the exception of income, all demographic information was moved to the first module of the survey. Collecting this information up front will allow us to more completely characterize refusals and break-offs that complete the demographic module, as well as include data from partial interviews in our weighted estimates.
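A minimal sketch of the revised selection step, assuming the interviewer has already recorded the household counts and a target gender has been chosen for the household; the `select_adult` helper and its 1-based index convention are illustrative, not part of the survey instrument:

```python
import random

def select_adult(n_adults: int, n_men: int, target_gender: str,
                 rng: random.Random) -> int:
    """Return a 1-based index identifying which adult of the target
    gender to interview (e.g., "the 2nd adult woman in the household").
    Hypothetical helper illustrating random-number-based selection."""
    n_in_gender = n_men if target_gender == "male" else n_adults - n_men
    if n_in_gender == 0:
        raise ValueError("no adults of the selected gender in household")
    # A uniform random draw replaces the nearest-birthday rule.
    return rng.randint(1, n_in_gender)

# Example: household with 3 adults, 1 of them a man; target gender female.
rng = random.Random(7)
idx = select_adult(n_adults=3, n_men=1, target_gender="female", rng=rng)
```

The returned index would be mapped to a specific person by a fixed convention (for instance, adults of that gender ordered by age), so the draw depends only on the random number rather than on birthdays.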
The ICARIS-2 Phase-2 OMB package was initially submitted in September of 2003. The submission was suspended in January of 2004 due to problems with the interviewing scheduler process for the ICARIS-2 (Phase-1) survey. The problem was discovered some time after completion of the Phase-1 component of the survey in February of 2003. The suspension was based on the rationale that Phase-2 could not begin until problems with the administration of the initial ICARIS-2 survey had been resolved.
At the time of suspension, the OMB package for ICARIS-2 Phase-2 was close to being cleared. Below are some of the more substantive OMB comments made on the initial submission that are still relevant to this updated submission, along with NCIPC’s responses to these comments.
OMB Comments on Second Injury Control and Risk Survey, Phase 2
Supporting statement B3
P.19: Will any analysis be conducted to look at potential non-response bias?
First, we will compare the demographic distribution of our sample, prior to post-stratification, with that of the U.S. population to identify any subgroups in which non-response may be problematic in the survey sample.
Second, in order to reduce the possible biasing effects of non-responding persons or households in our sample and to make the sample more closely approximate the U.S. population on selected characteristics, we will compute post-stratification weight adjustments using the most recent population source available.
We will compare estimates for selected questions from ICARIS-2 Phase-2 with estimates for the same questions asked in other well known surveys (see OMB Supporting Statement, Attachment #4, Sources of Study Questions).
Finally, using a detailed call history for each number dialed, we will compare responses of those participants who responded after refusal conversion with those of the remaining respondents on selected demographic and other key variables; the rationale being that respondents obtained via refusal conversion will serve as a proxy for non-respondents. Census-tract information for non-respondents, respondents, and refusal conversion respondents will also be assessed for similarities and differences.
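The post-stratification adjustment described in the second step can be sketched as a simple ratio of population share to unweighted sample share within each adjustment cell; the cell labels and counts below are illustrative, not actual benchmarks:

```python
def poststrat_adjustments(sample_counts, pop_shares):
    """Ratio adjustment per post-stratification cell: the cell's share
    in the benchmark population divided by its (unweighted) share in
    the completed sample.  A hypothetical helper, not the survey's
    actual weighting specification."""
    n = sum(sample_counts.values())
    return {cell: pop_shares[cell] / (sample_counts[cell] / n)
            for cell in sample_counts}

# Example: men are under-represented in the sample relative to the
# population benchmark, so their responses are weighted up.
adj = poststrat_adjustments({"men": 400, "women": 600},
                            {"men": 0.49, "women": 0.51})
```

In practice each respondent's final weight would be the product of a base (selection-probability) weight and the adjustment for the cell the respondent falls in.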
… cash payments tend to be more effective in raising response rates. Were there specific reasons for choosing a phone card and charitable contribution instead?
We are not aware of any work that has directly compared the effect of small cash payments vs. in-kind payments (such as phone cards or gift certificates) vs. charitable contributions. In ICARIS-2 Phase-1, more than 70% of respondents chose charitable contributions over the phone card. Further, cash payments would have to be made in the form of a check or money order, which would need to be personalized. This would require us to have the respondent’s name, thus affecting confidentiality. This problem is avoided by using telephone cards, because these can be mailed to “resident” at any address.
Willingness to Pay to Prevent Child Maltreatment
How do these questions relate to surveillance and prevention of injury?
A key priority in the research agenda established by CDC’s Injury Center is “Evaluating the efficacy and effectiveness of interventions and policies to prevent perpetration of child maltreatment.” A methodologically sound public health approach to preventing child maltreatment necessarily includes evaluating prevention programs for efficacy and effectiveness before these interventions are widely implemented and disseminated in our communities. Evaluation efforts at this level are now standard in public health and are increasingly applied to child maltreatment. The first step in evaluating a program is efficacy research in a somewhat controlled or limited setting, followed by evaluation of the program as delivered in a less controlled, real-world situation for effectiveness. A third type of evaluation, assessing a program for efficiency, is often missing from the process. Evaluation that focuses on outcome-based efficiency uses the tools of economic evaluation, including cost analysis, cost-benefit analysis, and cost-effectiveness analysis. These tools provide methods for assessing program costs, estimating the cost-of-illness and productivity losses averted because of the intervention, valuing improvements in length and quality of life, quantifying the pain and suffering caused by maltreatment, and directly comparing costs to benefits. The questions in this section are designed to estimate the benefits measure for use in cost-benefit analyses of child maltreatment prevention programs.
How will the information collected from the survey be used to measure willingness to pay (WTP)? What procedures/methods will you use to estimate WTP?
We will estimate WTP using the maximum-likelihood method for a standard double-bounded, or interval-data, model (Alberini, JEEM 1995). The implicit assumption is that one underlying WTP value drives the responses to both dichotomous-choice questions. If this is true, then the second question provides a tighter interval around the true WTP value and the maximum-likelihood optimization model is appropriate. We will test the sensitivity of our results to “yea-saying” and other follow-up effects by estimating an alternative, single-bounded model using only the response to the first question. In addition, we will test the effect of alternative distributional assumptions (lognormal and Weibull), with and without socio-demographic and other variables of interest (e.g., confidence in response).
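A minimal sketch of the double-bounded estimator under one of the candidate distributional assumptions (lognormal WTP); the bid amounts, sample size, and simulated responses are illustrative stand-ins for real survey data, and the follow-up rule (double the bid after a “yes”, halve it after a “no”) is an assumption:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, b1, b2, yes1, yes2):
    """Negative log-likelihood for double-bounded (interval) data,
    assuming log(WTP) ~ Normal(mu, sigma)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                   # keep sigma positive
    F1 = norm.cdf((np.log(b1) - mu) / sigma)    # P(WTP < first bid)
    F2 = norm.cdf((np.log(b2) - mu) / sigma)    # P(WTP < follow-up bid)
    # yes/yes: WTP >= b2 ; yes/no: b1 <= WTP < b2  (b2 > b1 after a "yes")
    # no/yes:  b2 <= WTP < b1 ; no/no: WTP < b2    (b2 < b1 after a "no")
    p = np.where(yes1 & yes2, 1.0 - F2,
        np.where(yes1 & ~yes2, F2 - F1,
        np.where(~yes1 & yes2, F1 - F2, F2)))
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

# Simulated responses standing in for survey data (median WTP = $150).
rng = np.random.default_rng(0)
n = 2000
true_mu, true_sigma = np.log(150.0), 0.8
wtp = np.exp(rng.normal(true_mu, true_sigma, n))
b1 = rng.choice([50.0, 100.0, 200.0], n)        # illustrative anchor bids
yes1 = wtp >= b1
b2 = np.where(yes1, 2 * b1, 0.5 * b1)           # assumed follow-up rule
yes2 = wtp >= b2

res = minimize(neg_loglik, x0=[np.log(100.0), 0.0],
               args=(b1, b2, yes1, yes2), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
median_wtp = np.exp(mu_hat)                     # median of a lognormal
```

The single-bounded sensitivity check described above would drop `b2`/`yes2` and use only the first-bid outcome, with probabilities `1 - F1` for “yes” and `F1` for “no”.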
How will the estimate of WTP be used?
We will obtain estimates of WTP that are meaningful for future cost-benefit analyses of child maltreatment prevention interventions.
We have serious concerns about whether these questions will yield meaningful estimates of willingness to pay.
The categories of child maltreatment (physical abuse, sexual abuse, neglect) are not very well defined. Unlike a well-defined event such as death, there are large variations in the severity of maltreatment within these categories.
We have defined child maltreatment based on the U.S. Department of Health and Human Services “Child Maltreatment” report, which is published annually. No standard definition of child maltreatment exists, and severity is not specified. Even state guidelines for prosecuting child maltreatment cases define severity vaguely, if at all. Further, program goals for preventing child maltreatment refer to cases prevented, not to the severity of abuse prevented. While respondents’ perceptions of the severity of abuse may differ, we plan to use both the median and mean WTP estimates to account for this variability.
How will question WP4 be used in the analysis? Shouldn't a reminder of the respondent's budget constraint and what they may be giving up to pay for the program come before asking the WTP questions?
Based on the above comment, we modified WP2 to remind respondents about their budget constraint. The question now reads: “If this program were available to your state, would you be willing to pay ($X) in extra taxes per year to sponsor this program given your household income and other expenses?”
The intent of WP4 is to conduct a sensitivity analysis on the robustness of the WTP responses. Our hypothesis is that the confidence interval around the mean WTP estimate will be greater for those who are less confident in their response, compared to those who are most confident in their response.
How will respondents be assigned to anchoring values? Are the values associated with the level of risk reduction or assigned at random? How were the anchor values chosen?
Respondents will be assigned to the initial bid values at random. Bid values are not associated with the level of risk reduction. These values were chosen as reasonable after consulting with the child maltreatment working group at CDC and then running the anchor points through two rounds of cognitive testing.
How will the analysis check for validity of these estimates?
Because we have two levels of risk reduction per type of child maltreatment category, we will be able to check validity by conducting a scope test. The scope test, at a minimum, should show that WTP increases with increases in risk reduction. Other validity tests include checking for starting-point bias, non-response bias, and income effects.
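One way such a scope test could be run is to compare the share of “yes” responses at the same initial bid under the two risk-reduction levels; the counts below are made up for illustration, and the chi-square test is one reasonable choice rather than the survey’s stated analysis plan:

```python
from scipy.stats import chi2_contingency

# Illustrative counts of first-bid responses at the SAME bid amount,
# under the smaller and larger risk reductions (not real survey data).
#                  yes  no
small_reduction = [55, 45]
large_reduction = [70, 30]

chi2, p_value, dof, _ = chi2_contingency([small_reduction, large_reduction])
share_small = small_reduction[0] / sum(small_reduction)
share_large = large_reduction[0] / sum(large_reduction)
passes_scope = share_large > share_small   # WTP should rise with scope
```

A sample passing the scope test shows a higher acceptance rate (and hence higher implied WTP) for the larger risk reduction, with the p-value indicating whether the difference exceeds sampling noise.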
Has this specific set of questions been pre-tested?
At the time of the first OMB submission, pre-testing of questions had taken place and cognitive testing was being planned. Since that time, all questions in the survey went through two rounds of cognitive testing. Each round of cognitive testing was conducted in two phases, to provide the opportunity for minor “tweaking” within a round. Based on results of the first round of cognitive testing, provided to us in February of 2005 by the contractor, we made modifications to the survey instrument (primarily in the Child Supervision and Willingness to Pay modules, as well as some changes to the Screener Script). We went through a second round of cognitive testing and results were provided to us by the contractor in August of 2005. The amended IRB package was then submitted in September of 2005.
File Type: application/msword
File Title: Comments on Second Injury Control and Risk Survey, Phase 2
Author: Fumie Yokota
Last Modified By: gzk8
File Modified: 2006-07-13
File Created: 2006-04-14