Measuring Preferences for Quality of Life for Child Maltreatment

OMB: 0920-0930







Measuring Preferences for Quality of Life for Child Maltreatment




Supporting Statement B



29 March 2012





Department of Health and Human Services

Centers for Disease Control and Prevention

National Center for Injury Prevention and Control

Division of Violence Prevention



Project Officer: Sarah Beth Link, MA

4770 Buford Highway, NE, Mailstop F-64

Atlanta, GA 30341

Telephone: (770) 488-3969

Fax: (770) 488-1360

Electronic Mail: [email protected]



B. Collections of Information Employing Statistical Methods


B.1 Respondent Universe and Sampling Methods

The respondent universe for this exploratory research study is men and women living in the United States ages 18 and older. This respondent universe consists of potentially 235 million U.S. adults, of which 2255 will be drawn as a national sample to achieve 1850 completed survey instruments. Although we will draw a national sample, this exploratory research is effectively a pilot study seeking to advance the scientific literature in both the methods and the subject where applied. The survey sample for this exploratory study will be drawn by Knowledge Networks (KN) from its national panel, the KnowledgePanel®. KN is a subcontractor to RTI International, which is CDC’s contractor for this study. The current KnowledgePanel consists of more than 45,000 adults who complete a few surveys per month while they remain in the KnowledgePanel. KN’s panel is scientifically recruited and maintained to track closely to the U.S. population in terms of age, race, Hispanic ethnicity, geographical region, employment status, and other demographic elements. The KnowledgePanel has been previously approved by OMB for many public health research applications. (For example, the Evaluation of NIAID’s HIV Vaccine Education Initiative (OMB approval # 0925-0618) and a survey for the Food and Drug Administration of its Toll Free Number for Reporting Drug Side Effects (OMB approval # 0910-0652).) KnowledgePanel is drawn by a combination of probability-based random digit dialing (RDD) and address-based sampling but does not require internet access or computing hardware at the time of sampling. Households without internet access or computers are provided internet connectivity and personal computers. KN maintains basic demographic data on this population and, for this study, will limit survey invitations to persons ages 18 and older. Panel demographic information will be used by KN to generate survey weights and design variables for statistical adjustments for nonresponse in data analysis.

Measuring the HRQOL burden of child maltreatment (CM) across the lifecycle requires measuring both HRQOL and CM history in order to statistically estimate the difference in HRQOL between CM victims and non-maltreated individuals. The overall objective of this study is to measure the HRQOL impacts of any of five types of CM (physical abuse, sexual abuse, physical neglect, emotional neglect, and emotional abuse). Our study is designed to measure HRQOL impacts occurring at two different time periods, both as a result of CM: the HRQOL impact in childhood and adolescence (ages 5-17), and the HRQOL impact in adulthood (ages 18+) (Afifi et al. 2007).

The HRQOL impact at these two different ages will be assessed using two national samples, which together total n=1850 respondents. The impact in childhood and adolescence will be assessed from a national sample of persons ages 18-29, using questions which ask respondents to recall their HRQOL experiences during ages 5-17; the impact in adulthood will be assessed from a national sample of all U.S. adults, ages 18 and older, using questions which ask respondents to report their current HRQOL experiences. Both samples will complete both types of questions; however, the recall reporting is experimental among the all-adult sample, as our pretests and focus groups suggest greater salience among the ages 18-29 sample.

CM history will be measured for all respondents, along with the type of abuse, using the Childhood Trauma Questionnaire (CTQ). Corso et al. (2008) found that among respondents to Wave 2 of the Adverse Childhood Experiences (ACE) Study, the lifetime prevalence of any of the same five types of CM was 45.6%. (By type, the prevalence was 26% for physical abuse, 21% for sexual abuse, 14% for emotional neglect, 10% for emotional abuse, and 9% for physical neglect.) These rates are comparable to those found in other national surveys (e.g., Hussey et al., 2006; Felitti et al., 1998). Based on the lifetime prevalence of CM of 45.6%, we estimate that this study will need a total of n=1850 respondents (n=750 ages 18-29 and n=1100 ages 18+) to achieve at least 0.80 power if the estimated utility differences are much lower than expected or if the prevalence is lower than expected. (See Section B.2 for further details.) The expected power based on mean parameters from prior studies exceeds 0.90. The secondary research question of eliciting preferences over health states (to convert the HRQOL impacts into standardized economic utility values) is powered based on a minimum sample size of n=781. The 1,850 respondents exceed this in total to allow for investigation of all research objectives in this exploratory research study.

To meet this total, we will collect data from a random sample of n=1100 respondents ages 18 and older and another random sample of n=750 respondents ages 18-29. The numbers of respondents in the sample and expected response rates are included in Table B.1-1.



Table B.1-1: Expected Sample Size, Acceptance and Response Rates

Survey Component | Description | Number of respondents per survey component
Invitation from KN | Panel members who receive an initial invitation email to take the survey | 2255 (1341 ages 18+, 914 ages 18-29)
IRB Consent | Panel members who accept the invitation, screen in as eligible adults, and consent | 1917 (1140 ages 18+, 777 ages 18-29); expected response rate: 85%
Survey Instrument | Panel members who screen in, consent, and complete the survey | 1850 (1100 ages 18+, 750 ages 18-29); overall response rate: 82%


B.2 Procedures for the Collection of Information

Procedures

This is a one-time data collection: respondents will not be re-contacted. No stratification methods are required or will be employed in this data collection, as we simply seek a national sample of respondents to test these exploratory methods for measuring the HRQOL burden of CM. Two samples will be drawn by KN, one of ages 18-29 and another of ages 18 and older, without replacement, so that no respondents overlap between the two groups. No unusual problems requiring specialized sampling procedures are anticipated.

The design for this survey was modeled on previous survey research and economic studies led by Dr. Derek Brown of RTI, including an OMB-approved stated-preference survey of sedentary older adults drawn from an alternative online panel (OMB approval # 0920-0720). KN performs a number of internal quality assurance tasks to ensure proper coding of the survey and data reliability. First, the questionnaire programming code is tested and reviewed by a KN Project Manager and then rigorously examined through a data correspondence check to verify 100% input/output correspondence. Second, RTI and CDC will review the survey instrument in an online format to ensure proper presentation and programming of all technical aspects. Fielding does not commence until CDC and RTI give final approval. Next, KN will begin fielding with a “soft start” in which a small number of survey invitations are sent to obtain roughly 50 completed observations; immediately after the soft start is completed, KN will send an encrypted, de-identified data file to RTI for preliminary analysis as a further quality control check (as described in Section A.10B). Any issues identified for correction by CDC or RTI will be addressed before KN begins the full fielding. (Note: these soft-start data are included as completes in the response rates in Table B.1-1.) After fielding, the final data file is generated following strict quality control procedures at KN, including review by multiple supervisors and random case-level checks to ensure proper merging and formatting. Again, KN will de-identify and encrypt the data before final delivery to RTI.

Participants are permitted to complete the survey only once, since each respondent has a unique code. Survey invitations will be sent by KN to a sample of U.S. adults ages 18 and older from its standing panel (Attachment D). A respondent’s initial log-in directs to an IRB-approved online consent form (Attachment F), which provides general information about the study and any possible risks. To participate in the study, respondents must click a box to indicate that they have read the information, confirm that they are 18 years of age or older, and that they voluntarily consent to participate in the study; otherwise, they may decline to participate by clicking a “do not consent” box or by simply closing the consent screen.

Consenting participants will begin the survey on a short introduction page, and then proceed through several questions on aspects of health and quality of life. In addition to current health status, participants will answer questions about their health status during adolescence (ages 12-17) and childhood (ages 5-11). These questions follow best practices for assessing health-related quality of life (HRQOL) (e.g., Juniper et al., 1996).

Next, to facilitate the estimation of preference-based health-state utilities, the survey will present a series of stated-preference comparisons, or discrete choice experiments (DCEs), which incorporate time trade-off (TTO) methods. The DCE questions ask the respondent to select the most preferred of two hypothetical health states and have been designed following methods in Flynn et al. (2010), Hauber et al. (2010), Bijlenga et al. (2009), and Ratcliffe et al. (2009). The health states in the DCE questions are defined by the aspects of HRQOL shown in the first part of the survey; they will be randomly drawn for each respondent from a D-efficient, fractional factorial, orthogonal design based on Johnson et al. (2007). After data collection, we will statistically estimate conditional and mixed logit choice models based on each respondent’s stated choices. A mixed logit model will be used for the final results after conditional logit is used in initial data analysis. DCE papers today continue to report both models, although the mixed logit is preferred for its superior econometric properties (Bridges et al. 2011).
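With two alternatives per task, the conditional logit model reduces to a binary logit on the attribute differences between the paired health states. The following sketch illustrates that estimation with simulated data; the number of attributes, coefficient values, and sample size are made-up placeholders, not the study's actual design:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n_pairs = 5000     # simulated paired-comparison tasks
k = 3              # hypothetical number of attribute columns
true_beta = np.array([0.8, -0.5, 0.3])

# Attribute differences (option A minus option B) for each pair
x_diff = rng.normal(size=(n_pairs, k))
# Simulated choices: P(choose A) = logistic(x_diff @ beta)
y = rng.random(n_pairs) < expit(x_diff @ true_beta)

def neg_loglik(beta):
    v = x_diff @ beta
    # Binary conditional logit log-likelihood for paired comparisons
    return -(np.sum(v[y]) - np.sum(np.logaddexp(0.0, v)))

def neg_score(beta):
    # Analytic gradient: -X'(y - p)
    p = expit(x_diff @ beta)
    return -x_diff.T @ (y.astype(float) - p)

res = minimize(neg_loglik, np.zeros(k), jac=neg_score, method="BFGS")
beta_hat = res.x   # recovers true_beta, up to sampling error
```

The mixed logit step adds respondent-level random coefficients on top of this likelihood; specialized routines (e.g., simulated maximum likelihood in commonly used econometrics packages) are typically employed for that estimation.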

The parameter estimates from the mixed logit model will be used to predict the utility of each health state on a 0.0-1.0 scale. This procedure follows Ratcliffe et al. (2009) and Brazier et al. (2002), a seminal article that estimated preference weights and a 0.0-1.0 utility score from 6 HRQOL questions from the SF-36, an algorithm now known as the SF-6D. In these choice models, the respondent’s choice is the dependent variable, and the amount of time traded off (TTO method) enters as an attribute, or independent variable. Other independent variables are indicators for each health-state domain and all levels within these domains, plus possible interactions (minus omitted categories for statistical identification). To convert to the 0.0-1.0 scale, the time trade-off is used as a numeraire, with preferences over other attributes measured relative to changes in the numeraire. In summary, the DCE data from this section of the survey provide a preference-based elicitation of health-state utilities that scales the HRQOL questions from the first section onto a 0.0-1.0 range.
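As an illustrative sketch of this anchoring step (the notation here is ours; the exact specification follows the cited papers): if the estimated value of a health profile h experienced for T years is

    v(h, T) = β_T · T + Σ_d β_d · x_d(h),

then the time attribute serves as the numeraire, and one common normalization in this literature anchors the utility of state h as

    u(h) = 1 + ( Σ_d β_d · x_d(h) ) / β_T,

so that full health (all domains at their reference levels) scores 1.0 and worse states score lower on the 0.0-1.0 scale.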

The final section of the survey (Attachment E) consists of a short series of socio-demographic questions, the Childhood Trauma Questionnaire (CTQ), and selected questions on other childhood experiences. The CTQ is a validated 28-item instrument (see “Childhood Experiences” section on Attachment E) that is used by researchers to determine if adult respondents experienced childhood emotional, physical, or sexual abuse, and emotional or physical neglect (Bernstein, 2003). The CTQ is the briefest and most nonintrusive instrument accepted by CM researchers for assessing child maltreatment history, and it has also been used by other researchers in similar survey formats (e.g., Stolz et al. 2007). The final questions on other childhood experiences are items from the Health and Retirement Study (HRS) (http://hrsonline.isr.umich.edu/), the National Longitudinal Study of Adolescent Health (Add Health) (http://www.cpc.unc.edu/projects/addhealth), and the Adverse Childhood Experiences (ACE) Study (Felitti, 1998). These items will be used as control variables for multivariate regression analysis of the HRQOL impacts of CM, similar to the analysis in Corso et al. (2008). Specifically, the predicted health-state utility for a respondent (the 0.0-1.0 score, as assessed by the part 1 HRQOL questions and scaled using the health-state utility algorithm based on the DCE in part 2) will be modeled as a function of maltreatment history (from the CTQ), sociodemographic variables, and other childhood experiences. The latter are required since child maltreatment may co-occur with other types of adverse experiences, such as poverty, instability, or substance abuse in a child’s family or household.


Sample Size and Power Calculations

Two statistical results are relevant to the power calculations: (1) the estimation of preference-based health-state utility weights based on the DCE paired comparisons, and (2) the estimation of the difference in actual HRQOL utility scores, based on the aforementioned utility weights and on responses to the HRQOL questions on the first section of the survey. The difference in HRQOL scores (2) will be assessed separately for its impact in childhood and adolescence (ages 5-17) and adulthood. Childhood impacts will be measured by HRQOL recall questions reported by the n=750 sample ages 18-29, and adulthood impacts will be measured by current HRQOL reported by the n=1100 sample. All these analyses must be powered, so we discuss each briefly below.

First, the estimation of health-state utility weights is a secondary output for this study, but it is required to generate the main study result, discussed below. Power calculations for DCE estimates vary by a number of factors (Orme 2006; Bridges et al. 2011), including the number of tasks shown to each respondent (t), the number of alternatives shown in each task (a), the maximum number of attribute levels (c) among the HRQOL domains, and whether a linear main-effects design is estimated or a design with partial or complete interactions (Orme 2006; Flynn et al. 2010). We will set a=2 for paired comparisons and will collect 8 tasks per respondent (t=8). The HRQOL domains are scored on the survey instrument (Attachment E) on a 5-point scale, so c=5. Orme (2006) shows that a robust, powered DCE study requires a sample size n such that (nta/c) >= 500 for a main-effects design or (nta/c²) >= 500 for a fully-interacted model. Thus, n must be >= 156 for a main-effects design or >= 781 for a fully-interacted design. We will estimate a mixed logit model where only one attribute is interacted with the others (length of life with the other HRQOL domains), so n >= 781 represents a conservative estimate of the required sample; moreover, research recently presented at a conjoint analysis conference (Yang et al., 2010) showed that the Orme rule was conservative in 26 of 28 studies examined. The DCE tasks will be completed by all respondents (n=1850). This clearly exceeds the n=781 minimum recommendation for powering the health-state valuations.
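The rule-of-thumb arithmetic can be checked directly with this study's parameters; the minimums quoted above (156 and 781) are these quotients rounded to whole respondents:

```python
# Orme (2006) rule-of-thumb minimum sample sizes for a DCE,
# using this study's design parameters
t = 8   # tasks per respondent
a = 2   # alternatives per task
c = 5   # maximum number of attribute levels

n_main_effects = 500 * c / (t * a)      # from (n*t*a)/c   >= 500
n_interactions = 500 * c**2 / (t * a)   # from (n*t*a)/c^2 >= 500
```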

The primary statistical result to be reported is the difference in mean HRQOL scores (a 0.0-1.0 utility) between CM victims and non-maltreated individuals. The study has been designed with sufficient power (at least 0.8) to detect a difference in health-state utilities, measured on a 0.0-1.0 scale, for both childhood and adolescent impacts (ages 5-17) and adulthood impacts. Childhood impacts will be measured by HRQOL recall questions reported by the n=750 sample ages 18-29, and adulthood impacts will be measured by current HRQOL reported by the n=1100 sample. These sample sizes are based on power calculations for estimating mean differences in health-state utilities (Table B.2-1) between CM victims and non-maltreated individuals, using a 2-sided t-test, α=.05, and σ=0.126 for the standard deviation of the health-state utilities (based on unpublished data from Corso et al. 2008). Fixing these parameters, we vary both the difference in utilities and the prevalence of abuse, using data from Corso et al. (2008), the only published study that has estimated the HRQOL utility difference associated with CM. The mean utility difference for the ages 18+ sample is assumed to equal that for the same ages in Corso et al. (2008), 0.028; we also use the 5% and 95% confidence interval values, 0.022 and 0.034. For ages 18-29, we use the closest group in Corso et al. (2008), ages 19-39, with a mean of 0.042 (95% confidence interval, 0.027-0.056). For both samples, we assume the same average adulthood prevalence of CM, 45.6%, and use 75% and 125% multiples for sensitivity analysis; confidence intervals and prevalence by age are not reported in Corso et al. (2008).

Based on these inputs, power calculations for different types of maltreatment are shown in Table B.2-1. At the mean utility difference and average prevalence, power is 0.9562 for the n=1100 all-adult sample and power is 0.9951 for the n=750 ages 18-29 sample. In a “worst case” of the 5% confidence interval for mean utilities and finding 75% of the observed prevalence (45.6*0.75=0.342), power just meets a 0.80 threshold.
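These power figures can be approximated independently with a normal-approximation calculation; the sketch below is illustrative (the reported values were computed with a t-test, so small differences in the third decimal are expected):

```python
import numpy as np
from scipy.stats import norm

def power_two_sample(diff, sd, n_total, prevalence, alpha=0.05):
    """Approximate power of a two-sided two-sample test for a mean
    difference, using the normal approximation to the t-test."""
    n1 = prevalence * n_total          # expected CM victims
    n2 = (1 - prevalence) * n_total    # expected non-maltreated
    ncp = (diff / sd) * np.sqrt(n1 * n2 / (n1 + n2))
    z = norm.ppf(1 - alpha / 2)
    return float(norm.cdf(ncp - z) + norm.cdf(-ncp - z))

# All-adult sample at the mean utility difference and mean prevalence
p_adult = power_two_sample(diff=0.028, sd=0.126, n_total=1100, prevalence=0.456)
# Ages 18-29 sample at the mean utility difference and mean prevalence
p_young = power_two_sample(diff=0.042, sd=0.126, n_total=750, prevalence=0.456)
```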


Table B.2-1: Power for Difference in HRQOL Scores, Varying Utility and Prevalence Assumptions

Analysis sample | HRQOL difference | Prevalence of CM | n | Power
All adults, ages 18+ | 0.028 (mean) | 0.456 (mean) | 1100 | 0.9562
All adults, ages 18+ | 0.022 (5%) | 0.456 (mean) | 1100 | 0.8217
All adults, ages 18+ | 0.034 (95%) | 0.456 (mean) | 1100 | 0.9937
All adults, ages 18+ | 0.028 (mean) | 0.342 (75% of mean) | 1100 | 0.9373
All adults, ages 18+ | 0.022 (5%) | 0.342 (75% of mean) | 1100 | 0.8036
All adults, ages 18+ | 0.034 (95%) | 0.342 (75% of mean) | 1100 | 0.9887
All adults, ages 18+ | 0.028 (mean) | 0.570 (125% of mean) | 1100 | 0.9541
All adults, ages 18+ | 0.022 (5%) | 0.570 (125% of mean) | 1100 | 0.8171
All adults, ages 18+ | 0.034 (95%) | 0.570 (125% of mean) | 1100 | 0.9932
Adults ages 18-29 | 0.042 (mean) | 0.456 (mean) | 750 | 0.9951
Adults ages 18-29 | 0.027 (5%) | 0.456 (mean) | 750 | 0.8313
Adults ages 18-29 | 0.056 (95%) | 0.456 (mean) | 750 | 0.9999
Adults ages 18-29 | 0.042 (mean) | 0.342 (75% of mean) | 750 | 0.9911
Adults ages 18-29 | 0.027 (5%) | 0.342 (75% of mean) | 750 | 0.8046
Adults ages 18-29 | 0.056 (95%) | 0.342 (75% of mean) | 750 | 0.9999
Adults ages 18-29 | 0.042 (mean) | 0.570 (125% of mean) | 750 | 0.9947
Adults ages 18-29 | 0.027 (5%) | 0.570 (125% of mean) | 750 | 0.8273
Adults ages 18-29 | 0.056 (95%) | 0.570 (125% of mean) | 750 | 0.9999


Note that the power estimates in Table B.2-1 may be conservative for two reasons. The mean utility difference in Corso et al.’s (2008) results likely underestimates the HRQOL utility difference (Prosser and Corso 2007), since it is based on the SF-36, a “generic” HRQOL instrument which does not capture all of the relevant domains of CM impacts. In contrast, our exploratory study will collect data for newly developed HRQOL questions, which are designed specifically to capture impacts from CM; thus, our instrument should be a more sensitive measure and may detect larger utility differences. Additionally, Corso et al. (2008) reports only the difference in HRQOL utility among adults, while our exploratory study will include a retrospective assessment of HRQOL during childhood as well as during adult (current) ages. If the HRQOL impacts of CM fade as a victim ages, as suggested in some literature (Prosser and Corso 2007), then we may again find larger differences than in Corso et al. (2008).

In summary, both the health-state utility difference (0.0-1.0 score) and the preference-based valuation of health states from the DCE questions will be more than adequately powered with our sample of n=1850 adult respondents from the general public.


B.3 Methods to Maximize Response Rates and Deal with Nonresponse

The survey and the data collection methods have been designed by CDC, RTI, and Knowledge Networks to minimize non-response bias. We fully anticipate meeting or exceeding an 80% response rate for this exploratory study, as described below. First, we describe methods to reduce non-response bias within the KN panel. Second, we describe our efforts to measure and detect non-response bias within the KN panel, and how the extent of any such bias will be reported. Third, we provide estimates of comparable response rates drawn from a similar sampling frame on health topics. Finally, we consider another source of non-response bias, arising when sampled individuals decline to join the KN panel; we will take two approaches to assessing this form of non-response bias.


Methods to reduce non-response bias (within the KN panel). The following steps have been undertaken in the survey and sampling design to minimize non-response bias and to ensure high response rates:

  • Pretesting. The survey has been carefully designed and pretested to ensure the best possible respondent experience. The research team has extensively evaluated the survey to improve the questionnaire and the online survey experience. The survey was also pretested with n=9 individuals from the general public in May 2011 and additional edits were made to further improve the survey and maximize response rates. (A second round of pretests, reflecting changes to the instrument discussed with OMB, is being held during March 2012.)

  • Sensitivity to human subjects and topic. Although this survey does include questions on child maltreatment (through the validated Childhood Trauma Questionnaire (CTQ)), these have been reviewed by an approved Institutional Review Board (IRB), as described in the full study packet. During the IRB review process, several enhancements to reduce sensitivity for respondents were identified and made. For example, the CTQ questions come after a soft “notice” screen that alerts respondents that they are entering a more sensitive section of the survey. By building up to these questions and being forthright, we believe that respondents will be prepared and will not terminate the survey at unusual rates.

  • Limited length. The research team has scrutinized all items and reduced the survey to the minimum length possible in order to reduce respondent burden and maximize response and completion rates.

  • Additional modest incentives. Although each KN survey provides “points” for panel members, a bonus honorarium of 7,500 KN “points” (equivalent to $7.50 cash) is being provided for this study.

  • Reminders. Two email reminders will be sent to non-responders a few days after the initial survey invitation, followed by a telephone reminder, if necessary.

  • Toll-free numbers. KN will provide toll-free telephone numbers in the survey invitation and welcome screen for potential or enrolled respondents to call with any questions or concerns about any aspect of the study. RTI will also provide a toll-free telephone number for participants who have any questions about the study or their rights as a study participant.

  • KN’s national panel has very high response rates on average. The sample will be drawn by KN from its standing national panel (KnowledgePanel). As outlined above, these approximately 45,000 adults have previously been contacted regarding ongoing participation in studies performed by KN. All individuals complete a few surveys per month while they remain in the KnowledgePanel. Thus, they expect survey invitations with some frequency and respond at very high rates on average.


Methods to detect and report on non-response bias (within the KN panel). Our first priority in dealing with non-response is to minimize it by ensuring high response rates using the methods described above. However, some degree of non-response is likely unavoidable despite the best efforts of any survey methodologist, so we briefly describe our efforts to identify and report its potential impact on our results.

Non-response bias may affect the results in ways that are easily observable, such as differences in the Current Population Survey (CPS) characteristics described above under weighting, and in less obvious ways, such as differences in child maltreatment rates among respondents relative to non-respondents (data that are not part of the CPS). The latter type of bias is more difficult to detect since the potential answers of non-respondents are not known from CPS data. However, KN maintains a range of “profile” data for all of its panel members on topics including, and going beyond, CPS questions. Thus, we have much more information about non-respondents than would be the case if the entire sampling frame were new for this study. To analyze this, KN’s deliverable to CDC and RTI will include profile data for all sampled individuals (non-respondents and respondents). The specific elements that will be provided are:

  • Date & time survey started, completed, and total duration in minutes

  • Age (integer)

  • Education (highest degree received, 14 categories)

  • Race and Hispanic ethnicity

  • Gender

  • Household head status of respondent (yes/no)

  • Household size (integer, number of members)

  • Housing type (5 categories)

  • Household income (19 categories)

  • Marital status (6 categories)

  • MSA status (2 categories, MSA/metro or non-MSA/metro)

  • Ownership status of living quarters (3 categories)

  • State

  • Current employment status (7 categories)

  • Internet access (KN provided, yes/no)


After data collection, we will conduct and report descriptive analyses of these variables and basic statistical tests of differences in proportions and means (e.g., chi-square tests, t-tests). These will be included in the final report to CDC as well as in appendices to academic research papers which are submitted for publication.
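As a minimal sketch of one such comparison, the chi-square test below uses hypothetical counts for a single three-category profile variable; the actual analysis will cover all the profile variables listed above:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of a 3-category profile variable (e.g. income bands),
# cross-tabulated by response status
table = np.array([[520, 410, 170],   # respondents
                  [130, 120,  55]])  # non-respondents
chi2, p, dof, expected = chi2_contingency(table)
# A large p-value would suggest no detectable non-response
# bias on this variable; a small one would flag it for reporting
```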

Additionally, we will use the responses collected from the CTQ to compute the prevalence rates of history of child maltreatment for any type of maltreatment, and separately by type of abuse, for respondents. These will be compared overall, and by matching age categories, to the estimated rates in surveillance studies such as Felitti et al. (1998), Corso et al. (2008), and Hussey et al. (2006).


Expected and comparable response rates.

After designing and pretesting our instrument, and developing the data collection design with Knowledge Networks, we fully anticipate meeting or exceeding an 80% response rate for this exploratory study. (We assumed 82% in Table B.1-1.) In particular, the 7,500 points exceed KN’s minimum (2500) for a survey of this length. As an example, Table B.3-1 provides a selection of completion rates (study-specific response rates) from related studies fielded by KN.


Table B.3-1: Completion Rates for Prior OMB Approved Knowledge Networks Studies

KN Project | Year | Incentive Value (KN points) | Assigned | Completed | Completion Rate
Prescription Meds 2002 | 2002 | 0 | 5820 | 5233 | 90%
2005 Cancer Survey: General Population and Survivors | 2005 | 6000 | 3473 | 2841 | 82%
Smoking Ad Survey 2006 | 2006 | 0 | 8541 | 7610 | 89%
MA Cancer Communication | 2006 | 10000 | 686 | 558 | 81%
National Seafood Consumption Survey | 2006 | 0 | 31971 | 26967 | 84%
Chronic Opioid Survey Wave 2 | 2007 | 5000 | 1430 | 1144 | 80%
Retirement Perspective Survey 2007 | 2007 | 7500 | 3118 | 2666 | 86%
Societal Implications of Individual Differences in Response | 2009 | 5000 | 4413 | 3559 | 81%
CA Long-Term Care Survey | 2010 | 0 | 1454 | 1218 | 84%
Follow-up Weight & Diet 2010 | 2010 | 0 | 910 | 752 | 83%


Methods to detect and report on non-response bias (joining the KN panel). Another source of non-response bias arises when individuals who are scientifically recruited by KN do not join the panel. Therefore, to compare respondents from this study to the general population, including non-KN panel members, we will benchmark our survey estimates against measures from gold-standard surveys (e.g., the Behavioral Risk Factor Surveillance System (BRFSS), National Health and Nutrition Examination Survey (NHANES), CPS, National Health Interview Survey (NHIS)) or the best available alternative. Specifically, we will compare:

  • Prevalence of child maltreatment (CM) overall and by type, based on the CTQ responses (Attachment E), to estimated rates in leading surveillance studies based on survey data, such as Felitti et al. (1998), Corso et al. (2008), and Hussey et al. (2006).

  • General self-rated health and “healthy days” measures of HRQOL (a validated instrument, which is included on the survey questionnaire, Attachment E) to rates in the BRFSS and NHANES.

  • HRQOL measured by the EQ-5D (a validated instrument, which is included on the survey questionnaire, Attachment E) to corresponding measures in the National Health Measurement Study (Fryback et al., 2007).

  • Other experiences before age 18 (on the survey questionnaire, Attachment E) to corresponding reports of these in the HRS, Add Health, and the ACE Study (Felitti, 1998).

A limitation of this approach is that some of the other surveys are from older time periods and may have used other modes of data collection. As a result, differences observed between our study and those surveys could be the result of time shifts or mode differences (Dennis, 2010; Smith & Dennis, 2005). Therefore, additional approaches will be used.

To identify possible self-selection effects at the KN panel recruitment and retention stages, we will statistically compare demographic and household characteristics of the sample invited to the KN panel with those of the subset who become actual survey participants. This approach attempts to measure self-selection bias in the estimating sample of completed interviews, and it can be pursued in two ways. First, KN will work with RTI and CDC to provide information from commercial databases (e.g., Experian, infoUSA, and Acxiom) to append observed and modeled information at various levels of aggregation to the sample frame. Nonresponse bias is analyzed by comparing the ancillary data available for the entire sample invited to join the KnowledgePanel against the subset of recruited panel members who participate in our study. Statistically significant differences in marginal distributions of person-level and household-level characteristics would indicate non-response bias relative to the invited sample. Statistical comparisons for specific studies can be made between the total invited sample for the panel recruitment and the estimating sample for the characteristics noted above (e.g., categories of age, education, race, ethnicity, gender, head-of-household status, household size, housing type, income, marital status, metropolitan residence, home ownership, state, employment, and internet access). DiSogra et al. (2010) provides additional detail. An aggregate error rate can be calculated as the sum of the differences between the expected distributions (from the total invited sample) and the actual distributions (from the estimating sample of completed interviews). Second, to assess broader representation of the responding sample relative to U.S. population characteristics, we will compare distributions of the same variables with those in the most recent Current Population Survey (CPS).
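One simple way to compute such an aggregate error rate is the sum of absolute differences between the expected and actual category proportions; the sketch below uses hypothetical distributions for a single four-category characteristic:

```python
import numpy as np

def aggregate_error_rate(expected, actual):
    """Sum of absolute differences between two categorical
    distributions (each given as proportions summing to 1)."""
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.abs(expected - actual).sum())

# Hypothetical marginal distributions for one characteristic
# (e.g. four education bands): invited sample vs. completed interviews
invited = [0.30, 0.25, 0.25, 0.20]
completed = [0.27, 0.24, 0.27, 0.22]
rate = aggregate_error_rate(invited, completed)
```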


B.4 Tests of Procedures or Methods to be Undertaken

The survey for this exploratory study has been developed through several steps, as described in Supporting Statement A. First, all activities began with a literature review to identify the possible HRQOL impacts of CM and to determine whether any prior data collections on this subject had been undertaken. Next, focus groups (3 small groups, totaling n=9 participants) were held to review domains and attributes of HRQOL impacts and to begin drafting potential questions for the data collection. The results of the focus groups were used to narrow the list of impacts, which were reviewed through a series of teleconferences with CM experts and clinicians under a consulting agreement. After this, the full instrument and all methods were drafted and again reviewed with additional experts under consulting agreements. Finally, revised survey materials were pretested by RTI staff in a guided cognitive interview format in May 2011 with n=9 adults from the general public in the Raleigh-Durham, NC area. These cognitive interviews asked respondents to complete all items in the presence of a trained interviewer and then to review the survey items using a structured protocol of questions with standardized probes and tailored follow-up questions. Select revisions to the survey were made following the cognitive interviews, and the final instrument was reviewed again by RTI, CDC, and KN.

After discussion with OMB about the instrument, modifications were made in February 2012. These changes were assessed in a second round of cognitive interview pretests in March 2012 with n=9 adults from the general public in the Raleigh-Durham, NC area. These interviews were held in a dedicated cognitive lab facility at RTI’s main office. A trained survey methodologist from RTI’s Program for Research in Survey Methodology (PRISM) led these pretest interviews. All tests were also observed by the primary RTI project staff. A formal pretest report is included (Attachment I).



B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The data collection methodology for this exploratory study was designed by Dr. Derek Brown and Dr. Charles Strohm of RTI International and Dr. Xiangming Fang and Sarah Beth Link of CDC. Data analysis will be performed by RTI International under the direction of Dr. Derek Brown.


Curtis S. Florence, Ph.D.

Senior Health Economist

Health Economics and Policy Research Team Lead

Prevention Development and Evaluation Branch

Division of Violence Prevention

Centers for Disease Control and Prevention

Phone: 770.488.1398

Email: [email protected]


Xiangming Fang, Ph.D.

Economist

Program Development and Evaluation Branch

Division of Violence Prevention

Centers for Disease Control and Prevention

Phone: 770.488.1572

Email: [email protected]





Sarah Beth Link, MA

Associate Service Fellow

Program Development and Evaluation Branch

Division of Violence Prevention

Centers for Disease Control and Prevention

Phone: 770.488.3969

Email: [email protected]


Derek S. Brown, Ph.D.

Research Health Economist

RTI International

3040 Cornwallis Road

P.O. Box 12194

Research Triangle Park, NC 27709-2194

Phone: 919.316.3514

Email: [email protected]


Charles Q. Strohm, Ph.D.

Survey Methodologist

RTI International

3040 Cornwallis Road

P.O. Box 12194

Research Triangle Park, NC 27709-2194

Phone: 919.541.8798

Email: [email protected]




References

Afifi TO, Enns MW, Cox BJ, de Graaf R, ten Have M, Sreen J. Child abuse and health-related quality of life in adulthood. J Nerv Ment Dis. 2007; 195(10):797-804.

Bernstein DP, Stein JA, Newcomb MD, et al. Development and validation of a brief screening version of the Childhood Trauma Questionnaire. Child Abuse Negl. 2003;27(2):169-90.



Bijlenga D, Birnie E, Bonsel GJ. Feasibility, reliability, and validity of three health-state valuation methods using multiple-outcome vignettes on moderate-risk pregnancy at term. Value Health. 2009;12(5):821-27.

Brazier J, Roberts J, Deverill M. The estimation of a preference-based measure of health from the SF-36. J Health Econ. 2002;21(2):271-92.

Bridges JFP, Hauber B, Marshall D, et al. Conjoint analysis applications in health—a checklist: a report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. Value Health. 2011;14(4).

Corso PS, Edwards VJ, Fang X, Mercy JA. Health-related quality of life among adults who experienced maltreatment during childhood. Am J Pub Health. 2008;98(6):1094-100.

Dennis JM. KnowledgePanel®: Processes & procedures contributing to sample representativeness & tests for self-selection bias. Knowledge Networks working paper, March 2010. http://www.knowledgenetworks.com/ganp/docs/KnowledgePanelR-Statistical-Methods-Note.pdf

DiSogra CJ, Dennis JM, Fahimi M. On the quality of ancillary data available for address-based sampling. Conference Proceedings of the 2010 Joint Statistical Meetings and in review. http://www.knowledgenetworks.com/ganp/docs/jsm2010/On-the-Quality-of-Ancillary-ABS-2010-JSM-submission.pdf

Felitti VJ, Anda RF, Nordenberg D, et al. Relationship of childhood abuse and household dysfunction to many of the leading causes of death in adults. The Adverse Childhood Experiences (ACE) Study. Am J Prev Med. 1998;14(4):245-58.

Flynn TN. Using conjoint analysis and choice experiments to estimate QALY values: issues to consider. Pharmacoeconomics. 2010;28(9):711-22.

Fryback DG, Dunham NC, Palta M, Hanmer J, Buechner J, Cherepanov D, Herrington SA, Hays RD, Kaplan RM, Ganiats TG, Feeny D, Kind P. US norms for six generic health-related quality-of-life indexes from the National Health Measurement study. Med Care. 2007 Dec;45(12):1162-70.

Hauber AB, Mohamed AF, Johnson FR, Oyelowo O, Curtis BH, Coon C. Estimating importance weights for the IWQOL-Lite using conjoint analysis. Qual Life Res. 2010;19(5):701-9.

Hussey JM, Chang JJ, Kotch JB. Child maltreatment in the United States: prevalence, risk factors, and adolescent health consequences. Pediatrics. 2006;118(3):933-42.

Johnson FR, Kanninen B, Bingham M, Özdemir S. Experimental design for stated-choice studies. In: B.J. Kanninen, ed. Valuing Environmental Amenities Using Stated Choice Studies. Dordrecht: Springer; 2007:159-202.

Juniper EF, Guyatt GH, Jaeschke R. How to develop and validate a new health-related quality of life instrument. In: B. Spilker, ed. Quality of Life and Pharmacoeconomics in Clinical Trials. 2nd ed. Philadelphia: Lippincott-Raven Publishers; 1996:49-58.

Orme BK. Sample size issues for conjoint analysis. In: Getting Started with Conjoint Analysis. Madison: Research Publishers; 2006:49-57.

Prosser LA, Corso PS. Measuring health-related quality of life for child maltreatment: a systematic literature review. Health Qual Life Outcomes. 2007;5:42.

Ratcliffe J, Brazier J, Tsuchiya A, Symonds T, Brown M. Using DCE and ranking data to estimate cardinal values for health states for deriving a preference-based single index from the sexual quality of life questionnaire. Health Econ. 2009;18(11):1261-76.

Smith TW, Dennis JM. Online versus in-person: experiments with mode, format, and question wordings. Public Opinion Pros. 2005; Dec. http://www.publicopinionpros.norc.org/from_field/2005/dec/smith.asp

Stoltz JA, Shannon K, Kerr T, Zhang R, Montaner JS, Wood E. Associations between childhood maltreatment and sex work in a cohort of drug-using youth. Soc Sci Med. 2007;65(6):1214-21.

Ware JE Jr, Sherbourne CD. The MOS 36-item short-form health survey (SF-36). Conceptual framework and item selection. Med Care. 1992;30(6):473-83.

Yang J-C, Johnson FR, Mohamed AF. When more is less and less is more: the art of calculating sample sizes for conjoint analysis studies. Presented at: 3rd Conjoint Analysis in Health Conference; October 5-7, 2010; Newport Beach, CA.


