SUPPORTING STATEMENT: PART B
OMB No. 0920-XXXX
VALIDATION OF A NEW CASE DEFINITION FOR PARENT- OR SELF-REPORTED TRAUMATIC BRAIN INJURY (TBI)
July 2018
Point of Contact:
Lara DePadilla
Behavioral Scientist
Division of Unintentional Injury Prevention
National Center for Injury Prevention and Control
Centers for Disease Control and Prevention
4770 Buford Highway, NE MS F-62
Atlanta, Georgia 30341
(770) 488-1568
FAX (770) 488-1317
E-mail: [email protected]
TABLE OF CONTENTS
Section
B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS
B.1. RESPONDENT UNIVERSE AND SAMPLING METHODS
B.2. PROCEDURES FOR THE COLLECTION OF INFORMATION
B.3. METHODS TO MAXIMIZE RESPONSE RATES AND DEAL WITH NONRESPONSE
B.4. TEST OF PROCEDURES OR METHODS TO BE UNDERTAKEN
B.5. INDIVIDUALS CONSULTED ON STATISTICAL ASPECTS AND INDIVIDUALS COLLECTING AND/OR ANALYZING DATA
LIST OF ATTACHMENTS
A-1. Public Health Service Act
A-2. TBI Reauthorization Act of 2014
B-1. 60 day Federal Register Notice
C-1. Retained Consultants and Government Agencies Consulted
D-1. Privacy Impact Assessment
D-2. PIA Correspondence
D-3. ICF IRB Approval Letter
E-1. Eligibility Screener – English
E-2. Adult-Proxy Screener – English
E-3. Adult-Proxy Survey – English
E-4. Adolescent Screener – English
E-5. Adolescent Survey – English
E-6. Adolescent Survey (web) – English
E-7. Eligibility Screener – Spanish
E-8. Adult-Proxy Screener – Spanish
E-9. Adult-Proxy Survey – Spanish
E-10. Adolescent Screener – Spanish
E-11. Adolescent Survey – Spanish
E-12. Adolescent Survey (web) – Spanish
F-1. Introduction Script – English
F-2. Consent Adult Not Proxy – English
F-3. Consent Adult Proxy – English
F-4. Consent Adult for Adol – English
F-5. Intro-Assent for Adol – English
F-6. Introduction Script – Spanish
F-7. Consent Adult Not Proxy – Spanish
F-8. Consent Adult Proxy – Spanish
F-9. Consent Adult for Adol – Spanish
F-10. Intro-Assent for Adol – Spanish
G-1. Table Shells
H-1. Contact Attempt Protocols
Universe and Sample
The goal of the proposed data collection is to explore the validity of a three-tiered case definition designed to assess, using self-report survey data, whether a TBI was sustained. For the purpose of this collection, head injuries will be classified according to the case definition as Tier 3 (Probable TBI), Tier 2 (Possible TBI), or Tier 1 (Delayed Possible TBI), or as Non-Case head injuries (i.e., head injuries without self-reported symptoms, which do not meet the lowest tier of the case definition).
The universe for this research includes the English- and Spanish-speaking non-institutionalized population 5 years of age and older residing in the 50 states and the District of Columbia (DC). The research focuses on adults and children 5 years of age and older (299 million people nationally), with sub-populations addressing specific research aims described in more detail below: children 5 to 17 years of age (54 million), children 14 to 18 years of age (11 million), and children 13 to 17 years of age (11 million). This is presented in tabular form, with the aims addressed, below. Respondents will be selected through Random Digit Dialing (RDD) of landlines and cell phones. According to the most recently available telephone coverage estimates from the National Health Interview Survey (NHIS), 97% of the population lives in a household with a landline and/or cell phone (1).
Table B.1.1. Population, Universe, Sample and Aims Addressed
Population and sub-populations | Universe | Sample | Aim(s)
Adults and children 5 years of age and older | 299,000,000 | 14,050 | 1.1, 2.2
Children 13 to 17 years of age | 11,000,000 | 1,440 | 1.2
Children 5 to 17 years of age | 54,000,000 | 4,050 | 2.2
Children 14 to 18 years of age | 11,000,000 | 1,340 | 2.3
Expected Response Rates
We expect to achieve landline and cell phone response rates similar to those obtained by the Behavioral Risk Factor Surveillance System (BRFSS) surveys (OMB Clearance #0920-1061, exp. 3/31/2018). In 2014, the median response rate across all BRFSS states and territories was 48.7% for landline interviews and 40.5% for cell phone interviews (2), using the American Association for Public Opinion Research (AAPOR) Response Rate #4 formula (3). We acknowledge that these response rates do not meet the OMB standard of an 80% response rate. We have outlined our methods to maximize response rates in detail in Supporting Statement B, Section B.3.
Statistical Justification: Estimates of TBI in Each Population and Degree of Accuracy
Brief Review of Aims and Samples in This Collection
Aim 1, described in Supporting Statement A, is to explore the validity of the proposed case definition with the proposed collection through (1) testing construct validity by analyzing TBI outcomes that serve as indicators of severity and (2) testing the reliability of parents reporting TBI on behalf of adolescents by comparing parent proxy reports and adolescent self-reports of TBI in the past 12 months. For Aim 1.1, the sample will be adults and children 5 years of age and older. For Aim 1.2, the sample will be children 13 to 17 years of age.
Aim 2, described in Supporting Statement A, is to compare the proposed case definition estimates from the proposed collection with externally collected data sources to explore the extent to which current databases overestimate or underestimate TBI incidence. For Aim 2.2, the sample will be from the population five years of age and older. Specifically, for the comparison to national estimates from hospital settings and emergency departments, the sample will be from the population five years of age and older who were treated in hospital settings and emergency departments. For the comparison to the Truven Health Analytics MarketScan Research Databases (Medicaid and Commercial Claims), the sample will be the population 5 to 17 years of age. For the comparison to the 2017 Youth Risk Behavior Survey (YRBS), which includes a question about sports- and recreation-related TBI (SRR-TBI), the sample will be the population 14 to 18 years of age.
TBI Among Adults and Children (Aims 1.1, 2.2): Twelve-month TBI incidence in the population 5 years of age and older has been estimated at 0.8% (4). It is important to note that this estimate is based on prior estimates of TBI-related emergency department visits, hospitalizations, and deaths, and it underestimates TBI incidence more generally because it excludes TBIs for which care was sought elsewhere and TBIs for which care was not sought at all (5-7). Therefore, we use 1% as the low end of our expected percentage of this population reporting a TBI. This estimate informs the power calculations for the analyses described in Aim 1.1 and the comparisons to HCUP-NIS, HCUP-NEDS, NEISS-AIP, and the Truven Health Analytics MarketScan Research Databases described in Aim 2.2.
Sports- and Recreation-Related TBI (SRR-TBI) Among Adolescents (Aims 1.2, 2.3): There is evidence that SRR-TBI incidence among youth may be higher than TBI incidence for all mechanisms among all ages. As mentioned in Supporting Statement A, Section A.4, several states have collected TBI-related data using optional questions on the YRBS. A measure of self-reported 12-month TBI for high school students participating in sports was collected in the 2013 Ohio and Connecticut Youth Risk Behavior Surveys. Each survey included a question about head injuries sustained while playing with a sports team and whether the student experienced TBI-related symptoms from those injuries. The state YRBS question is limited to TBIs experienced while participating in team sports, provides only five symptoms as part of the question rather than the comprehensive list employed by the case definition used in this proposed data collection, and precludes any assessment of level of certainty. These surveys found that 12.4% of high school students in Ohio (8) and 13.5% of high school students in Connecticut (9) reported symptoms consistent with concussion while playing with a sports team. Although these surveys are specific to two states and use a different instrument than the one in this collection, they provide an applicable estimate for the sample design for this sub-population. Because these two surveys focused on high school students, we will use a comparison incidence of 13% for the aims that will be addressed with the adolescent population.
For Aim 1.2, the sample data will be restricted to adolescents 13 to 17 years of age in order to align with data collected by adolescent self-report. For Aim 2.3, the sample data will be restricted to adolescents 14 to 18 years of age in order to align with the population of the 2017 YRBS. Recognizing that these estimates are for high school students and are based on results from two states, we consider 13% the high end of our expected percentage of this population reporting a TBI. This is the estimate we applied to the correspondence between parent proxy and adolescent self-reports described in Aim 1.2 and the comparison to the 2017 YRBS described in Aim 2.3.
Degree of Accuracy
We base the minimum sample size of 10,000 on having enough cases for the analyses described in each of the aims. These include the group comparisons described in Aim 1.1 and the point estimates for the comparisons to national estimates described in Aim 2.2. The point estimate of 12-month TBI among adults and children will have a 95% margin of error of 0.50%, such that the 95% confidence interval, based on a 1% estimate, will be 0.50% to 1.50%. The point estimate of SRR-TBI among adolescents 14 to 18 years of age will have a 95% margin of error of 3.0%, such that the 95% confidence interval, based on a 13% incidence, will be 10.0% to 16.0%. Additionally, the sample size for adolescents 13 to 17 years of age must also be powered to test correspondence between parent proxy and adolescent self-reports. This statistical test is based on a 13% estimate and 70% as the minimum acceptable positive agreement between teens and parents, with the sample size determined based on a one-tailed test (alpha = 0.05) with 80% power (Aim 1.2). The statistical tests and power calculations are described in detail in Section B.2.
Sample Selection
Dual Frame Allocation
The TBI Surveillance System will utilize a national dual-frame sample of landline and cell phone numbers. The dual frame will be an overlap design in which we will interview dual-users in the cell phone sample as well as the landline sample. According to the most recently available population estimates of the cell-only population, as measured by the NHIS, the percentage of children living in cell-only households is 55% and the percentage of adults is 47% (1).
Using these population percentages and the relative cost for landline and cell phone administration, we identified an optimal allocation for a dual-frame sample with overlap. The optimal allocation depends on the population size, the cost, and the expected distribution of cell-only and dual-users in the cell phone sample, as well as the distribution of landline-only and dual-users from the landline sample.
In general, interviewing cell phone respondents costs 1.5 to 2 times the cost of interviewing landline respondents (10). However, recent surveillance data has shown that the likelihood of reaching a parent in the cell phone sample is higher than in a landline survey. 1 Based on data from the 2014 BRFSS, 36% of respondents interviewed by cell phone reported having a child under the age of 18 in the household, compared to 21% in the landline survey.
The BRFSS does not record the ages of children. According to 2014 NHIS public use data, among households with children, 82% have at least one child 5 to 17 years of age.2 Based on the BRFSS data mentioned above (36% of cell phone households and 21% of landline households have at least one child), we expect 17% of landline households and 29% of cell phone households to have a child in our age range of interest (5 to 17 years old). Since we expect 29% of households in the cell phone sample to include at least one youth 5 to 17 years of age, we will need to screen an average of 3.4 (1/29%) cell phone households to find one with a youth 5 to 17 years of age. Similarly, we will need to screen an average of 5.9 (1/17%) landline households to find one with a youth 5 to 17 years of age. Using these screening rates and a base cost ratio of 1.75:1.00, we adjusted the cost ratios as follows:
Youth cell phone cost = 1.75 x 3.4 = 6.0
Youth landline cost = 1.0 x 5.9 = 5.9
Therefore, when factoring in screening efficiency, the cell and landline costs for a youth survey are nearly equal.
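The screening-adjusted cost calculation above can be reproduced with a short script. The following is a minimal sketch; the percentages and the 1.75:1.00 base cost ratio are the assumptions stated above, not additional data:

```python
# Sketch: screening-adjusted cost ratio for the youth sample (assumptions taken from the text above).

P_CHILD_CELL = 0.29      # share of cell phone households expected to have a child 5-17
P_CHILD_LANDLINE = 0.17  # share of landline households expected to have a child 5-17
BASE_COST_RATIO = 1.75   # assumed base cost of a cell interview relative to a landline interview

# Households screened, on average, to find one with a child 5-17
screens_cell = 1 / P_CHILD_CELL          # ~3.4
screens_landline = 1 / P_CHILD_LANDLINE  # ~5.9

# Screening-adjusted relative costs
youth_cell_cost = BASE_COST_RATIO * screens_cell   # ~6.0
youth_landline_cost = 1.00 * screens_landline      # ~5.9

print(f"Cell: screen {screens_cell:.1f} households, relative cost {youth_cell_cost:.1f}")
print(f"Landline: screen {screens_landline:.1f} households, relative cost {youth_landline_cost:.1f}")
```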
An additional consideration is the calculation of a projected distribution of the sample by landline and cell phone sample administration. Using ICF internal data from ten recent state health surveys, we found that, among cell phone respondents with children, 61% of interviews were with cell phone-only households and 39% were with dual-user households. Similarly, among landline respondents with children, 11% of interviews were with landline-only households and 89% were with dual-user households.
Using these assumptions, the optimal sample allocation is 84% cell and 16% landline (11). From this allocation, we expect the sample to be comprised of 52% cell phone only households, 3% landline only households, and 45% dual-user households.
Selecting the RDD Samples
The landline and cell phone RDD samples will be selected through Marketing Systems Group’s Genesys Sampling System. The RDD frame is constructed based on information from the North American Numbering Plan Administration, which governs the assignment of 1,000-blocks to service providers. A 1,000-block is the series of 1,000 telephone numbers that share the first seven digits of a 10-digit phone number and vary in the last three digits (NPA-NXX-Z000 - NPA-NXX-Z999). The 1,000-blocks dedicated to cell service or landline service are identified by codes from the Telcordia® LERG (Local Exchange Routing Guide). Those dedicated to landline service comprise the landline frame, while those dedicated to cellular service comprise the cell phone frame. The sampled telephone numbers will be purged of known business numbers by matching the numbers to business directories, and many non-working numbers will be removed at the time of fielding by an automated dialing system used only for landline phone number call attempts.
We will select the landline sample using RDD with the equal probability of selection method (EPSEM) from working banks. A “working” bank is a 100-block (NPA-NXX-ZZ00 - NPA-NXX-ZZ99) in which at least one telephone number is assigned to residential service. Note that this frame definition is an improvement over traditional list-assisted frames, in which only blocks with one or more “listed” telephone numbers were included in the frame. The traditional list-assisted frame excluded zero-blocks, which typically excludes about 5 percent of residential households (12). The assignment-based frame includes households that would otherwise have been excluded.
We will select the cell phone sample using RDD with EPSEM. All telephone numbers from the cell phone frame will be manually dialed in accordance with laws that prohibit cell numbers from being called by an automated dialer.
The CATI station computers will be provided with the area code and the prefix to select the appropriate region for calling. The computer will then randomly select the last four digits of the phone number.
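As an illustration of this step, the sketch below appends random four-digit suffixes to a sampled exchange; the exchange shown is a placeholder, not an actual sampled NPA-NXX prefix:

```python
# Sketch: random generation of the last four digits for a sampled exchange (illustrative only).
import random

def generate_rdd_numbers(area_code, prefix, n, seed=None):
    """Return n candidate telephone numbers in the given exchange with random 4-digit suffixes."""
    rng = random.Random(seed)
    return [f"{area_code}-{prefix}-{rng.randint(0, 9999):04d}" for _ in range(n)]

# Example with a placeholder exchange (not a real sampled prefix)
print(generate_rdd_numbers("555", "012", n=5, seed=1))
```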
Respondent Selection
We will select up to three respondents per household:
A randomly selected adult to collect information on up to three self-reported injuries to the head during the previous 12 months and symptoms associated with TBI.
A knowledgeable parent/guardian of children in the household to collect proxy information on up to three injuries to the head during the previous 12 months and symptoms associated with TBI for all of the children in the home. If the randomly selected adult is sufficiently able to provide the child TBI proxy information, there will be no need for an additional adult.
A randomly selected child 13-17 years of age to measure correspondence with parent proxy report.
The order of the interviews will depend on who is currently available. Below we discuss the most common scenarios.
Landline Sample
Once a landline telephone is answered, we will read the introductory text and confirm that we have contacted a private residence. After that, we will ask if the person we are speaking to is 18 years of age or older; if not, we will ask to speak to an adult in the household. Once an adult is on the phone, we will:
Conduct a household roster in which the adult informant gives information on the number and gender of all adults, and the number and ages of all children in the household.
Identify parents/guardians of children in the household to facilitate the parent proxy interviews and obtain consent for interviewing the 13-17 year old if there is a 13-17 year old living in the household.
After these initial steps, the within-household selection procedure depends on the number of adults and the number and ages of children in the household.
No children 5-17 (83%)3
This, the most common scenario, will result in one interview with a selected adult. We will select one adult using the Rizzo, Brick, and Park (RBP) selection method (13); a sketch of this selection logic follows the list below:
If there is one adult in the household (35%), that person is automatically selected.
If there are two adults in the household (40%), the screener adult will be selected one-half of the time. If the screener adult is selected, the interview will continue with that adult about him or herself. If the screener adult is not selected, we will ask to speak to the other adult in the household and complete the interview if possible, or schedule a callback.
If there are three or more adults in the household (8%), the screener adult will be selected 1/Ath of the time, where A is the number of adults. If the screener adult is selected, the interview will continue with that adult about him or herself. If the screener adult is not selected, we will ask to speak with the adult in the household who had the most recent birthday and complete the interview, if possible, or schedule a callback.
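The RBP selection logic above can be expressed as a brief sketch. This is illustrative only; the actual selection is implemented within the CATI system:

```python
# Sketch: Rizzo, Brick, and Park (RBP) within-household adult selection, as described above.
import random

def select_adult_rbp(n_adults, rng):
    """Return which adult to interview: the screener adult or another household adult."""
    if n_adults == 1:
        return "screener adult"                      # the only adult is automatically selected
    if rng.random() < 1.0 / n_adults:                # screener adult selected with probability 1/A
        return "screener adult"
    if n_adults == 2:
        return "other adult"                         # the one remaining adult
    return "adult with most recent birthday"         # among the other adults in the household

rng = random.Random(42)
print([select_adult_rbp(a, rng) for a in (1, 2, 3)])
```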
Children 5-17 (17%)
Select one adult using RBP (same as above). For the youth proxy interviews, the adult will be asked questions about all children in the household who are 5-17 years of age. If there is more than one adolescent 13-17 years of age, then one will be randomly selected to be directly interviewed for the correspondence study. The procedure for respondent selection in the landline sample based on the adult(s) in the household is outlined below.
One adult (3%)
1-2 respondents (adult or adult and adolescent)
We will conduct the adult portion of the interview with that one adult. Having verified that the adult is the parent/guardian of the children (during the household rostering), and that the adult is a sufficiently informed reporter of the children’s health information, we will conduct a proxy interview with that adult regarding all children 5-17 years of age.
For the correspondence study: If there is one adolescent 13-17 years of age, we will ask permission to speak to that adolescent. If there is more than one adolescent 13-17 years of age, we will randomly select one adolescent and ask permission to speak to that adolescent. If granted, and the adolescent is available, we will obtain assent and conduct the survey over the phone. As noted, recontact information (cell phone and email) will be requested for the adolescent if he/she cannot take the survey at the time of the initial call. The adolescent will then be invited to complete the survey by using a link provided via cell phone and/or email. Reminder texts/emails will be used to invite nonresponders to complete the survey. Adolescents will complete the assent procedure via web before completing the online survey. We will attempt to complete telephone interviews with nonresponders for the web survey.
Two or more adults, Selected adult is a sufficiently knowledgeable parent (10%)
1-2 respondents (adult or adult and adolescent)
We will conduct the adult portion of the interview with a randomly selected adult. Having verified that the selected adult is the parent/guardian of the youth (during the household rostering), and that the adult is sufficiently knowledgeable about the youths’ health information, we will conduct a proxy interview with that adult regarding all youth 5-17 years of age.
For the correspondence study: we will follow the same procedure described above for the selected adolescent (request the parent’s permission, obtain assent, and complete the survey by phone or, if the adolescent is unavailable, by web link with text/email reminders and telephone follow-up for web nonresponders).
Two or more adults, Selected adult is not a sufficiently knowledgeable parent (4%)
2-3 respondents (two adults or two adults and adolescent)
If the selected adult has conducted the screener, we will conduct the adult portion of the interview with the adult. We then ask to speak with a sufficiently knowledgeable parent for all youth 5-17 years of age. If the parent is available, we will conduct a proxy interview related to all youth 5-17 years of age. If not available, we will schedule a call back.
For the correspondence study: we will follow the same procedure described above for the selected adolescent (request the parent’s permission, obtain assent, and complete the survey by phone or, if the adolescent is unavailable, by web link with text/email reminders and telephone follow-up for web nonresponders).
If the knowledgeable parent has conducted the screener, but is not the selected adult, we will conduct all proxy interviews with the knowledgeable parent.
For the correspondence study: we will follow the same procedure described above for the selected adolescent (request the parent’s permission, obtain assent, and complete the survey by phone or, if the adolescent is unavailable, by web link with text/email reminders and telephone follow-up for web nonresponders).
We will then ask to speak with the selected adult. If the adult is available, we will conduct the interview. If not available, we will schedule a call back.
In the case of households with more than two adults, if neither the knowledgeable parent nor the selected adult conducted the screener, we will:
Ask to speak with a knowledgeable parent for all youth 5-17 years of age. If the parent is available, we will conduct all proxy interviews.
For the correspondence study: we will follow the same procedure described above for the selected adolescent (request the parent’s permission, obtain assent, and complete the survey by phone or, if the adolescent is unavailable, by web link with text/email reminders and telephone follow-up for web nonresponders).
If the knowledgeable parent is not available, we will ask to speak with the selected adult. If the parent is available, we will ask to speak with the selected adult after the proxy interviews are complete. If the selected adult is available, we will conduct the interview. If not available, we will schedule a call back.
Cell Phone
Once a cell phone is answered, we will read the introductory text and confirm that it is safe for the respondent to talk on their phone. After that, we ask if the person we are speaking to is 18 years of age or older. If they are not, we will terminate the interview. If we are speaking to an adult, we will determine whether the respondent is a parent or guardian of youth living in the same household. If yes, we will obtain the number and ages of youth in the household.
No children 5-17 (71%)
Conduct the adult interview with the cell phone respondent.
Children 5-17 (29%)
1-2 respondents (adult or adult and adolescent)
We will conduct the adult interview with the cell phone respondent and if the adult is sufficiently knowledgeable, we will conduct the proxy portion of the interview with that adult related to all youth 5-17 years of age.
For the correspondence study: we will follow the same procedure described above for the selected adolescent (request the parent’s permission, obtain assent, and complete the survey by phone or, if the adolescent is unavailable, by web link with text/email reminders and telephone follow-up for web nonresponders).
Estimation
As stated in Supporting Statement A, the analysis for Aim 2.2 incorporates the development of sample weights to produce weighted estimates that can be used to explore potential over- and underestimation and its sources by comparing 12-month estimates from this collection to estimates based on administrative data for diagnosed 12-month TBI and estimates based on self-reported 12-month TBI. The weights are not intended to produce nationally representative estimates of TBI. The completed interviews will be weighted using dual-frame methods for combining landline and cell phones. We will compute weights separately for adults 18 years of age or older, youth 5 to 17 years of age (proxy interviews), and adolescents 13 to 17 years of age selected for the correspondence study.
First, we will compute the sampling weight, or inverse of the selection probability, for the landline and cell phone samples. The sampling weight is the total number of records on the frame (NRECSTR) divided by the total number of records selected (NRECSEL). For the landline sample, this weight is adjusted for multiple landline households by dividing by the number of telephone lines as recorded during the survey (PHONES).
For the adult landline survey, we will randomly select one adult in the household. We will adjust for the within-household selection probability by multiplying by the number of adults in the household (ADULTS). Similarly, we will adjust for the selection of an adolescent 13 to 17 years of age by multiplying by the number of adolescents in the household (TEEN). For the child survey, we will attempt to collect proxy information for all youth in the household (PROXY); therefore, no within-household selection weight is necessary.
In summary, the design weights are calculated as follows:
ADULTS
Landline: DESIGN_WT = (NRECSTR/NRECSEL) x (ADULTS/PHONES)
Cell: DESIGN_WT = (NRECSTR/NRECSEL)

PROXY
Landline: DESIGN_WT = (NRECSTR/NRECSEL) x (1/PHONES)
Cell: DESIGN_WT = (NRECSTR/NRECSEL)

TEEN
Landline: DESIGN_WT = (NRECSTR/NRECSEL) x (TEEN/PHONES)
Cell: DESIGN_WT = (NRECSTR/NRECSEL) x TEEN
To account for the overlapping landline and cell phone dual frame design, we will use a composite weight, averaging the dual users from the cell phone sample and the dual users from the landline sample. The composite weight is a ratio of the effective sample sizes, c = neff_1 / (neff_1 + neff_2), where neff = n/deff is the effective sample size, deff = n × Σ(DESIGN_WT²) / (Σ DESIGN_WT)² is a measure of the variability of the design weights (DESIGN_WT), and n is the sample size for each group.
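The design-weight and composite-factor calculations can be illustrated with a short sketch. The frame counts and records below are hypothetical; the variable names follow the text (NRECSTR, NRECSEL, ADULTS, PHONES):

```python
# Sketch: adult design weights and the dual-frame composite factor described above.
# Frame counts and the example records are hypothetical.

def adult_design_weight(nrecstr, nrecsel, adults=1, phones=1, is_landline=True):
    """DESIGN_WT for the adult interview; landline weights adjust for phone lines and adults."""
    base = nrecstr / nrecsel
    return base * adults / phones if is_landline else base

def effective_sample_size(weights):
    """n_eff = n / deff, with deff = n * sum(w^2) / (sum(w))^2."""
    n = len(weights)
    deff = n * sum(w * w for w in weights) / sum(weights) ** 2
    return n / deff

# Hypothetical dual-user design weights from each frame
landline_dual = [adult_design_weight(1_000_000, 5_000, adults=2, phones=p) for p in (1, 1, 2)]
cell_dual = [adult_design_weight(2_000_000, 20_000, is_landline=False)] * 4

neff_ll = effective_sample_size(landline_dual)
neff_cell = effective_sample_size(cell_dual)
composite_landline = neff_ll / (neff_ll + neff_cell)   # c applied to landline dual users
composite_cell = 1 - composite_landline                # complement applied to cell dual users
print(round(composite_landline, 3), round(composite_cell, 3))
```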
The survey is partitioned into two parts, 1) information collected about TBI experiences for each youth in the household and a selected adult and 2) detailed information collected about the most recent incident of TBI. Expecting some attrition between the first part and second part of the survey, we will adjust for nonresponse using a propensity score adjustment. The propensity score adjustment will use a logistic regression to model response as a function of characteristics collected in the first part of the survey.
As the final weighting step, we will post-stratify into demographic categories and ratio adjust the weights so that the final weighted sample matches the population with respect to those demographic characteristics. We will use a raking algorithm for these adjustments. Raking will be done separately with adults, proxies and adolescents.
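As an illustration of the raking step, the sketch below iteratively ratio-adjusts a small set of hypothetical weights to two hypothetical demographic margins; the actual adjustment will use the production weighting variables and population control totals:

```python
# Sketch: raking (iterative proportional fitting) of weights to population margins.
# The respondent records and population control totals here are hypothetical.

def rake(records, weights, margins, n_iter=25):
    """Ratio-adjust weights so weighted totals match each margin's control totals."""
    w = list(weights)
    for _ in range(n_iter):
        for var, targets in margins.items():
            # current weighted total in each category of this variable
            totals = {cat: 0.0 for cat in targets}
            for rec, wi in zip(records, w):
                totals[rec[var]] += wi
            # ratio-adjust each record's weight toward the control total
            w = [wi * targets[rec[var]] / totals[rec[var]] for rec, wi in zip(records, w)]
    return w

records = [{"sex": "F", "age": "5-12"}, {"sex": "M", "age": "5-12"},
           {"sex": "F", "age": "13-17"}, {"sex": "M", "age": "13-17"}]
design_weights = [1.2, 0.8, 1.0, 1.5]
margins = {"sex": {"F": 50.0, "M": 50.0}, "age": {"5-12": 60.0, "13-17": 40.0}}
print([round(wi, 2) for wi in rake(records, design_weights, margins)])
```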
The purpose of the correspondence study is to compare adolescent self-reported TBI with parent proxy-reported TBI. We will use the TEEN weights, adjusted for nonresponse. Nonresponse for the correspondence study comes from nonconsenting parents and from adolescents who do not complete the survey. We will adjust for nonresponse using a propensity score adjustment. The propensity score adjustment will use a logistic regression to model response as a function of characteristics collected in the parent interview. Additionally, we will adjust for nonresponse based on whether or not the adolescent was home at the time of the parent interview.
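A minimal sketch of the propensity-score nonresponse adjustment described above follows; the covariates, data, and use of statsmodels are illustrative assumptions, not the production specification:

```python
# Sketch: propensity-score nonresponse adjustment using logistic regression.
# Covariates and data are hypothetical; statsmodels is assumed to be available.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = sm.add_constant(np.column_stack([rng.integers(0, 2, n),      # e.g., child in household
                                     rng.normal(45, 15, n)]))     # e.g., respondent age
responded = rng.integers(0, 2, n)                                 # 1 = completed the later stage
base_weight = rng.uniform(0.5, 2.0, n)                            # earlier-stage weight

model = sm.Logit(responded, X).fit(disp=0)   # model response as a function of earlier characteristics
propensity = model.predict(X)                # predicted probability of response
adjusted_weight = np.where(responded == 1, base_weight / propensity, 0.0)  # respondents carry the weight
print(round(adjusted_weight[responded == 1].mean(), 3))
```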
Implementation
Data Collection Process
The methodology for the study is a household survey conducted as a random-digit dial telephone survey utilizing a dual frame that includes both landline and cellphones. After providing consent to participate in the survey, adult respondents will be asked about their own TBI history. In addition, adult parents/guardians with children 5 to 17 years of age will be asked to serve as proxy reporters and answer questions about the TBI history of all of their children in the household who are in that age group. Adult interviews will be conducted by Computer Assisted Telephone Interviewing (CATI).
To collect survey data directly from adolescents 13 to 17 years of age, ICF International will conduct an observational study of two modes, with CATI as the primary response mode and Web as an alternative. Adult respondents will be asked if they consent to having their adolescent child complete the survey following the completion of all proxy interviews. After the introduction and assent process for the adolescent is completed on the telephone, the adolescent will be offered the opportunity to complete the survey on the phone or, if preferred, the Website address and a unique login code will be provided so that the adolescent may complete the survey online. Like CATI, Web data collection is 100% electronic. Specifically, ICF has developed a Website for administering the survey to adolescents 13 to 17 years of age. ICF designed the Website to facilitate the interview process for the respondent and reduce burden. These features include:
• Basing the visual layout of the questions on heuristic principles that people follow in interpreting visual cues;
• Making the survey easily navigable from page to page;
• Incorporating user assistance tools, such as help screens for certain items (e.g., the respondent could click a link to get a definition that would come up if needed);
• Inserting placeholders so that respondents can pause and leave the system and then re-enter (at the point of departure) without losing the responses previously entered; and
• Programming in consistency checks.
ICF International has tested the Website by using several different devices (e.g., laptops, smartphones, and tablets) and operating platforms to ensure that the survey functions properly and is easily navigated in the many ways respondents will access the survey.
Randomly selected adolescents 13 to 17 years of age will be directly asked about their own TBI history and will be offered both telephone and web options as response modes. Before directly interviewing these adolescents, their parent/guardian will be asked if they consent for their adolescent to complete the survey. If consent is given, adult respondents will be asked if their adolescent is available to complete the survey at that time over the phone. If yes, the CATI interviewer will complete the survey with the adolescent via telephone after obtaining adolescent assent. If the adolescent is unavailable to take the survey on the phone at the time of the initial parent interview, the parent will be asked for contact information for the adolescent; this will include an alternative phone number and email address. The adolescent will then be invited via cell phone and/or email to complete the survey by using a link to the web version of the survey. Reminder texts/emails will be used to invite nonresponders to complete the survey. Adolescents will complete the assent procedure via web before completing the online survey. As described in Respondent Selection, within a household, we will potentially interview up to three respondents: one adult, one parent (if the randomly selected adult is unable to sufficiently provide information about children in the home), and one adolescent 13 to 17 years of age. During data analysis, adolescent responses to the CATI and Web modalities will be compared for differences in level of agreement with the parent proxy reporter, missing data and respondent breakoff.
Included in the appendices are an eligibility screener for the household in English and Spanish (Attachment E-1 and E-7, respectively) that will be administered after the introduction script in English and Spanish (Attachment F-1 and F-6, respectively). Also included are adult-proxy and adolescent screeners to assess whether a respondent has experienced a head injury in the past 12 months, as this will appreciably influence the level of burden for an individual respondent. The screeners are included in English and Spanish for the Adult-Proxy (Attachment E-2 and E-8, respectively) and the Adolescent (Attachment E-4 and E-10, respectively). The Adult-Proxy Screener will be administered after the introduction script in English and Spanish (Attachment F-1 and F-6, respectively) and the appropriate consent, which depends on whether the adult will be serving as a proxy reporter for youth 5 to 17 years of age. Adults who will not be serving as proxy reporters will receive the Consent Adult Not Proxy in English or Spanish (Attachment F-2 and F-7, respectively), while adults who will be serving as proxy reporters will receive the Consent Adult Proxy in English or Spanish (Attachment F-3 and F-8, respectively). If there is an adolescent 13 to 17 years of age in the household, the Consent Adult for Adolescents will be delivered to the adult in English or Spanish (Attachment F-4 and F-9, respectively) after the proxy interviews have been completed. The Adolescent Screener will be administered after the Introduction and Assent for Adolescent script in English or Spanish (Attachment F-5 and F-10, respectively). As shown in the Appendices, the only differences in the adolescent Introduction and Assent between the phone and web modes are that the web version will use a “continue” button to advance to the survey and the CATI survey will begin with “Hello, my name is ____ calling on behalf of the Centers for Disease Control and Prevention.” We anticipate that it will take adolescents the same amount of time to complete the screener and/or survey regardless of whether they are completing it on the Web or by CATI.
For those respondents who indicate that they or their child have experienced a head injury in the past 12 months, we included English and Spanish surveys for the adult/proxy (Attachment E-3 and E-9, respectively) and the adolescent (Attachment E-5, E-6, and E-11 and E-12, respectively). The same introduction and consent scripts that were used for the Adult-Proxy Screener and the Adolescent Screener will be used for the surveys.
The Adult-Proxy instruments and the Adolescent instruments are similar so that we can assess the level of agreement; however, they are not identical. For instance, we do not ask adolescents for any demographic information on the screener or survey as we do of adults on behalf of themselves and their children, nor do we ask questions about returning to school or play on the survey as we do of adults on behalf of their children. Please see the Attachments for each screener and survey.
Data Collection Staff
Survey data will be collected by trained interviewers employed by ICF International. ICF has created a public health interviewing team for its BRFSS and BRFSS-protocol surveys. To be selected for the team, individuals must meet minimum standards with respect to tenure, response rate, non-response conversion capabilities, and interviewer performance based on monitoring sessions. To retain membership, interviewers are required to attend regular retraining sessions in refusal avoidance, non-response conversion, and general interview technique. ICF maintains a core group of at least 110 public health interviewers at any time, and it is from this group that it will select interviewers to collect data for the TBI Surveillance System.
New ICF interviewers participate in a rigorous two-day training that gives them an excellent foundation in proper interviewing techniques, teaches techniques for gaining respondent trust and cooperation, provides instruction in how to work efficiently within the CATI program software, and emphasizes the importance of survey work and the interviewer’s role for each project. Interviewers are taught the importance of reading verbatim, scheduling re-contact attempts at optimum times, following procedures properly, and using non-leading probes. They then undergo extensive hands-on practice with the CATI system and work through an exhaustive series of practice interviews and interviewing situations.
All interviewers, supervisors, and quality control personnel (who monitor data collection) participate in project-specific trainings that include the survey’s purpose and scope, a detailed review of the questionnaire, the appropriate technique for conducting the screening portion of the interview, flagging a record for an interview in a language other than English, definitions and pronunciations of key terminology, dealing with uncooperative respondents, and interviewing techniques for different types of respondents, such as busy or distracted individuals. ICF will administer a short quiz within the last two hours of the project training. Interviewers will need to demonstrate a superior level of knowledge regarding the project before being allowed to collect survey data.
Data Coding Quality Control Procedures
During data collection, ICF project management staff will check the CATI system settings to ensure that the call attempt and callback protocols are being met. These settings, and other project status reports, are available in real-time on an internal web site, called I-Site. I-Site also provides custom-designed reports, which typically include sample status, telephone numbers that need further attempts to meet any established protocols, the telephone numbers currently assigned to interviewers specially trained in refusal conversion, and the total number of completed interviews. These reports are updated nightly. Project management staff will also monitor calls throughout fielding and provide feedback to CATI supervisors regarding specific interviewer performance, as well as individual and overarching training needs.
Also during data collection, ICF will maintain a database of all CATI calls that took place over the prior 14 days. In addition to conducting live monitoring, recorded interviews allow ICF to conduct additional QC tasks. The database houses the majority of attempts, including everything from completed interviews and introductions to no answers (e.g., answering machines and privacy managers). These recordings can be used by ICF to listen first-hand to the nuances of a respondent's reactions and answers, train newer interviewers by having them listen to and evaluate exemplary interviews, and train all staff by sharing excellent refusal aversion and conversion scenarios. These procedures encourage interviewers to consistently achieve the highest performance standards and adhere to all study protocols, thereby improving data quality.
During data processing, the ICF project management team will review open-ended and “other/specify” responses in the first few weeks of data collection, and then periodically throughout fielding, to identify potential coding or training issues. Prior to delivering the dataset, we will clean the data (to correct grammatical and typographical errors) and when applicable back-code open-ended responses. After converting and cleaning the data, ICF will produce frequency tabulations of every question and variable to detect missing data or errors in skip patterns, similar to the checks performed during questionnaire programming. ICF will also perform a variety of other checks using SAS programs designed specifically by programmers. For each question, responses outside of the expected range are flagged. Checks are also performed across questions to evaluate consistency. In most cases, inconsistencies discovered are the result of minor errors in the CATI program that affect how the data are stored in the data file. These can usually be resolved by further inspecting the individual record. They are also fixed in the program, so the error does not occur again if the survey is to be fielded in the future.
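The range and cross-question consistency checks described above can be illustrated with a brief sketch; the variable names and expected ranges are hypothetical, not the actual CATI variables:

```python
# Sketch: post-collection range and consistency checks of the kind described above.
# Variable names (AGE, HEADINJ_12M, NUM_INJURIES) are hypothetical placeholders.
import pandas as pd

data = pd.DataFrame({
    "AGE": [34, 16, 210, 45],            # 210 is out of range
    "HEADINJ_12M": [1, 2, 1, 2],         # 1 = yes, 2 = no
    "NUM_INJURIES": [2, 0, 1, 3],        # should be 0 when HEADINJ_12M == 2
})

# Frequency tabulation for every variable
for col in data.columns:
    print(data[col].value_counts(dropna=False), "\n")

# Flag responses outside the expected range
out_of_range = data[(data["AGE"] < 5) | (data["AGE"] > 120)]

# Cross-question consistency: injuries reported without a reported head injury
inconsistent = data[(data["HEADINJ_12M"] == 2) & (data["NUM_INJURIES"] > 0)]
print(out_of_range, inconsistent, sep="\n")
```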
Data will also be analyzed to discern any potentially problematic questions or survey sections that may be increasing respondent break-off. Although the survey was cognitively tested to address the key questions asked of most respondents who report having had a head injury, a larger sample is required to provide an adequate sample for all skip patterns. Datasets will be carefully reviewed for seemingly aberrant or inconsistent responses that might also signal problems of comprehension, recall, or reporting in either the questions or the response categories.
A cleaned, unweighted, data file, including variable and value labels, will be provided to CDC via a secure FTP site. All data files will be submitted with the format or layout files.
Statistical Tests and Power Analysis
Aim 1.1: Statistical Tests and Power Analysis for Analyzing TBI Outcomes that Serve as Indicators of Severity
As stated in B.1, Aim 1.1 is to test associations between TBI and TBI outcomes that serve as indicators of severity.
Power analyses for comparisons of means (e.g., ANOVA, t-test) are described in terms of comparing three groups (Tier 3 TBI, Tier 2 TBI, and Tier 1 TBI) and two groups (Tier 3, Tier 2, and Tier 1 TBI combined versus non-cases). The size of the groups is unknown, and the calculations take into account the possibility of an unbalanced design. Based on assumptions about our estimate for this aim (1% among adults and children 5 years of age and older), we expect about 112-169 respondents in total to endorse TBI symptoms and thus be classified as Tier 3, Tier 2, or Tier 1 (see Table B.2.1). We present power for detecting overall effects ranging from small-to-medium to large (f = 0.15 to 0.40; d = 0.30 to 0.80) using overall sample sizes of 120, 150, 180, and 210, for comparisons between three groups and between two groups. All power calculations are based on two-tailed tests at a 0.05 significance level. Nonparametric tests (Kruskal-Wallis for three groups and Wilcoxon rank-sum for two groups) will be slightly less powerful; the asymptotic relative efficiency (ARE) for these tests is 0.955, meaning that a sample of 105 (100/0.955) for the Kruskal-Wallis test will produce the same power as a sample of 100 for the F-test.
It is important to note that these estimates of sample size are conservative given the existing TBI estimates in the population are likely underestimates. In the case of the two group analyses between TBI and non-cases, the samples will also include head injuries without self-reported symptoms that are most likely not captured in the existing estimates of TBI.
Figure B.2.1. Power to Detect Group Differences: 2 Groups
Figure B.2.2. Power to Detect Group Differences: 3 Groups
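The power values summarized in Figures B.2.1 and B.2.2 can be approximated with standard power routines. The sketch below assumes the statsmodels package and, for simplicity, balanced groups; it is illustrative rather than the exact calculation behind the figures:

```python
# Sketch: power for the two- and three-group comparisons described above (balanced groups assumed).
from statsmodels.stats.power import FTestAnovaPower, TTestIndPower

alpha = 0.05
for n_total in (120, 150, 180, 210):
    # Three-group comparison (ANOVA), Cohen's f from 0.15 to 0.40
    p3 = {f: FTestAnovaPower().power(effect_size=f, nobs=n_total, alpha=alpha, k_groups=3)
          for f in (0.15, 0.25, 0.40)}
    # Two-group comparison (t-test), Cohen's d from 0.30 to 0.80, equal group sizes assumed
    p2 = {d: TTestIndPower().power(effect_size=d, nobs1=n_total / 2, alpha=alpha, ratio=1.0)
          for d in (0.30, 0.50, 0.80)}
    print(n_total,
          {k: round(v, 2) for k, v in p3.items()},
          {k: round(v, 2) for k, v in p2.items()})
```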
Aim 1.2: Statistical Tests and Power Analysis for Comparison of Parent Proxy and Adolescent Self-Reports
To establish the minimum sample size for the test of correspondence, we will use Cohen’s kappa coefficient (14), κ = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement between the parent proxy and adolescent reports and p_e is the proportion of agreement expected by chance. The parameters used in the calculation are described in Table B.2.2.
Table B.2.2. Parameters for Comparison of Adolescent Report to Parent Proxy Reports

Parent Proxy Report | Adolescent Report: Yes | Adolescent Report: No | Total
Yes | p11 | p10 | p1+
No | p01 | p00 | p0+
Total | p+1 | p+0 | 1

Each cell pij is the proportion of matched parent/adolescent pairs with parent proxy response i and adolescent response j; p1+, p0+, p+1, and p+0 are the corresponding row and column totals.
As stated in Supporting Statement A, Section A.16, CDC researchers have determined that 70% is the target minimum level of agreement for parent proxy reporting, meaning that 70% of the positive responses reported by the parent are confirmed by the adolescent (p11/p1+ = 0.70) and 70% of the adolescent positive responses are confirmed by the parent (p11/p+1 = 0.70). Using the assumption that 13% of the sub-population of study will report a head injury with at least one symptom (same as above), a minimum of 9.1% of the responses will be positive responses by both the parent and the adolescent (p11 = 0.091). This level of agreement corresponds to a Cohen’s kappa of about 0.66. The acceptable range of kappa is considered to be 0.40 to 0.75 (15). We will test the null hypothesis that kappa is less than or equal to 0.66 (H0: κ ≤ 0.66) versus the alternative that kappa is greater than 0.66 (H1: κ > 0.66). Since 0.66 is the minimum acceptable value, we will assume a higher agreement level for developing the sample size: an 80.0% level of agreement on positive responses, or a kappa of 0.77.
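The kappa values cited above (approximately 0.66 at 70% positive agreement and 0.77 at 80% positive agreement) can be verified with a short calculation. This sketch assumes, as above, a 13% prevalence for both the parent proxy and adolescent reports:

```python
# Sketch: Cohen's kappa implied by a 13% prevalence and a given level of positive agreement.
# Assumes the marginal prevalence is the same (13%) for parent proxy and adolescent reports.

def kappa_from_agreement(prevalence, positive_agreement):
    p_yes_yes = positive_agreement * prevalence                 # both report a TBI (p11)
    p_discordant = 2 * (prevalence - p_yes_yes)                 # one reports a TBI, the other does not
    p_o = 1 - p_discordant                                      # observed agreement
    p_e = prevalence ** 2 + (1 - prevalence) ** 2               # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

print(round(kappa_from_agreement(0.13, 0.70), 2))  # ~0.66, the minimum acceptable agreement
print(round(kappa_from_agreement(0.13, 0.80), 2))  # ~0.77, the level assumed for the sample size
```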
Based on a one-tailed test at α=0.05 and 80% power, the minimum required sample size is 623 matched parent/adolescent responses.4
Using data from the BRFSS and NHIS as described above, we estimate that 14.4% of households nationwide will have an adolescent 13 to 17 years of age. Therefore, the sample size of 10,000 households described above creates the potential for 1,440 households with an adolescent 13 to 17 years of age. Adult respondents will be asked if they consent for their adolescent to complete the survey following the completion of all proxy interviews. The protocol for contacting adolescents will proceed as described earlier in this section. However, adolescents may be unavailable at the time of the parent interview so there is the potential for attrition. Additionally, adolescents may start but not complete the survey via phone or web modalities. Thus, we use 50% as a conservative rate for obtaining matched pair responses (responses from both the parent and adolescent) from the anticipated number of households estimated to have an adolescent 13 to 17 years of age. Therefore, we anticipate obtaining 720 matched pairs. This number is sufficient to meet the minimum required for Aim 1.2.
Aim 2.2 and 2.3: Computation of Level of Precision for Comparison to National Estimates
Aims 2.2 and 2.3 are to compare our estimates to national incidence estimates of TBI for adults and children 5 years of age and older, children 5 to 17 years of age, and youth 14 to 18 years of age participating in sports and recreation. We focus on these three groups in developing the sample size. The sample size for estimating a proportion is based on the size of the proportion (e.g., 0.5, 0.1), the expected design effects inherent in conducting a survey from a complex sample such as a dual-frame RDD, and the desired level of precision for the estimate (e.g., 95% confidence interval width). Specifically, the sample size is calculated as:
n = deff × z² × p(1 − p) / MOE²

where:
p = estimate (the expected proportion)
MOE = desired margin of error (half-width of the 95% confidence interval)
z = 1.96 (the critical value for a 95% confidence interval)
deff = design effect
As described in Supporting Statement B, Section B.1, TBI is a rare outcome for which current incidence estimates are subject to a number of limitations; therefore, we have provided the expected number of cases for 12-month estimates that range from 1% to 13%, depending on the sub-population examined.
Design Effects
We will assume a design effect (deff) of 1.5. The design effect represents inefficiencies inherent in the sample design and operations. Since the sample design is based on an RDD telephone sample of landline and cell phones, there will be dual-frame weighting adjustments as well as weighting adjustments for nonresponse. Further, collecting TBI information for multiple people per household could introduce clustering effects. Given our optimal allocation, we expect a small design effect of 1.02 from combining the dual frames.5 For the nonresponse weighting effect, we used data from the 35 states that calculated child weights in the 2014 BRFSS; the average increase from the design weights to the final weights for these states is 1.37. Multiplying this by the dual-frame adjustment results in a design effect of 1.4. We increased this to 1.5 to account for potential effects of intracluster correlation.
The smallest sub-sample for comparison to national estimates is youth 14 to 18 years of age who might sustain an SRR-TBI, so we have based our minimum sample size on this population. The minimum sample size to obtain a 3.0% margin of error for a 95% confidence interval is 725 interviews. However, given that youth 14 to 18 years of age who participate in sports and recreation are a subset of all children 14 to 18 years of age, we will need a larger sample of children 14 to 18 years of age to obtain the minimum sample size for this group.
According to a recent report, 58% of high school students nationwide played on at least one sports team run by their school or community groups during the 12 months preceding the 2015 survey (16). We will use this percentage in the sample size calculations, although it is likely conservative because many youth who do not participate in organized sports do participate in recreational activities, such as bike riding, skateboarding, and skiing. In the course of screening for the 725 SRR interviews among those 14 to 18 years of age, we expect to conduct 1,250 (725 × 1/0.58) interviews with children 14 to 18 years of age. According to the NHIS, the average number of children in this age range, among households with at least one child in this age range, is 1.25 per household. To be conservative, we assume that we will collect TBI information for one eligible youth per household.
According to data from the 2014 Behavioral Risk Factor Surveillance System (BRFSS), 21% of households contacted in the landline sample have children 17 years of age or younger; in the cell phone sample, 36% had children 17 years of age or younger. Using our projected cell phone and landline allocation, we expect 33.6% of households contacted to have children 17 or under. We adjust this to 12% based on data from the 2014 NHIS indicating that 36% of households with children had children 14 to 17 years of age. Similarly, 11% of the BRFSS cell phone respondents and 2% of landline respondents were between the ages of 18 and 24. Assuming an equal distribution by age, we expect 1.6% (11% × 1/7) of the cell phone interviews and 0.3% (2% × 1/7) of the landline interviews to be with an 18 year old. Using the cell and landline allocation, we expect 1.4% of interviews to be with 18 year olds. Adding the 14-17 and 18 year olds together, we expect to obtain a 14-18 interview in 13.4% of households contacted. Therefore, we will need to contact and screen at least 9,328 households to produce a sample of 725 SRR proxy interviews with parents of youth 14 to 18 years of age. We have increased this number to 10,000 to be conservative in our approach.
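The sample-size and screening arithmetic above can be reproduced with a short calculation, using the formula and assumptions given earlier (deff = 1.5, z = 1.96):

```python
# Sketch: reproducing the SRR sample-size and household screening calculations above.
import math

def required_n(p, moe, deff=1.5, z=1.96):
    """n = deff * z^2 * p * (1 - p) / MOE^2, rounded up."""
    return math.ceil(deff * z ** 2 * p * (1 - p) / moe ** 2)

n_srr = required_n(p=0.13, moe=0.03)   # ~725 SRR interviews among youth 14-18
n_14_18 = n_srr / 0.58                 # ~1,250 interviews with youth 14-18 (58% play team sports)
households = n_14_18 / 0.134           # ~9,328 households screened (13.4% yield a 14-18 interview)
print(n_srr, round(n_14_18), round(households))
```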
In summary, we will conduct 10,000 household surveys from a national sample. For each randomly selected household, we will conduct a TBI Case Definition Validation Study survey with at least one adult, resulting in a sample of 10,000 adult interviews. In addition, we expect to obtain approximately 4,050 all-cause TBI surveys for children 5 to 17 years of age that will be applicable to the comparisons to TBI healthcare visits reported in the Truven Health Analytics MarketScan Research Databases. This estimate is based on adjusting the 33.6% of households with children 0-17 to 27% based on data from the 2014 NHIS indicating that 82% of households with children had children 5 to 17 years of age. Further, the average number of children in this age range, among households with at least one child in this age range, is 1.7 per household. We use an average of 1.5 interviews per household to estimate 4,050 total child interviews (10,000 × 27% × 1.5). Combining the 4,050 proxy interviews with the 10,000 adult interviews results in a sample of 14,050 interviews applicable to the group comparisons described in Aim 1.1 and the comparisons to TBI incidence reported in hospital and ED populations described in Aim 2.2. Finally, we expect to complete 1,340 surveys of children 14 to 18 years of age, with 775 surveys conducted with those who participated in SRR activities and applicable to the comparison to SRR-TBI incidence reported in the 2017 YRBS described in Aim 2.3 (see Table B.2.1).
Table B.2.1. Sample Size and Expected Number of TBI Cases
Aim | Group | Sample size | Potential Range of TBI Estimates | +/- 95% CI | Expected number of TBI cases
1.1, 2.2 | Adults and children 5 years and older | 14,050 | 5% | 0.44% | 703 (640-765)
1.1, 2.2 | Adults and children 5 years and older | 14,050 | 3% | 0.35% | 422 (373-470)
1.1, 2.2 | Adults and children 5 years and older | 14,050 | 1%a | 0.20% | 141 (112-169)
2.2 | Children 5-17 | 4,050 | 5% | 0.82% | 203 (169-236)
2.2 | Children 5-17 | 4,050 | 3% | 0.64% | 122 (95-148)
2.2 | Children 5-17 | 4,050 | 1%b | 0.38% | 41 (25-56)
2.3 | Youth 14-18 – SRR | 775 | 13%c | 2.90% | 101 (78-123)
2.3 | Youth 14-18 – SRR | 775 | 10% | 2.59% | 78 (57-98)
2.3 | Youth 14-18 – SRR | 775 | 5% | 1.88% | 39 (24-53)
2.3 | Youth 14-18 – SRR | 775 | 3% | 1.47% | 23 (12-35)
a Indicates the estimated percentage for TBI among adults and children ages 5 years and older
b Indicates the estimated percentage for TBI among children 5 to 17 years of age
c Indicates the estimated percentage for SRR-TBI among adolescents 14 to 18 years of age
Declining response rates are an industry-wide trend affecting all modes of data collection (17). Our methodology is based on best practices for maximizing response in RDD CATI research, such as:
Using highly trained interviewers (including bilingual Spanish speakers) with effective interviewing techniques
Using a sample management approach that ensures a high number of contact attempts (15 for landline numbers and up to 8 for cell phone numbers)6 (see Attachment H-1 for details)
Distributing calls across days and times (day, evening), with increased scheduling during peak times
Maintaining a dedicated nonresponse conversion team
Table B.3.1 details ICF’s strategies for maximizing response rates.
Table B.3.1. Techniques for Maximizing Response Rates
Strategy | Description | Outcome
Focus on Minimizing Partially Completed Interviews | Separate records suspended mid-interview into a special study and create a report that shows how far each record is from completion. Records with selected respondents and non-terminal dispositions are attempted up to the maximum number of attempts. | A call center floor supervisor or Quality Assurance (QA) specialist calls these respondents, lets them know how much we appreciate the time they have already invested, and explains how close they are to having their responses counted. This strategy improves cooperation and overall response and reduces the number of partial completes.
Collect Data With a Dedicated Public Health Interviewing Team | Maintain a group of highly skilled interviewers specifically trained to conduct BRFSS-protocol surveys. | A dedicated team of high performers understands the importance of obtaining high response rates, and their familiarity with the survey and with respondent questions and concerns enables them to respond effectively, promoting cooperation.
Use Dedicated Non-Response Conversion Staff | Use a group of specially trained interviewers, floor supervisors, and QA specialists to call back 100% of soft refusals and partial completes. | These deft interviewers have proven their ability to convert reluctant respondents, or have shown exceptional refusal aversion methods on non-conversion attempts, resulting in more completed interviews.
Prioritize Scheduled Appointments | Run daily reports that list the times of all scheduled callbacks for the day to ensure that the project is always staffed to accommodate all callbacks. | Honoring scheduled callbacks results in reaching willing respondents more reliably.
Create a CATI-Programmed Frequently Asked Questions (FAQ) Screen | Enable interviewers to access project information with a few simple keystrokes so they can address respondent questions quickly, uniformly, and accurately. | Increasing respondent confidence results in increased cooperation.
Allow Appointments Outside Usual Calling Hours | Schedule appointments when a respondent requests a time outside of normal calling hours. We retrieve these records manually so that only the number that requested the off-hours appointment is dialed at that time. | Increasing respondent convenience results in more completed surveys.
Implement an Interactive Voice Response (IVR) Respondent Help Line | Develop an in-language IVR system that includes options for talking to a floor supervisor or the project manager (or a representative from the Department, if desired), learning about participant confidentiality, etc. | Promotes informed survey response and provides 24-hour survey information.
Display Caller Identification | Display a caller ID number linked to the IVR system. | We reach respondents with call-blocking and privacy-manager devices; informing respondents of the importance of the National TBI Survey research effort is critical to achieving a national survey sample.
Focus on First Contacts | Develop a dedicated group of exceptional interviewers to make the first few critical call attempts. | Because the majority of completed interviews occur on the first or second attempt, a small group of interviewers with proven success on first contacts will result in more completed interviews.
Expected Response Rates
As noted, we expect to achieve a 48.7% landline and 40.5% cell phone response rate based on AAPOR’s response rate #4 (RR4). These response rates match the median response rates for all BRFSS states and territories in 2014 (2, 3). We have outlined our methods to maximize response rates above. Below, we describe our plan to analyze the survey data for non-response and representativeness, and to develop weighting adjustments to increase the representativeness of the sample.
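For reference, AAPOR Response Rate 4 counts partial interviews as respondents and discounts cases of unknown eligibility by an estimated eligibility rate e (see reference 3). The sketch below restates that published formula; the function and variable names are ours, not AAPOR's.

    def aapor_rr4(complete, partial, refusal, noncontact, other,
                  unknown_household, unknown_other, e):
        # RR4 = (I + P) / ((I + P) + (R + NC + O) + e*(UH + UO))
        respondents = complete + partial
        eligible_nonrespondents = refusal + noncontact + other
        estimated_eligible_unknown = e * (unknown_household + unknown_other)
        return respondents / (respondents + eligible_nonrespondents + estimated_eligible_unknown)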
Non-Response Bias
As stated above, we acknowledge that these response rates do not meet the OMB standard of an 80% response rate. To mitigate the risk of non-response bias, we will develop weighting adjustments to increase the representativeness of the sample relative to the population. We will evaluate representativeness by comparing the RDD sample to benchmarks such as the American Community Survey (ACS) and/or the Current Population Survey (CPS). This analysis will be based on propensity scores measuring the probability of observing a respondent in the TBI Case Definition Validation Study versus in the benchmark survey (e.g., the CPS). We will fit a logistic regression model with a binary outcome (1 = observed in the TBI sample, 0 = observed in the CPS), modeled on age, gender, race/ethnicity, tenure, educational attainment, marital status, and census region. This analysis provides an evaluation of demographic representativeness, which will be quantified in the form of an R-indicator as described by Schouten et al. (18). The R-indicator measures the variability of the estimated propensity scores and is defined as R = 1 - 2*S, where S is the standard deviation of the propensity scores. Values close to 0 indicate weak representativeness and values close to 1 indicate strong representativeness, relative to the independent variables used in the model.
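A minimal sketch of this calculation follows, assuming the TBI and CPS records have been stacked with common demographic covariates; the data structure and variable names here are illustrative rather than taken from the study protocol, so the model-fitting lines are left as comments.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def r_indicator(propensities):
        # Schouten et al. (2009): R = 1 - 2 * standard deviation of the estimated propensities.
        p = np.asarray(propensities, dtype=float)
        return 1.0 - 2.0 * p.std(ddof=1)

    # X: demographic covariates (age, gender, race/ethnicity, tenure, education,
    #    marital status, census region) for the stacked TBI and CPS records.
    # y: 1 = TBI Case Definition Validation Study respondent, 0 = CPS record.
    # model = LogisticRegression(max_iter=1000).fit(X, y)
    # rho = model.predict_proba(X)[:, 1]
    # print(r_indicator(rho))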
Finally, we will compare our results with healthcare administrative databases and with a nationally representative survey of high school youth to understand how the estimates are similar or different and to assess non-response bias.
Cognitive Interview Study
As part of survey development, key aspects of the instrument were cognitively tested. To test the clarity of wording, understanding, ease of recall, and perceived burden of the survey, the Division of Unintentional Injury Prevention contracted with the National Center for Health Statistics’ Division of Research and Methodology’s Collaborating Center for Questionnaire Design and Evaluation Research to conduct a cognitive interviewing study of the survey (Agreement # 16-HS16-2041-HCPCHC, OMB Clearance #0920-0222). Testing the survey through cognitive interviewing increases confidence that respondents understand the questions as intended. The cognitive testing results became available in September 2016. Improvements were made prior to OMB review but after the 30 day Federal Register Notice was published and the period for public comment had closed. The instrument has been updated to focus on the exploration of the validation of the case definition.
The following individuals have reviewed the technical and statistical aspects of the procedures that will be used in the validation of a case definition for parent- or self-reported traumatic brain injury (TBI) study.
Shelley N. Osborn, PhD
Senior Project Manager
ICF International
1 Ada Parkway, Suite 100
Irvine, CA 92618
(415) 677-7199
Robert Tortora, PhD
Chief Methodologist
ICF International
530 Gaither Road, Suite 500
Rockville, MD 20850
(301) 572-0351
Randal ZuWallack, MS
Senior Sampling Statistician
ICF International
126 College Street, Suite 2
Burlington, VT 05401
(802) 264-3724
Marcie-jo Kresnow-Sedacca, PhD
Statistician
Division of Analysis, Research and Practice Integration
National Center for Injury Prevention and Control
Centers for Disease Control and Prevention
4770 Buford Highway Northeast
Atlanta, GA 30341
(770) 488-4753
Yang Liu
Mathematical Statistician
Division of Analysis, Research and Practice Integration
National Center for Injury Prevention and Control
Centers for Disease Control and Prevention
4770 Buford Highway Northeast
Atlanta, GA 30341
(770)488-3909
Kristen Miller
Behavioral Scientist
Division of Research and Methodology
National Center for Health Statistics
Centers for Disease Control and Prevention
3311 Toledo Road
Hyattsville, MD 20782
(301) 458-4625
Meredith Massey
Behavioral Scientist
Division of Research and Methodology
National Center for Health Statistics
Centers for Disease Control and Prevention
3311 Toledo Road
Hyattsville, MD 20782
(301) 458-4275
ICF International, headquartered in Fairfax, Virginia, will collect all survey data. Thomas Brassell, PhD, and Randal ZuWallack, MS, will be the primary parties responsible for reviewing and approving analysis of the data.
Agency Personnel Responsible for Receiving and Approving Contract Deliverables:
Lara DePadilla
Behavioral Scientist
Division of Unintentional Injury Prevention
National Center for Injury Prevention and Control
Centers for Disease Control and Prevention
4770 Buford Highway NE
Atlanta, GA 30341
(770) 488-1568
Matt Breiding
Traumatic Brain Injury Team Lead
Division of Unintentional Injury Prevention
National Center for Injury Prevention and Control
Centers for Disease Control and Prevention
4770 Buford Highway NE
Atlanta, GA 30341
(770) 488-1396
REFERENCES
1. Blumberg SJ, Luke JV. Wireless substitution: Early release of estimates from the National Health Interview Survey, January -- June 2015. National Center for Health Statistics; 2015.
2. Centers for Disease Control and Prevention. Behavioral Risk Factor Surveillance System 2014 summary data quality report. 2015.
3. American Association for Public Opinion Research. Standard Definitions, 9th edition. 2016. Available from: http://www.aapor.org/Standards-Ethics/Standard-Definitions-(1).aspx.
4. Taylor CA, Bell JM, Breiding MJ, Xu L. Traumatic brain injury-related emergency department visits, hospitalizations, and deaths -- United States, 2007 and 2013. MMWR Surveill Summ. 2017;66(No. SS-9).
5. McCrea M, Hammeke T, Olsen G, Leo P, Guskiewicz K. Unreported concussion in high school football players: Implications for prevention. Clinical Journal of Sport Medicine. 2004;14(1):13.
6. Meehan W, Mannix R, O'Brien M, Collins M. The prevalence of undiagnosed concussions in athletes. Clinical Journal of Sport Medicine. 2013;23(5):339-42.
7. Voss JD, Connolly J, Schwab KA, Scher AI. Update on the epidemiology of concussion/mild traumatic brain injury. Current Pain and Headache Reports. 2015;19(7):1-8.
8. Ohio Department of Health. 2013 Youth Risk Behavior Survey results, Ohio high school survey detail tables. 2013.
9. Connecticut Department of Public Health. 2013 Youth Risk Behavior survey results, Connecticut high school survey detail tables. 2013.
10. Guterbock TM, Lavrakas PJ, Tompson TN, ZuWallack R. Cost and productivity ratios in dual-frame RDD telephone surveys. Survey Practice. 2011;4(2).
11. Brick JM. Dual Frame Theory Applied to Landline and Cell Phone Surveys. Survey Research Methods Section Webinar American Statistical Association. 2009.
12. Boyle J, Bucuvalas M, Piekarski L, Weiss A. Zero banks: Coverage error and bias in RDD samples based on hundred banks with listed numbers. Public Opinion Quarterly. 2009;73(4):729-50.
13. Rizzo L, Brick JM, Park I. A minimally intrusive method for sampling persons in random digit dial surveys. Public Opinion Quarterly. 2004;68(2):267-74.
14. Cohen J. A coefficient of agreement for nominal scales. Educational and Psychological Measurement. 1960;20(1):37-46.
15. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159-74.
16. Centers for Disease Control and Prevention. Youth Risk Behavior Surveillance -- United States, 2015. MMWR Surveill Summ. 2016;65(No. SS-6).
17. Czajka JL, Beyler A. Declining response rates in federal surveys: Trends and implications. Report submitted to the Office of the Assistant Secretary for Planning and Evaluation, US Dept of Health and Human Services; 2016.
18. Schouten B, Cobben F, Bethlehem J. Indicators for the representativeness of survey response. Survey Methodology. 2009;35(1):101-13.
1 Calculations are based on unweighted 2014 BRFSS data for all 50 states and the District of Columbia. The variable used for the calculations was _CHLDCNT = Number of children in household. Responses of don’t know or refused were excluded. The 2014 BRFSS data is available from http://www.cdc.gov/brfss/annual_data/annual_2014.html.
2 The NHIS public use data is available for download from http://www.cdc.gov/nchs/nhis/nhis_questionnaires.htm.
3 Based on estimates from the 2014 BRFSS and NHIS.
4 The sample size calculation is based on the R package kappaSize. The documentation is available at https://cran.r-project.org/web/packages/kappaSize/kappaSize.pdf.
5 The design effect due to the dual-frame adjustment is based on the weighting required to combine the landline and cell phone samples. Since people with a cell phone and a landline (“dual-users”) have a chance of selection in both the landline sample and the cell phone sample, they have an increased chance of being selected for the survey. The increased probability of selection for the dual-users causes an unequal weighting effect that increases the variability of survey estimates.
6 Recent calling protocols for the American Community Survey and the National Immunization Survey (NIS) indicate a maximum of 9 attempts and 12 attempts for landlines, respectively. The NIS further indicates a maximum of 6 attempts for cell phones. The NIS also allows for increased call attempts if contact is made, particularly for households in which there are eligible children. As our sampling method is similar to the NIS (adults as proxy reporters for children), we will examine our results in the early stages of our collection to consider reducing our maximum attempts to 12 and 6 for landline and cell phone, respectively, based on whether we see diminishing returns for our efforts at those levels.