Information Collection Request
New
Supporting Statement Part B
Determining Causes of Sudden, Unexpected Infant Death: A National Survey of U.S. Medical Examiners and Coroners
August 15, 2013
Submitted by:
Carrie K. Shapiro-Mendoza, PhD, MPH
Senior Scientist
Phone: (770) 488-6263
Supported by:
Division of Reproductive Health
National Center for Chronic Disease Prevention and Health Promotion
Centers for Disease Control and Prevention
1. Respondent Universe and Sampling Methods
2. Procedures for the Collection of Information
3. Methods to Maximize Response Rates and Deal with Nonresponse
4. Tests of Procedures or Methods to be Undertaken
5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
Bibliography
List of Tables
Table B.1-1 Estimated Size of Respondent Universe and Proposed Study Sample
List of Attachments
Attachment 1: Authorizing Legislation
Attachment 2: Federal Register Notice and Summary of Public Comments
Attachment 2a: Federal Register Notice
Attachment 2b: Summary of Public Comments
Attachment 3: Data Collection Instrument: Survey of Medical Examiners and Coroners
Attachment 4: Introductory and follow-up letters to respondents and telephone interview scripts
Attachment 4a: Telephone Screener (including Frequently Asked Questions [FAQ])
Attachment 4b: Survey Cover Letter
Attachment 4c: Thank you/Reminder Postcard
Attachment 4d: Reminder Call
Attachment 5: Battelle IRB Approval Letter
Attachment 6: Illustrative Table Shells
1. Respondent Universe and Sampling Methods
The population of interest for the survey is medical examiners and coroners (MECs) responsible for determining the cause and manner of death reported on death certificates. About 2,000 individuals meet this description (R. Hanzlick, personal communication, August 2011). Approximately 800 MECs will be selected for inclusion in the study sample and will be invited to participate in the survey.
Selecting the MEC sample will involve the following steps:
First, we will randomly select U.S. counties (with replacement) with probability proportional to the number of SUID-related deaths they reported in 2005-2009. For counties that reported fewer than 10 such deaths in that timeframe, the probability of selection into the sample will be small and proportional to the number of births that occurred there over the same period. Together, these two factors increase the likelihood that respondents have some experience certifying infant deaths. Second, in each randomly selected county, administrative data will be used to contact the authorities responsible for certifying infant deaths. Battelle will establish a sampling frame (a list of persons who meet the survey inclusion criteria) for each selected county, and the appropriate number of names will be randomly selected from that list. In total, 800 MECs will be selected for the study sample and invited to participate. The sampling weight associated with each person will be the inverse of his or her overall selection probability: the probability that the county was included in the sample (calculated from the numbers of reported SUID deaths and births) multiplied by the probability that the individual was selected from among all eligible medical examiners and coroners in the county. MECs who participated in either pretest and MECs who currently serve as consultants on this project will not be eligible to complete the survey.
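The two-stage selection and weighting described above can be sketched as follows. This is a minimal illustration, not the contractor's implementation: the county names, counts, and the birth-scaling factor are hypothetical assumptions chosen only to show the mechanics.

```python
import random

# Hypothetical county records: (name, SUID deaths 2005-2009, births 2005-2009).
# Real counts would come from vital statistics files.
counties = [
    ("County A", 42, 60000),
    ("County B", 15, 22000),
    ("County C", 3, 4000),   # fewer than 10 SUID deaths: size measure uses births
    ("County D", 0, 1500),
]

# Size measure: SUID deaths when >= 10, otherwise a small value proportional
# to births (BIRTH_SCALE is an illustrative assumption, not a study parameter).
BIRTH_SCALE = 1e-4
sizes = [d if d >= 10 else b * BIRTH_SCALE for _, d, b in counties]
total = sum(sizes)
county_prob = {name: s / total for (name, _, _), s in zip(counties, sizes)}

# Stage 1: draw counties with replacement, probability proportional to size.
draws = random.choices([c[0] for c in counties], weights=sizes, k=2)

# Stage 2: for an MEC chosen at random from the eligible certifiers in a
# sampled county, the design weight is the inverse of the overall
# selection probability (county probability x within-county probability).
def design_weight(county, n_eligible, n_sampled):
    p_person = n_sampled / n_eligible
    return 1.0 / (county_prob[county] * p_person)
```

A county with many reported SUID deaths has a high stage-1 probability and therefore a small weight for its sampled MECs, which is what keeps the weighted estimates nationally representative.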
The sampling strategy for this survey has been designed to yield a nationally representative sample of persons who certify infant deaths. Dr. Randy Hanzlick, a past president of NAME and the current chairman of the organization's data committee, estimates that about 2,000 individuals certify infant deaths in the 50 U.S. states (R. Hanzlick, personal communication, August 2011). Our proposed sample of 800 would represent 40% of those individuals.
In order to make quantitative estimates of proportions measured in the survey (e.g., the proportion of coroners and medical examiners who respond in a certain way to a specific survey question), survey weights will be employed to calculate appropriate confidence intervals on those estimates. Under a simple random sample of individuals from a very large population, 800 respondents would yield a 95% confidence interval of about ±3.5% around an estimated proportion of 50%; if the proportion were closer to 0% or 100%, the interval would be narrower. Because this survey has a complex design with possible clustering within counties, the true half-width is unlikely to be exactly 3.5%, and it will be necessary to employ the survey weights to estimate quantitative summaries and their associated confidence intervals.
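The ±3.5% figure follows from the standard normal-approximation margin of error for a proportion. A short check of that arithmetic (ignoring, as the text notes, the design effect from clustering within counties):

```python
import math

# Margin of error for an estimated proportion p with n respondents,
# simple random sampling, 95% confidence (z = 1.96).
def moe(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# With n = 800 and p = 0.50, the half-width is about 3.5 percentage points.
print(round(moe(0.50, 800), 3))   # 0.035
# Proportions nearer 0 or 1 give narrower intervals:
print(round(moe(0.10, 800), 3))   # 0.021
```

A design effect greater than 1 from county-level clustering would widen these intervals, which is why the weighted, design-based variance estimates are required for the final analysis.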
Based on our past experience with surveys of medical professionals (e.g., Montano et al., 2003), we expect that about 80% of the MECs selected for the study will return a completed survey.
Table B.1-1 lists the sampling frame size, sample size, and expected response rate by respondent group.
Table B.1-1 Estimated Size of Respondent Universe and Proposed Study Sample

| Respondent Group  | Number in Universe | Sample Size | Eligible | Target Sample Size Based on Expected Response Rate of 80% |
| Coroners          | 1,800              | 720         | 720      | 576 |
| Medical Examiners | 200                | 80          | 80       | 64  |
| Total             | 2,000              | 800         | 800      | 640 |
2. Procedures for the Collection of Information
Data Collection Methods
Data will be collected using a paper survey instrument (Attachment 3). The survey instrument contains questions about respondents’ organization/reporting jurisdiction (Section A), classification of death for a series of hypothetical infant death cases (Section B), knowledge and opinion regarding interpretation and reporting of infant deaths (Section C), reporting jurisdiction practices and training (Section D), respondent characteristics and demographics (Sections E and F), and jurisdiction-specific training and resource needs and general comments (Section G).
Following a telephone screening call (Attachment 4a), a survey packet will be sent by Federal Express to the individual identified during the screening call. The packet will include: (1) the survey questionnaire with a pre-printed ID number; (2) a personalized cover letter emphasizing the importance of the study; (3) a postage-paid, self-addressed return envelope; and (4) a $40 honorarium. Respondents will be asked to return their completed survey in the postage-paid, return envelope.
Within two weeks of the initial mailing, a thank you/reminder postcard will be sent to each respondent to encourage survey completion (Attachment 4c). The postcard will include a toll-free number that can be called if the respondent has any questions about completing the survey or needs to have another copy of the survey mailed.
A survey tracking database will be used to track all returned surveys. Two weeks after the postcard reminder is mailed, a telephone call will be placed to respondents who have not returned a completed questionnaire. This call will serve as a reminder and allow the opportunity to answer any questions that may be delaying survey completion (see copy of follow-up call script, included as Attachment 4d). A second telephone call will be made if a completed survey is not received within two weeks following the first follow-up call (Attachment 4d), and a third (and final) call will be made if a completed survey is not received within two weeks following the second call (Attachment 4d). At any point, if a respondent requests an additional survey, Battelle's software application will allow the interviewer to record the request; the tracking system will be notified automatically so that a second survey mailing can be sent within one day of the request.
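The contact timeline above (postcard at two weeks, then up to three reminder calls at two-week intervals) can be expressed as a small scheduling sketch. The function name and the example mail date are illustrative assumptions, not part of the study protocol.

```python
from datetime import date, timedelta

# Hypothetical scheduler mirroring the follow-up protocol: thank you/reminder
# postcard two weeks after the initial mailing, then up to three reminder
# calls, each two weeks after the previous contact.
def contact_schedule(mail_date):
    postcard = mail_date + timedelta(weeks=2)
    calls = [postcard + timedelta(weeks=2 * i) for i in (1, 2, 3)]
    return {"postcard": postcard, "reminder_calls": calls}

s = contact_schedule(date(2013, 9, 2))
# Postcard on 2013-09-16; calls on 2013-09-30, 2013-10-14, 2013-10-28.
```

In practice each later contact would be suppressed as soon as the tracking database records a returned questionnaire for that ID.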
All survey data will be keyed into a study-specific data entry application. After all data have been checked for quality assurance purposes, the data file containing all survey response data will be converted into a de-identified data set for analysis.
3. Methods to Maximize Response Rates and Deal with Nonresponse
Collecting data by mail has been shown to be the best approach for a variety of groups, particularly physicians and other medical professionals. Other alternatives, including face-to-face interviews and computer-assisted telephone interviews, have their own strengths and weaknesses. Personal face-to-face interviewing has generally produced the highest response rates (70-90%) but is also the most expensive type of data collection and takes the greatest amount of time to complete; its cost would be prohibitive for this survey. Telephone surveys have traditionally achieved response rates comparable to face-to-face interviews (70-90%) while costing substantially less to conduct. However, telephone interviews must be kept shorter: it is more difficult to hold a respondent's attention on the telephone than in person, and methods researchers recommend limiting telephone interviews to 20 minutes for an optimal response rate. Survey operations researchers also report spending more time screening for valid telephone numbers because of the growth in numbers assigned to cell phones, pagers, modems, and faxes, and many individuals use answering services or voice mail to screen out unwanted calls. With multiple unusable numbers, telephone data collection is becoming less efficient and more costly, and the cost and effort of contacting MECs to schedule a personal or telephone interview would be very high. Electronically administered surveys often yield lower response rates among medical professionals than paper surveys (VanGeest, Johnson, & Welch, 2007).
For example, among primary care physicians who were offered options of completing surveys by telephone, fax, email, or online, 88% of surveys were returned by mail, 10% were returned by fax, 2% were completed online, and none were completed by telephone or email (Nicholls et al., 2011).
Mailed surveys are the least expensive form of data collection, but researchers have usually had to contend with much lower response rates: approximately 20-40 percentage points lower with one mailing and no follow-up compared to one mailing with additional contacts (Dillman, 2000). The disadvantage of mail surveys is that the decision of whether to participate is under the complete control of study respondents, and survey length has been shown to affect this decision. The optimal length for a self-administered mail survey, without negatively affecting response rates, is about 10-12 pages, or about 125 closed-ended items (Dillman, 1978). For the same response time burden, one can ask more questions with a self-administered mail survey than in a telephone interview, allowing self-administered questionnaires to be longer than telephone interviews, although not as long as in-person interviews. Research has shown that self-administered mail surveys can be longer if the topic is of high interest or importance to respondents.
To overcome the low response rates typically encountered with mail surveys, Dillman (1978) proposed a mailed survey methodology that was based on social exchange theory. His method, called the Total Design Method (TDM), has been shown to increase response rates among mail survey respondents to as high as 77%, comparable to telephone and in-person response rates (Dillman, 2000). The Total Design Method described by Dillman in 1978, now called the Tailored Design Method, consists of a number of suggested steps to improve survey response rates. The basis for TDM is that researchers can encourage higher response rates through the use of social exchange theory by rewarding respondents through non-monetary or monetary means, reducing perceived costs to respondents by reducing effort, and establishing trust through treating the respondent as a partner in the process. Dillman recommended that, in operationalizing these factors based on social exchange theory, researchers must pay attention to the details of contact with respondents, wording of letters, incentives related to completion, length of questionnaires, mailings, and follow-up with study participants (Dillman, 2000).
The methods proposed for this study will include a reminder postcard, multiple follow-up phone calls after the initial survey has been sent, inclusion of stamped return envelopes, and monetary incentives to participate, based on Dillman's Tailored Design Method (2000) and the review of survey methods research described above. This plan represents the best approach to balancing the need to control costs with the desire to achieve high response rates. These methods have been highly successful in achieving 70% response to a national survey of physicians (St. Lawrence et al., 2002), 80% response to a Washington State survey of primary care clinicians (Montaño et al., 2003), and 82% response to a mailed survey of 743 primary care clinicians in two large health plans (Irwin et al., 2002). In an effort to improve survey response rates and control costs by identifying ineligible respondents before administration of the survey, we will screen potential survey respondents before mailing out the survey.
4. Tests of Procedures or Methods to be Undertaken
The survey instrument is included as Attachment 3. Multiple phases of survey design, review, and revision were conducted to finalize the survey instrument, which was designed in collaboration with researchers at Battelle. The specific items included on the survey were based on our need for information about MECs, their knowledge and opinions about SUID, and the way in which they classify sudden unexpected unexplained infant deaths. To enable the study team to characterize the respondent sample, we also included questions about the characteristics of respondents' reporting jurisdiction, reporting practices, training, personal demographic characteristics, and jurisdiction-specific training and resource needs. Three of the survey items (A1, D2, and D3) were adapted from the DoJ survey of MEC offices (Hickman et al., 2007). Earlier versions of the scenarios included in Section B of the survey were used previously in trainings conducted by two of the project consultants (T. Andrew and R. Hanzlick).
Early drafts of the survey instruments were reviewed by our four MEC project consultants and were further revised. The instruments were next pre-tested by 9 practicing MECs. Pretest participants reviewed the draft survey cover letter and then reviewed and completed a draft survey instrument. Respondents returned the original copy of their completed survey and comments to the project interviewer. Respondents then participated in a telephone interview lasting about 30 minutes which included questions about survey comprehensibility and length. Pretest participants were given the opportunity to provide suggestions for how the survey instrument and cover letter could subsequently be improved. Final revisions to the survey instrument and cover letter were made based on the review and recommendations of the pre-test participants and project consultants.
Data Collection Procedures
All data collection procedures, question formats, and response scales to be used in this study have been previously tested by the contractor who assisted with design of the study protocol and survey instruments (Battelle). These procedures, which have been used to design questionnaires relevant to practicing medical professionals and to obtain high response rates, have been described in conference presentations including an invited symposium on methods to maximize physicians’ survey response (Kasprzyk, et al, 2000).
5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
CDC collaborated with staff from Battelle's Center for Analytics and Public Health to design the study protocol and data collection instruments. Jennifer Brustrom, Ph.D. (209-726-3458) and Betsy Payn, M.A. (206-528-3138) led the Battelle effort to design the protocol and data collection instrument. Dale Rhoda, MAS, MS, MPP (410-377-5660) designed the sampling plan and provided consultation on statistical power. Four medical examiner/coroner consultants (Thomas Andrew, MD; John Fudenberg; Randy Hanzlick, MD; and Gregory Wyatt) reviewed and provided input on the design and content of the survey instrument. Drs. Andrew and Hanzlick provided the case study scenarios included in Section B of the survey. Battelle will collect the survey data and create an analytic data set. CDC staff will analyze the survey data.
Carrie Shapiro-Mendoza, Ph.D., MPH, Division of Reproductive Health, National Center for Chronic Disease Prevention and Health Promotion, CDC, is the technical monitor, responsible for designing and conducting the data analysis and disseminating the study findings. Dr. Shapiro-Mendoza will approve and receive all contract deliverables (770-488-6263).
Bibliography
Berk, M.L., Edwards, W.S., & Gay, N.L. (1993). The use of a prepaid incentive to convert nonresponders on a survey of physicians. Evaluation & the Health Professions, 16(2):239-245.
Buzzle.com. (2012, May). Coroner salary. Retrieved from http://www.buzzle.com/articles/coroner-salary.html.
Cabana, M.D., Rand, C.S., Power, N.R., Wu, A.W., Wilson, M.H., Abboud, P.C., & Rubin, H.R. (1999). Why don’t physicians follow clinical practice guidelines? A framework for improvement. Journal of the American Medical Association, 282(15):1458-1465.
Camperlengo, L.T., Shapiro-Mendoza, C.K., & Kim, S.Y. (2011). Sudden infant death syndrome: Diagnostic practices and investigative policies. American Journal of Forensic Medicine and Pathology, Jul 20 [Epub ahead of print].
CBE Salary. (2012, May). Medical examiner salary. Retrieved from http://www.cbsalary.com/salary-calculator/chart/Medical+Examiner?kw=Medical+Examiner&jn=jn023&tid=92058.
Delnevo, C.D., Abatemarco, D.J., & Steinberg, M.B. (2004). Physician response rates to a mail survey by specialty and timing of incentive. American Journal of Preventive Medicine, 26(3):234-6.
Dillman, D.A. (1978). Mail and Telephone Surveys. New York, NY: John Wiley & Sons.
Dillman, D.A. (2000). Mail and Internet Surveys: The Tailored Design. New York, NY: John Wiley & Sons.
Everett, S.A., Price, J.H., Bedell, A.W., & Telljohann, S.K. (1997). The effect of a monetary incentive in increasing the return rate of a survey to family physicians. Evaluation and the Health Professions, 20(2):207-214.
Graham, J., Hendrix, S., & Schwalberg, R. (2009). Evaluating the SIDS diagnosis process utilized by coroners in Mississippi, Journal of Forensic Nursing, 5(20), 59-63.
Hanzlick, R. Personal communication. August 2011.
Hanzlick, R. Personal communication. May 18, 2012.
Hauck, F.R. (2004). Changing epidemiology. In: Byard, R.W., Krous, H.F. (Eds), Sudden infant death syndrome. Problems, progress and possibilities. (pp. 31-57). London (UK): Arnold.
Healthy People (2012, May). Maternal, infant, and child health objectives. Retrieved from (http://www.healthypeople.gov/2020/topicsobjectives2020/objectiveslist.aspx?topicid=26).
Hickman, M. J., Hughes, K.A., Strom, K. J., Romero-Miller, J.D. Medical Examiner and Coroners’ Offices, 2004. (2007). Washington, D.C.: U.S. Department of Justice , Office of Justice Programs, Bureau of Justice Statistics.
Indeed.com. (2012, May). Receptionist salary. Retrieved from http://www.indeed.com/salary/Receptionist.html.
Irwin, K.L., Anderson, L., Stiffman, M., et al. (2002). Leading barriers to STD care in two managed care organizations: Final results of a survey of primary care clinicians. 2002 National STD Prevention Meeting, March 4-7, San Diego, CA Abstract P96.
Kasprzyk, D., Montaño, D.E., Phillips, W.R., & Armstrong, K. (2000). System for Successfully Surveying Health Care Providers. Invited symposium at the American Public Health Association meeting, November 2000, Boston, MA.
Kasprzyk, D., Montaño, D.E., St. Lawrence, J., & Phillips, W.R. (2001). The effects of variations in mode of delivery and monetary incentive on physicians’ responses to a mailed survey assessing STD practice and patterns. Evaluation and the Health Professions, 24(1):3-17.
Kochanek, K.D., Xu, J., Murphy, S.L., Minino, A.M., & Kung, H. (2011). National Vital Statistics Report. Health E-Stat. Hyattsville, MD: National Center for Health Statistics.
Laskey, A.L., Haberkorn, K.L., Applegate, K.E., & Catellier, M.J. (2009). Postmortem skeletal survey practice in pediatric forensic autopsies: A national survey. Journal of Forensic Science, 54(1), 189-191.
Malloy, M.H., & MacDorman, M. (2005). Changes in the classification of sudden unexpected infant deaths: United States, 1992–2001. Pediatrics, 115, 1247-1253.
Montaño, D.E., Kasprzyk, D., & Phillips, W.R. (2003). Primary Care Providers’ Role in HIV/STD Prevention. Final Report to the National Institute of Mental Health. Grant No. 5 R01 MH52997-04
National Infant Sleep Position Public Access Web site. http://dccwww.bumc.bu.edu/ChimeNisp/Main_Nisp.asp. Accessed August 1, 2011.
Nicholls, K., Chapman, K., Shaw, T., Perkins, A., Sullivan, M.M., Crutchfield, S., & Reed, E. (2011). Enhancing response rates in physician surveys: The limited utility of electronic options. Health Services Research, 46:1675-1682.
Shapiro-Mendoza, C.K., Tomashek, K.M., Anderson, R.N., & Wingo, J. (2006). Recent National Trends in Sudden, Unexpected Infant Deaths: More Evidence Supporting a Change in Classification or Reporting, American Journal of Epidemiology, 163(8), 762-769.
St. Lawrence, J.S., Montaño, D., Kasprzyk, D., Phillips, W.R., Armstrong, K.A., & Leichliter, J. (2002). STD Screening, Testing, Case Reporting, and Clinical and Partner Notification Practices: A National Survey of US Physicians. American Journal of Public Health, 92(11):1784-1788.
Tambor, E.S., Chase, G.A., Faden, R.R., Geller, G., Hofman, & K.J., Holtzman, N.A. (1993). Improving response rates through incentive and follow-up: The effect on a survey of physicians' knowledge of genetics. American Journal of Public Health, 83(11):1599-1603.
Task Force on Sudden Infant Death Syndrome. (2005). The changing concept of sudden infant death syndrome: diagnostic coding shifts, controversies regarding the sleeping environment, and new variables to consider in reducing risk. Pediatrics, 116(5),1245-1255.
U.S. Department of Justice (2005). DoJ’s 2005 Census of Medical Examiner and Coroner Offices (OMB No. 1121-0296). Washington, D.C.: U.S. Department of Justice, Bureau of Justice Statistics.
VanGeest, J.B., Johnson, T.P., & Welch, V.L. (2007). Methodologies for improving response rates in surveys of physicians: A systematic review. Evaluation and the Health Professions, 30, 303-321.
Walsh, S., Kryscio, R., Holsinger, J.W., & Krous, H.F. (2010). Statewide Systematic Evaluation of Sudden, Unexpected Infant Death Classification: Results for a National Pilot Project. Maternal and Child Health Journal,14, 950-957.