Data Collection for Evaluation of Education, Communication, and Training (ECT) Activities for the Division of Global Migration and Quarantine (0920-0932)
Evaluating the Effectiveness of Ebola CARE Program
Generic Information Collection Request
December 10, 2014
Statement B
Amy McMillen
National Center for Emerging and Zoonotic Infectious Diseases
Centers for Disease Control and Prevention
1600 Clifton Road, N.E., MS D76
Fax: (404) 248-4146
Email: [email protected]
TABLE OF CONTENTS
PART B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS
B.1. Respondent Universe and Sampling Methods
B.2. Procedures for the Collection of Information
B.3. Methods to Maximize Response Rates and Deal with Nonresponse
B.4. Test of Procedures or Methods to be Undertaken
B.5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
B.1. Respondent Universe and Sampling Methods

Five airports in the United States are currently designated for Ebola entry screening: Chicago O'Hare (ORD), Newark Liberty (EWR), Atlanta Hartsfield-Jackson (ATL), John F. Kennedy International (JFK), and Washington Dulles International (IAD). For this evaluation, we intend to conduct a census by asking every traveler who undergoes Ebola screening at ATL, EWR, or ORD within our narrow data collection window to participate. These three airports were selected because other program activities are already being implemented at JFK and IAD and will be implemented at all five designated airports by the end of the calendar year. We are therefore gathering data for a baseline assessment before the other program activities are implemented.
To better assess travelers' experiences of the Ebola screening process, we propose to interview, over a one-week period, all adult English-speaking travelers arriving at these three airports. The initial data collection will consist of a 10-minute in-person interview using the Intercept Interview Guide. Three to five days after this assessment, we will conduct a second, 15-minute interview by phone using the Telephone Interview Guide.
Across the three airports, we will ask every passenger who leaves the secondary screening area to participate in a brief interview; participation is entirely voluntary. Logistically, this approach is feasible because of the manner in which passengers are processed by Customs and Border Protection. Due to staffing constraints, we will only be able to conduct interviews in English. Future assessments of the CARE Kit will also include travelers who speak only French. Table B-1 shows the number of adult passengers we expect to be eligible on a weekly basis and the proportion we expect to be English speaking, both based on reports and observations from the CDC Quarantine Stations. We expect that approximately 90% of invited travelers will participate in the initial assessment and that 70% of those individuals will participate in the follow-up interview; a brief worked example of this arithmetic follows Table B-1.
Table B-1. Eligibility and participation rates determining study size
| Traveler Respondent Universe | Total expected adult travelers during data collection period | English speaking | Invited to participate | Anticipated to participate in initial interview | Anticipated to participate in follow-up interview |
| EWR | 76 | 64 | 64 | 58 | 41 |
| ATL | 56 | 53 | 53 | 47 | 33 |
| ORD | 29 | 26 | 26 | 23 | 16 |
| Total | 161 | 143 | 143 | 128 | 90 |
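To make the participation assumptions concrete, the sketch below applies the expected 90% initial and 70% follow-up participation rates to the English-speaking travelers invited at each airport. It is illustrative only: simple rounding is assumed, so individual figures may differ from Table B-1 by one traveler.

```python
# Illustrative only: applies the 90% and 70% participation assumptions to the
# invited (English-speaking) travelers in Table B-1. Simple rounding is assumed,
# so results may differ from the table by one traveler per cell.
invited = {"EWR": 64, "ATL": 53, "ORD": 26}

INITIAL_RATE = 0.90   # expected participation in the intercept interview
FOLLOWUP_RATE = 0.70  # expected participation in the telephone follow-up

total_initial = total_followup = 0
for airport, n_invited in invited.items():
    n_initial = round(n_invited * INITIAL_RATE)
    n_followup = round(n_initial * FOLLOWUP_RATE)
    total_initial += n_initial
    total_followup += n_followup
    print(f"{airport}: invited={n_invited}, initial={n_initial}, follow-up={n_followup}")

print(f"Total: initial={total_initial}, follow-up={total_followup}")
```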
B.2. Procedures for the Collection of Information

This data collection involves two assessments at two time points: (a) an initial intercept interview at the airport using the Intercept Interview Guide and (b) a follow-up interview conducted over the phone using the Telephone Interview Guide. The data collection instruments are designed to assess traveler knowledge of Ebola, awareness of active monitoring, intention to participate in active monitoring, and initiation of and retention in active monitoring. In addition, the instruments will provide insight into traveler comprehension of Check and Report Ebola (CARE) Kit messages and traveler perception of CARE Kit materials. This information will be used to develop tools and interventions that supplement entry screening practices, enhance travelers' experience of entry screening, and increase travelers' initial uptake of and participation in active monitoring for the full 21-day period. Additionally, this information will be used to develop presentations, reports, and manuscripts.
Procedures for data collection are as follows. First, data will be collected through in-person interviews with the universe of eligible travelers arriving at the ports during the data collection period, which will be 7 days. This period has been chosen because it maximizes the number of interviews that can be conducted within the window of OMB approval and the start dates of additional program activities at the three airports. Additionally, this number of travelers will allow a reasonably robust view of the knowledge, beliefs, and behaviors of those who are undergoing screening and is supported by power calculations used to estimate our ability to determine an effect of the CARE kit on traveler knowledge (described below). Because we will attempt to interview all passengers, there are no statistical sampling procedures for selecting respondents. Burden is minimized because this is a one-time data collection.
Travelers who complete the secondary screening process are released by Customs and Border Protection and can leave the screening area. At this point, the interviewer will approach the traveler and obtain verbal consent. The interview will be conducted in English, audio recorded, and will last no more than 10 minutes. At its conclusion, the interviewer will ask respondents whether they are interested in participating in a follow-up assessment over the phone in 3 to 5 days and, if so, will collect their name and phone number. This information will be recorded separately from the responses to the interview.
Interview responses will be audio recorded, and additional notes will be written on hard copy. Both will be identified with an assigned project tracking number (airport identifier, date, and interview number; for example, ORD.12/10/14.001). If the passenger agrees to be interviewed by phone, the interviewer will note the traveler's name and phone number, which will be transferred to an electronic log containing the project-assigned tracking number, name, and phone number; a brief illustration of this scheme follows.
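As an illustration only (no particular software is specified in the protocol, and the file and field names below are hypothetical), the following sketch shows how a tracking number of the stated form could be generated and how the contact log could be kept separate from interview responses.

```python
import csv
from datetime import date

def tracking_number(airport: str, interview_date: date, seq: int) -> str:
    """Build a tracking number in the form airport identifier.date.interview number,
    e.g. ORD.12/10/14.001 (format taken from the example in the protocol)."""
    return f"{airport}.{interview_date:%m/%d/%y}.{seq:03d}"

def log_contact(path: str, tracking: str, name: str, phone: str) -> None:
    """Append a traveler's contact details to a log kept separately from responses."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([tracking, name, phone])

# Example usage with hypothetical values
tn = tracking_number("ORD", date(2014, 12, 10), 1)   # "ORD.12/10/14.001"
log_contact("contact_log.csv", tn, "Jane Traveler", "555-0100")
```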
The second data collection will be a follow-up phone interview conducted 3 to 5 days after travelers have received the CARE Kit during the airport screening process. The interview will assess Ebola knowledge and traveler initiation of and retention in active monitoring. Data will be collected using the Telephone Interview Guide. Interviewers will attempt to reach the traveler by phone and either conduct the interview on the spot or schedule a time in the future, according to the respondent's preference. We will make up to 5 attempts to reach each respondent. The follow-up interview will take approximately 15 minutes. Data will be audio recorded, and additional notes will be documented on hard copy. Both will be identified with the project tracking number.
Up to 8 interviewers will collect data. All are CDC staff who are part of the CARE team and understand the project and its expectations. All are experienced interviewers and will be trained on these specific data collection procedures. They will follow the procedures as outlined and will meet with the project lead on a regular basis by conference call. For quality assurance, the project lead will regularly listen to a sample of 20% of the audio files. Any needed adjustments or corrections will be discussed on the regular conference calls or via email.
An early version of the questions was pilot tested with 9 travelers at John F. Kennedy International Airport in New York. Travelers were selected using a convenience sampling strategy and asked sample questions about their experiences. The time required to ask these questions and the quality of the responses were used to revise the data collection instruments and determine the composition of the final interview guides. Feedback from the pilot test was also used to ensure the tools were culturally appropriate and to estimate burden hours.
One hundred forty-three respondents will be invited to participate in the interview. Approximately 128 respondents are expected to complete the initial interview, and 90 are expected to complete the follow-up assessment. Using G*Power software (Faul et al., 2007), the power analysis in Table B-2 was calculated for a sign test measuring median change in Likert-scale responses between time points in a final sample of 90 participants (n = 90), with a two-tailed alpha set at the traditional level of 0.05. As shown, the anticipated sample size should be sufficient to detect even modest changes between baseline and follow-up; an illustrative reproduction of these calculations follows Table B-2. The question of primary interest for detecting a change is: How confident are you that you can check yourself for the next few weeks for symptoms of Ebola? For this question, 1 is not confident at all, 2 is somewhat unconfident, 3 is confident, and 4 is very confident.
Table B-2. Sample size calculations.
| Effect size (g) | Power (1-beta) |
| 0.30 | 99.9% |
| 0.25 | 99.9% |
| 0.20 | 97.3% |
| 0.15 | 81.2% |
| 0.10 | 46.0% |
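For transparency, the G*Power figures above can be closely approximated with an exact binomial calculation, since the sign test compares the number of positive changes against a null proportion of 0.5 and g is the deviation of that proportion from 0.5 (the G*Power convention). The sketch below is illustrative only and ignores tied responses, which the sign test discards, so results may differ slightly from G*Power output.

```python
# Approximate reproduction of Table B-2 using an exact binomial sign test
# (illustrative only; ties are ignored).
from scipy.stats import binom

n, alpha = 90, 0.05

# Smallest upper critical value k with P(X >= k | p = 0.5) <= alpha/2;
# by symmetry, the lower critical value is n - k.
k_crit = min(k for k in range(n + 1) if binom.sf(k - 1, n, 0.5) <= alpha / 2)

for g in (0.30, 0.25, 0.20, 0.15, 0.10):
    p1 = 0.5 + g  # proportion of positive changes under the alternative
    power = binom.sf(k_crit - 1, n, p1) + binom.cdf(n - k_crit, n, p1)
    print(f"g = {g:.2f}: power = {power:.1%}")
```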
B.3. Methods to Maximize Response Rates and Deal with Nonresponse

As stated above, we expect that approximately 90% of invited travelers will participate in the initial assessment and that 70% of those individuals will participate in the follow-up interview.
The estimate for the initial interview participation rate is based on a recent, similar data collection (in terms of methods and setting) supported by DGMQ. That evaluation was conducted to determine whether seasonal flu prevention posters were noticed by travelers crossing the U.S./Mexico land border and whether the posters resonated with travelers. A response rate of 88% was achieved for intercept interviews with international travelers in four Federal Inspection Service (FIS) areas.
The expected participation rate for the follow-up (second) interview is based on states' experiences implementing the 21-day active monitoring program. Virtually all passengers who go through secondary screening are being actively monitored, so these individuals are accustomed to short daily phone calls with public health officials.
The following procedures will be used to maximize response rates and deal with nonresponse:
Informing respondents of what the project is asking, why it is being asked, who will see the results, and how the results will be used, as well as discussing how respondents will benefit from the results and how the findings will be put into action.
Addressing data security with respondents. Although we cannot guarantee anonymity, we will assure travelers that significant efforts will be made to secure data and that their answers will not be linked to them in any way. Additionally, participants will be informed that the interview is voluntary, and they are free to skip questions they do not wish to answer, respond “I don’t know”, or end the interview at any time for any reason.
For telephone interviews, outgoing calls that result in no answer, a busy signal, or an answering machine will be automatically rescheduled for subsequent attempts (up to five additional attempts will be tried before determining that a traveler is lost to follow-up).
B.4. Test of Procedures or Methods to be Undertaken

As described above, the estimate for burden hours is based on an early version of the questions pilot tested with 9 travelers at John F. Kennedy International Airport in New York, who were selected using a convenience sampling strategy and asked sample questions about their experiences. The time required to ask these questions and the quality of the responses were used to revise the data collection instruments, determine the composition of the final interview guides, ensure the tools were culturally appropriate, and estimate burden hours. Pilot testing indicated that the average time needed to complete the Intercept Interview Guide, including time for reviewing instructions, will be approximately 10 minutes; 10 minutes will therefore be used for estimating burden hours. Pilot testing also indicated that the average time needed to complete the Telephone Interview Guide, including time for reviewing instructions, will be approximately 15 minutes; 15 minutes will be used for estimating burden hours.
B.5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The protocol and data collection instruments were developed and reviewed in collaboration with staff of NCEZID and DGMQ. The following individuals provided advice about the protocol design, sampling methods, and data collection tools:
| Name (Last, First) | Title | Degree | Roles/Responsibilities |
| Benenson, Gabrielle | Associate Director for Training, Education, and Communication, NCEZID | MPH | Primary in assessment design, data analysis, and outputs; support in data collection |
| Glynn, Natalie | Evaluation Fellow, NCEZID | MDP | Primary in assessment design, data collection, data analysis, and toolkit development |
| Joseph, Heather | Behavioral Scientist, NCHHSTP | MPH | Primary in assessment design, data analysis, and outputs; support in data collection |
| Erskine, Stefanie | Behavioral Scientist, NCEZID | MPH | Primary in data collection; support in assessment design, data analysis, and outputs |
| Prue, Christine | Associate Director for Behavioral Science, NCEZID | PhD, MSPH | Primary in assessment design, data analysis, and outputs; support in data collection |
| Raber, Anjanette | Evaluation Fellow, DHQP | PhD, RN | Primary in assessment design, data collection, data analysis, and toolkit development |
| Winter, Kelly | Training Specialist, NCEZID | MPH, PhD Candidate | Primary in assessment design, data analysis, and outputs; support in data collection |
| Wojno, Abbey | Senior Training Specialist, NCEZID | PhD | Primary in assessment design, data analysis, and outputs; support in data collection |
SPSS software will be used to conduct all quantitative analyses. Descriptive statistics will be calculated for the entire sample. Dependent-samples t-tests, chi-square tests, and binary logistic regression will be used to examine relationships between independent and dependent variables at baseline and follow-up and to measure change between time points. To determine statistical significance, alpha will be set at the traditional level of 0.05. This is a census of traveler knowledge, attitudes, and behaviors (dependent variables) at two points in time, with several questions assessing exposure to CARE Kit information (independent variable). Cross-sectional group means at baseline and follow-up will be compared; an illustrative sketch of these analyses appears below.
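Although the analyses themselves will be run in SPSS, the sketch below illustrates the planned tests in Python using hypothetical names; the analysis file care_followup_linked.csv and variables such as confidence_baseline, recalled_care_messages, and retained_21_days are assumptions for illustration only.

```python
# Illustrative only; the actual analyses will be conducted in SPSS, and all
# variable and file names here are hypothetical.
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("care_followup_linked.csv")  # one row per linked respondent

# Dependent-samples t-test: change in self-check confidence between time points.
t_stat, p_paired = stats.ttest_rel(df["confidence_baseline"], df["confidence_followup"])

# Chi-square: CARE Kit message recall vs. initiation of active monitoring.
chi2, p_chi2, dof, _ = stats.chi2_contingency(
    pd.crosstab(df["recalled_care_messages"], df["initiated_monitoring"])
)

# Binary logistic regression: retention for the full 21 days on CARE Kit exposure.
X = sm.add_constant(df[["recalled_care_messages", "confidence_baseline"]])
logit = sm.Logit(df["retained_21_days"], X).fit()

print(t_stat, p_paired, chi2, p_chi2)
print(logit.summary())
```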
The assessment team will use an external transcription service to transcribe the audio-recorded interviews. All transcribed interviews will be uploaded to NVivo, ATLAS.ti, or MAXQDA for thematic analysis. The goal of inductive thematic analysis is to make sense of core consistencies and meanings as they emerge from the transcripts in a way that retains the spirit of the interviews. To ensure findings are credible and trustworthy, this assessment will use the Consolidated Criteria for Reporting Qualitative Research (COREQ), a 32-item checklist for interviews and focus groups. Items in the COREQ checklist are grouped into three domains (evaluation team and reflexivity, assessment design, and analysis and findings) and help ensure explicit and comprehensive reporting of qualitative studies throughout the assessment process.
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175-191.