Post Disaster Survivor Preparedness Research Survey

OMB: 1660-0146

Individual and Community Preparedness Division





Supporting Statement Part B – Collections of Information Employing Statistical Methods

Post Disaster Survivor Preparedness Research

CTC


February 21, 2018




The agency should be prepared to justify its decision not to use statistical methods in any case where such methods might reduce burden or improve accuracy of results. When Item 17 on the Form OMB 83-I is checked, "Yes," the following documentation should be included in the Supporting Statement to the extent that it applies to the methods proposed:

  1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.

The universe for this study will consist of all adults (18 years of age or older) residing in counties eligible to apply for individual and/or public assistance from FEMA as a result of a disaster, such as a flood or hurricane (e.g., the 2017-2018 hurricanes Harvey, Irma, and Maria). The survey does not require the use of a control group, as the primary research goal is to understand disaster survivors' specific actions, attitudes, beliefs, and impacts from the survivor's perspective. The collection has been designed so that the National Household Survey, which is a representative sample of national preparedness behaviors, may serve as a control group if required. Respondents who report that they resided in impacted counties at the time of the event will be considered eligible for this study. As needed, other selection criteria will be utilized, such as census tract overlays and oversamples or quotas for specific demographics. Depending on the populations affected by a disaster, screener questions can be employed to oversample underserved populations or other population segments.

For surveys in this collection, a household-based telephone survey will be conducted to complete about 500 interviews. The expected sample ratio of landlines to cell phones is 30 to 70, depending on the specific target population. This ratio reflects the ever-increasing proportion of households that are wireless-only or wireless-primary; a greater proportion of landlines would be used, for example, to target the underserved population of adults 75 or older.

For the focus groups and cognitive interviews, depending on the desired audience (for example, the underserved audience that will be the focus of the initial collection), the sampling methods will consist of recruitment through relevant FEMA programs, recruitment through local partners, a purchased listed sample, or other network contacts, as appropriate. Recruitment will be done through a screener call, email, or direct recruitment from contacts, in order to obtain an appropriate sample of these hard-to-reach audiences. Focus groups will consist of 10 participants per group, while cognitive interviews will have 20 participants. After consulting best-practice literature on focus groups, FEMA selected a maximum of 10 participants as the number that best facilitates dialogue. For the cognitive interviews, FEMA worked with academics to determine that 20 participants would provide a large enough sample to capture the diverse viewpoints and attitudes present in certain sub-populations.

This study has not been conducted previously, so there is no past response rate to refer to. However, FEMA has achieved high response rates on national preparedness surveys conducted previously with trained interviewers. The goal will be to maximize the response rate by taking the necessary steps outlined in section B3, "Methods to maximize response rates."

As stated, the general population parameters are those of individuals living in counties eligible to apply for individual and/or public assistance from FEMA. Depending on the target population within this eligible population, these parameters (proportions or means) will be estimated at the overall population level for sampled counties, or based on the population of a targeted subgroup such as underserved socio-demographic groups. As appropriate, the corresponding estimates at the subgroup level will be computed, and the precision associated with those estimates will depend on the resulting sample size (number of completed surveys) for these subgroups. For example, census data can be used to determine the proportion of specific underserved racial or socio-economic groups within the target location. If necessary, screeners could be employed to sample the target groups in proportion to the total population, or an oversample could be conducted to obtain a more representative sample. A typical screener is a short questionnaire at the outset of the survey that enables proportional benchmarking of overall respondents against a set of simple demographic factors (e.g., gender, age, income, access and functional needs) to ensure balance. For example, if this survey were deployed in a county where 20% of the population is 65 years or older, but during fielding that group accounted for only 10% of completed responses, then the screener would be employed to proceed with the survey only for individuals 65 or older, until the sample is within a 5% tolerance of the county census.
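To make the screener mechanics concrete, the following is a minimal Python sketch of the quota logic described above; the single 65-plus screen, the variable names, and the thresholds are illustrative assumptions, not the production fielding system.

```python
# Minimal sketch of the screener quota logic (illustrative; not the fielding system).

CENSUS_SHARE_65_PLUS = 0.20  # share of county population aged 65+ (from census)
TOLERANCE = 0.05             # acceptable gap between sample and county census

def screen_in(respondent_is_65_plus: bool,
              completes_65_plus: int,
              completes_total: int) -> bool:
    """Return True if the respondent should proceed with the survey."""
    if completes_total == 0:
        return True
    sample_share = completes_65_plus / completes_total
    if sample_share < CENSUS_SHARE_65_PLUS - TOLERANCE:
        # The 65+ group is underrepresented: only 65+ respondents proceed
        # until the sample is within tolerance of the county census.
        return respondent_is_65_plus
    return True
```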

FEMA does not plan to generalize the findings to the overall U.S. population. This survey is meant to allow FEMA to make targeted programmatic interventions that address gaps in resilience described by survivors, based on the disaster type and the demographics of the individual. For example, the results for FEMA's elderly hurricane survivors would be used to provide culturally and age-appropriate messaging, programs, and services to help that population prepare effectively for future disasters.




  2. Describe the procedures for the collection of information, including:

  a. Statistical methodology for stratification and sample selection

The sample will consist of U.S. adults who are currently living in households in counties designated as eligible to receive individual and/or public assistance from FEMA as a result of a flood or hurricane [Hurricane name, e.g., Harvey, Irma, and Maria]. The sample will be geographically stratified into counties, and sampling will be done independently within each county (region). For example, for the first survey in this collection, the breakdown will be the Texas, Florida, and Puerto Rico counties and areas identified as eligible for Individual Assistance.

There are four different methodologies used for the sampling:

  • Telephone Survey

  • Focus groups

  • Panel-based web survey

  • Cognitive interviews

Telephone Survey

Using proportional sample allocation, the targeted number of surveys to be completed in each county is expected to be close to that county's proportion of the affected population. The actual number of completed surveys for each county (and for the landline and cell phone strata within each county) will depend on observed response rates, so they may not exactly match the corresponding targets. However, the goal will be to meet those targets to the extent possible by constantly monitoring response rates and by optimally releasing the sample in a sequential manner throughout the data collection period. In addition, where possible, the diversity of the sample will be increased through increased accessibility, such as the availability of TTY technology.

For the particular populations of interest listed below, sampling will be conducted through options including listed samples, employing a screener for specific demographics, or connecting with programs and community leaders to gain access to specific groups. If a specific impact was observed in particular locations, sampling would focus on only those counties and screen for experience with that impact. If the proportion of the subpopulation is large enough to be reached through random sampling, yielding a minimum of 100 participants of the 500 total, that method will be used. If the subpopulation is not large enough for random sampling to be expected to reach 100 participants, an oversample will be applied, increasing the telephone survey outreach (see the sketch following this paragraph). The oversample will be acquired by deploying screeners. For example, potential respondents would answer a short series of 1-5 questions regarding their age, income, access and functional needs, etc., and would proceed with the survey if they met the criteria. An example of this screener is included in the Collection Instrument submitted with this collection. Either way, the responses from the subpopulation are included in the total population analysis, as well as in specific comparative and descriptive sub-group analyses.
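As a rough illustration of the 100-of-500 decision rule above, a sketch follows; the function name and the population-share input are assumptions for illustration only.

```python
# Decide between random sampling and a screener-based oversample for a
# subpopulation, per the 100-of-500 rule described above (illustrative).

TARGET_COMPLETES = 500
MIN_SUBGROUP_COMPLETES = 100

def needs_oversample(subgroup_population_share: float) -> bool:
    """True if random sampling alone is not expected to reach 100 completes."""
    expected_completes = TARGET_COMPLETES * subgroup_population_share
    return expected_completes < MIN_SUBGROUP_COMPLETES

# Example: a subgroup that is 12% of the population yields ~60 expected
# completes out of 500, so a screener-based oversample would be deployed.
```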

This collection may be used for follow up surveys to track changes in attitudes and behaviors of storm survivors over time. For these surveys, the sampling methods will be structured to match previous surveys of the same population samples. Survey items of particular interest over the long term will be repeated in research with other survivors from beyond the current hurricane season to track changes over time.

Within each county, the sampling of landline and cell phones will be carried out separately from the respective sampling frames. For both landline and cell phones, the geographic location of the respondent will be determined based on the respondent's self-reported answer to a question on location (e.g., "What is your zip code?"). For the cell phone sample, data will be collected from all respondents regardless of whether they also have access to a landline.

All cell numbers are de-duped against the most up-to-date available cell databases when received and before dialing starts, and are thereafter de-duped daily against updated ported-number databases to capture any cell numbers that may have been reassigned in the prior 24 hours. Due to the continuous porting of numbers from landline to cell and cell to landline, some numbers from landline exchanges may turn out to be cell phones, and conversely, some numbers sampled from cell phone exchanges may actually be landline numbers. However, such numbers will be relatively rare, and the vast majority of landline and cell phone numbers will be from the corresponding frames. The survey will also ask respondents whether the number called is actually a landline or a cell phone number. The physical location of respondents will therefore be based on their self-reported location information (for example, their self-reported zip code) and will not be determined from their telephone exchange.

Focus group and Interview Sampling

As focus groups and interviews allow for in-depth topic exploration, they will be used to target particular underserved audiences to better understand their needs related to disaster preparedness, by measuring the following:

  • Attitudes/Perceptions
  • Awareness
  • Knowledge
  • Actions taken
  • Barriers to action
  • Motivation
  • Self-efficacy

For the underserved audiences listed below, sampling will be conducted through options including listed samples, employing a screener for specific demographics, and connecting with programs and community leaders to gain access to specific groups. If a specific impact was observed in particular locations, sampling would focus on only those counties and screen for experience with that impact. A focus group or set of interviews will focus on one or two audiences per implementation, as mixing audiences would not allow for the necessary data analyses.

Demographics of affected respondents may focus on the six historically underserved populations below, depending on the regions affected and the specific goals of the collection:

  1. Socio-economically Disadvantaged: Populations that are disadvantaged due to low levels of income, community influence, and/or status.
  2. People with Access and Functional Needs: Populations that experience difficulty seeing, hearing, speaking, walking, taking care of daily needs, and/or living independently.
  3. Ethnic Minorities: Populations that may live in geographically and/or socially isolated communities, feel distrustful of police and emergency personnel, and/or those with limited English proficiency.
  4. The Very Young and Very Old: Populations that may have mobility constraints or concerns, and may rely on others for safety and preparedness.
  5. Sex and Gender: Populations that have been historically underserved based on sex, gender, and/or preference.
  6. Tribal Communities: Tribes or groups that are federally recognized and eligible for funding and services from the Bureau of Indian Affairs (BIA); there are currently 566 federally recognized tribes.

Focus groups will likely be used for audiences 1, 5, and 6, while interviews can be used for all audiences, but in particular audiences 2, 3, and 4, as these audiences may have scheduling or mobility restrictions that make it difficult for participants to gather in a single location. The option of deploying interviews or focus groups allows for more flexibility to accommodate underserved groups' needs. For example, recruitment is often easier for cognitive interviews, as the participant does not have to be free at a particular time and day, so interviews will be deployed for any audiences expected to be a challenge to recruit.

We will recruit a diverse audience within the underserved group, either with guidance from local partners or through random sampling. The specific metrics of diversity will be determined by the goals of the research. For example, for tribal populations, this could include recruiting members of a few tribes, or those who live on reservations and those who do not. For another example, interviews would include individuals with a diverse set of access and functional needs, such as those who are legally blind or require the use of a wheelchair. The focus groups will draw from a single geographic area, as participants will need to gather at a single location to participate. The interviews will focus on one to three areas, as appropriate for the target audience and the impact zone of the disaster. Interviews can be conducted in person or on the phone, as appropriate for the audience, allowing for a greater diversity of locations if desired. Where possible, disabilities and access and functional needs will be accounted for during each specific implementation of the focus groups and cognitive interviews to allow for diverse sampling through accessibility for all. Focus group facilities will be chosen that are easily accessible for participants, considering the building (e.g., ramps), transportation (e.g., proximity of public transit), language (e.g., conducting focus groups in Spanish), and other needs as appropriate.

Panel-based web survey

For up to two of the four surveys of 500 respondents each year, ICPD would have the option to field the survey through a web-based survey. ICPD has the option to contract a research firm to utilize a nationally recognized Market Research Panel to field the survey to a list of panel participants. A Market Research Panel is an organization with a large group of voluntary participants who have agreed to fill out surveys in exchange for a reward (e.g., entry into a sweepstakes). Market Research Panels are specifically built to match overall census demographics, and include specialist sub-panels for harder-to-reach segments such as those with access and functional needs. Several studies have confirmed the reliability of web-based surveys in terms of soliciting responses with similar or improved accuracy compared with a telephone survey.1 ICPD would consider testing the efficacy of such an approach for reaching populations who have traditionally opted out of telephone surveys. This approach also allows ICPD to generate insights for its program at lower cost than a telephone survey.

The collection procedures would mirror the procedures for the telephone survey in several key ways:

  • Proportional allocation of panel respondents by county, matching the overall population distribution across the counties affected by the disaster

  • Sample would be balanced according to age, race, gender, and income demographic variables through the use of online screener questions that would allow participants to proceed or would terminate the survey based on their responses

  • Oversampling would be conducted for specific subpopulations (e.g., over 65 years old, socio-economically disadvantaged) depending on information needs about survivors and sub-populations for each disaster event

During the fielding of a panel-based survey, the Market Research Firm would provide daily updates on the completion of the survey instrument. The Market Research Panel would quality check the location of the panel participants based on both the stated zip code in the screener questions (See Collection Instrument) and the meta-data the Market Research Panel holds for its list of participants.

  b. Estimation procedures

For each county, FEMA will determine whether the sample collected reflects the age, income, race, and gender of the county using the latest Census-provided information. If the sample does not reflect the demographics of the county within a 5% tolerance, the sample data will be weighted to generate unbiased estimates. Within each stratum (county), weighting could be carried out to adjust for (i) unequal probability of selection in the sample and (ii) nonresponse.

In studies dealing with both landline and cell phone samples, one approach is to screen for "cell only" respondents by asking respondents reached on cell phones whether they also have access to a landline, then interviewing all eligible persons from the landline sample while interviewing only "cell only" persons from the cell phone sample. The samples from such designs are stratified, with each frame constituting its own stratum. Instead, a dual-frame design will be used here, where dual users (those with access to both landline and cell phones) can be interviewed in either sample. This will result in two estimates for the dual users based on the two samples (landline and cell). The two estimates for the dual users will then be combined and added to the estimates based on the landline-only and cell-only populations to generate the estimate for the whole population.



Composite pre-weight: As needed, the states will be used as weighting adjustment classes. The composite pre-weight could be generated within each weighting class. The weight assigned to the ith respondent in the hth weighting class (h=1, 2, 3, 4) could be calculated as follows:


W(landline,hi) = (Nhl / nhl) × (1 / RRhl) × (ncwa / nll) × λ^(IDual)   for landline sample cases   (1)

W(cell,hi) = (Nhc / nhc) × (1 / RRhc) × (1 − λ)^(IDual)   for cellular sample cases   (2)

where

Nhl: size of the landline RDD frame in weighting class h

nhl: sample size from landline frame in weighting class h

RRhl: response rate in weighting class h associated with landline frame

ncwa: number of adults in the sampled household

nll: number of residential telephone landlines in sampled household

IDual: indicator variable with value 1 if the respondent is a dual user and value 0 otherwise

Nhc: size of the Cell RDD frame in weighting class h

nhc: sample size from Cell frame in weighting class h

RRhc: response rate in weighting class h associated with Cell frame

λ is the "mixing parameter," with a value between 0 and 1. If roughly the same number of dual users is interviewed from both samples (landline and cell) within each state, then 0.5 will serve as a reasonable approximation to the optimal value of λ. This adjustment of the weights for the dual users based on the value of the mixing parameter λ will be carried out within each state. For this study, the plan is to use a value of λ equal to the ratio of the number of dual users interviewed from the landline frame to the total number of dual users interviewed from both frames within each state. One or two additional values of the mixing parameter may be tested to assess the impact on survey estimates. It is anticipated that the value of the mixing parameter will be close to 0.5.

Note that equation (2) above for cellular sample cases does not include weighting adjustments for (i) number of adults and (ii) telephone lines. For cellular sample cases, as mentioned before, there is no within-household random selection. A random selection could be made from all persons sharing a cell phone, but the percentage of those sharing a cell phone is rather small, and capturing such information would require additional questionnaire time. The person answering the call will be selected as the respondent if he or she is otherwise found eligible, and hence no adjustment based on "number of eligible adults in the household" will be necessary. The number of cell phones owned by a respondent could also be asked in order to adjust for multiple cell phones. However, the percentage of respondents owning more than one cell phone is expected to be too low to have any significant impact on sampling weights. For landline sample cases, the values for (i) number of eligible adults (ncwa) and (ii) number of residential telephone lines (nll) may have to be truncated to avoid extreme weights. The cutoff value for truncation will be determined after examining the distribution of these variables in the sample. It is anticipated that these values may be capped at 2 or 3.
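A direct transcription of composite pre-weight equations (1) and (2) in Python may help fix the notation; the truncation cap and the default mixing parameter follow the assumptions stated above, and the variable names mirror the definitions.

```python
# Composite pre-weights per equations (1) and (2); variable names follow the
# definitions above. The cap and lambda default reflect the stated assumptions.

def landline_weight(Nhl, nhl, RRhl, ncwa, nll, is_dual, lam=0.5, cap=3):
    ncwa = min(ncwa, cap)  # truncate to avoid extreme weights
    nll = min(nll, cap)
    w = (Nhl / nhl) * (1 / RRhl) * (ncwa / nll)
    return w * (lam if is_dual else 1.0)        # lambda ** I_Dual

def cell_weight(Nhc, nhc, RRhc, is_dual, lam=0.5):
    w = (Nhc / nhc) * (1 / RRhc)                # no within-household adjustment
    return w * ((1 - lam) if is_dual else 1.0)  # (1 - lambda) ** I_Dual
```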

Response rate: Typical response rates for telephone public opinion surveys are 8-12% (Pew Research). At those rates, completing a 500-person sample would require approximately 4,100-6,250 calls (500 ÷ 0.12 ≈ 4,167; 500 ÷ 0.08 = 6,250).

If needed, the response rates (RRhl and RRhc in equations (1) and (2) above) could be measured using the AAPOR Response Rate 3 (RR3) definition within each weighting class, calculated as follows:

RR = (number of completed interviews) / (estimated number of eligibles)

= (number of completed interviews) / (known eligibles + presumed eligibles)

It will be straightforward to find the number of completed interviews and the number of known eligibles. The estimation of the number of “presumed eligibles” will be done in the following way: In terms of eligibility, all sample records (irrespective of whether any contact/interview was obtained) may be divided into three groups: i) known eligibles (i.e., cases where the respondents, based on their responses to screening questions, were found eligible for the survey), ii) known ineligibles (i.e., cases where the respondents, based on their responses to screening questions, were found ineligible for the survey), and iii) eligibility unknown (i.e., cases where all screening questions could not be asked, as there was never any human contact or cases where respondents answered the screening questions with a “Don’t Know” or “Refused” response and hence the eligibility is unknown).

Based on cases where the eligibility status is known (known eligible or known ineligible), the eligibility rate (ER) is computed as:

ER = (known eligibles) / (known eligibles + known ineligibles)

Thus, the ER is the proportion of eligibles found in the group of respondents for whom the eligibility could be established.

At the next step, the number of presumed eligibles will be calculated as:

Presumed eligibles = ER × number of respondents in the eligibility unknown group

The basic assumption is that the eligibility rate among cases where eligibility could not be established is the same as the eligibility rate among cases where eligibility status was known. The response rate formula presented above is based on standard guidelines on definitions and calculations of Response Rates provided by AAPOR (American Association for Public Opinion Research).
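The response-rate calculation above reduces to a few lines; the worked numbers in the closing comment are invented for illustration.

```python
# AAPOR-style response rate for one weighting class, as defined above.

def response_rate(completes, known_eligible, known_ineligible, unknown):
    er = known_eligible / (known_eligible + known_ineligible)  # eligibility rate
    presumed_eligible = er * unknown
    return completes / (known_eligible + presumed_eligible)

# Illustrative: 500 completes, 900 known eligibles, 2,100 known ineligibles,
# 4,000 unknowns -> ER = 0.30, 1,200 presumed eligibles, RR = 500/2,100 ≈ 23.8%.
```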

Post-stratification weight: If weighted samples are required, once the two samples are combined using the composite weight (equations (1) and (2) above), a post-stratification weighting step will be carried out to simultaneously rake the combined sample to (i) known characteristics of the target population (adults living in the designated counties: age, gender, and race/ethnicity) and (ii) an estimated parameter for relative telephone usage (landline-only, cell-only, cell-mostly, other dual users). The target numbers for post-stratification weighting will be obtained from the latest available county data.

The target numbers for the relative telephone usage parameter will be based on the latest estimates from NHIS (National Health Interview Survey). For the purpose of identifying the “cell mostly” respondents among the group of dual users, the following question will be included in the survey.


QID:103424 Of all the telephone calls your household receives (read 1-3)?


1 All or almost all calls are received on cell phones

2 Some are received on cell phones and some on regular phones, OR

3 Very few or none are received on cell phones

4 (DK)

5 (Refused)

Respondents choosing response category 1 (all or almost all calls are received on cell phones) will be identified as “cell mostly” respondents.

After post-stratification weighting, the distribution of the final weights will be examined and trimming of extreme weights, if any, will be carried out if necessary to minimize the effect of large weights on variance of estimates.

Online Survey Estimation Method

As described above, for up to two of the four surveys of 500 respondents each year, ICPD would have the option to field the survey through a web-based panel fielded by a contracted research firm using a nationally recognized Market Research Panel.

Once the sample is complete, the Contractor will ensure the incoming sample is representative of the population using a RIM weighting method.

RIM weighting makes the sample representative of the population using marginal distributions, i.e., it weights the sample using the distribution of one variable at a time and not interlocked distributions. The concept follows the theory that the sample and target distributions of the weighting variables will ultimately converge after a given number of iterations. RIM weighting is also known as Iterative Proportional Fitting weighting.

For the RIM weighting method, all responses (completes plus terminates) are weighted to demographic targets based on the demographics of the overall country, because all of them belong to the population.

Once the weights are computed, the Contractor would run all analyses based on those weights.
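A compact sketch of RIM weighting (iterative proportional fitting) follows; the pandas implementation, column names, and targets are illustrative assumptions, not the Contractor's production code.

```python
# RIM / iterative proportional fitting: rake weights to one marginal
# distribution at a time until the weighted sample matches the targets.
import pandas as pd

def rim_weight(df, targets, max_iter=50, tol=1e-6):
    """df: one row per respondent; targets: {column: {category: proportion}}.
    Targets must cover every category present in the data."""
    w = pd.Series(1.0, index=df.index)
    for _ in range(max_iter):
        max_shift = 0.0
        for col, target in targets.items():
            share = w.groupby(df[col]).sum() / w.sum()  # current weighted shares
            factor = {cat: target[cat] / share[cat] for cat in target}
            max_shift = max(max_shift,
                            max(abs(f - 1.0) for f in factor.values()))
            w = w * df[col].map(factor)                 # adjust this marginal
        if max_shift < tol:                             # marginals converged
            break
    return w * len(df) / w.sum()                        # normalize: mean weight 1

# Illustrative usage:
# targets = {"gender": {"F": 0.51, "M": 0.49},
#            "age":    {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}}
# df["weight"] = rim_weight(df, targets)
```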


Comparing web-based sample and telephone sample results is a topic of much discussion in the public opinion research community. Studies by Pew Research have shown that differences in responses between the two modes are typically modest and hard to predict.2 We do not plan to make adjustments to either web or telephone surveys to account for phone or web biases, as it is not yet possible to predict how modes will affect this instrument in particular. However, after completing its first four surveys, FEMA is considering conducting a pairwise analysis of similar respondents in similar situations across modes to identify whether there is a significant and measurable mode effect.




  c. Degree of accuracy needed for the purpose described in the justification

The margin of error (MOE) for estimating the unknown population proportion P at the 95% confidence level can be derived based on the following formula:

MOE = 1.96 × √(P(1 − P)/n), where n is the sample size (i.e., the number of completed surveys)

In a dual-frame household-based survey, some design effect is expected, but estimates for most subgroups of interest are likely to have reasonable precision. For example, the sampling error associated with an estimate based on a sample size of 1,000 with a design effect of 1.25 will still be below ±3.5 points. Hence, the accuracy and reliability of the information collected in this study will be adequate for its intended uses. For example, for the statement "I did not evacuate because I did not have a place to go," if 70% of elderly respondents strongly agree while 40% of the overall population agrees, the difference would be well outside the margin of error (±3.5 points), supporting the claim that the two groups are meaningfully different.
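A worked check of the figures above, using the standard MOE formula inflated by the design effect (the usual generalization of the formula in section c):

```python
# Verify the claim above: n = 1,000, design effect 1.25, conservative P = 0.5.
import math

def moe(p, n, deff=1.0, z=1.96):
    """Margin of error for a proportion, inflated by the design effect."""
    return z * math.sqrt(deff * p * (1 - p) / n)

print(round(moe(0.5, 1000, deff=1.25), 4))  # 0.0346, i.e. below ±3.5 points
```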

The sampling error of estimates for this survey will be computed using special software (like SUDAAN) that calculates standard errors of estimates by taking into account the complexity, if any, in the sample design and the resulting set of unequal sample weights.

  d. Unusual problems requiring specialized sampling procedures

Unusual problems requiring specialized sampling procedures are not anticipated at this time. If response rates fall below the expected levels, additional sample will be released to generate the targeted number of surveys. However, all necessary steps to maximize response rates will be taken throughout the data collection period and hence such situations are not anticipated.

  e. Any use of periodic (less frequent than annual) data collection cycles to reduce burden

The data will be collected after an applicable disaster depending on the programmatic needs and budget of ICPD, not to exceed the approved burden allowance for this collection, and therefore may not be conducted on an annual basis.

  3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.

Survey-based estimates for this study will be weighted to minimize any potential bias that may be associated with unit-level non-response. If found necessary, the respondents may be split into two groups: (i) early or "easy to reach" respondents and (ii) late or "difficult to reach" respondents, identified by the total number of calls required to complete an interview. The two groups will then be compared on selected key questions from the main survey, on the assumption that the latter group may in some ways resemble the population of non-respondents. The goal of this analysis will be to assess the nature of the non-response pattern in the main survey.

Methods to maximize quality and response rates

During data collection, ISA's Quality Assurance (QA) department runs through an extensive list of checks for trackers, and Action Research supervises to ensure data quality. The following outlines these checks.

Systemized Data Collection

In addition, the sample will be dialed in a systemized manner. The first replicate in each area will be released by ISA's CATI system for initial attempts. The system manages callback times as well. When a respondent asks for a specific callback day and time, the system will bring that number up at that time for another call. When a respondent does not ask for a specific callback time, or when no respondent was reached on a given attempt, the system brings the number up randomly at a different time of day and on a different day than the original call. This ensures that all potential respondents will be called on several different occasions in ISA's attempt to complete an interview. Sample records are given up to eight call attempts before a number is resolved as "call limit reached." Good numbers are also resolved after a second refusal or at the request of a respondent.
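Schematically, the callback rules read as follows; this is a toy sketch rather than ISA's CATI system, and the candidate calling hours are arbitrary placeholders.

```python
# Toy sketch of the callback rules described above (not ISA's CATI system).
import random
from datetime import datetime, timedelta

MAX_ATTEMPTS = 8

def next_callback(record):
    """record: dict with 'attempts' and an optional 'requested_time'."""
    if record["attempts"] >= MAX_ATTEMPTS:
        record["status"] = "call limit reached"
        return None
    if record.get("requested_time"):
        return record["requested_time"]  # honor the requested day/time
    # Otherwise pick a different day and a different time of day at random.
    day = datetime.now() + timedelta(days=random.randint(1, 3))
    return day.replace(hour=random.choice([10, 14, 19]), minute=0,
                       second=0, microsecond=0)
```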

Run dummy data and check marginals

The initial dummy marginal is primarily used to check logic, such as skip patterns and selection processes.

Run a marginal after 1st night's interviewing

This marginal allows QA to double-check skip patterns, Don't Know (DK) ratios, and odd responses, which are carefully reviewed and reported to the project managers so that they can debrief the interviewers.

Run survey length by interviewer

Each interviewer's survey length is compared to the overall average length. If an interviewer's length is more than ±10% off, the interviewer is automatically flagged and will be monitored closely. The survey will also be validated to ensure that qualification questions and major skip-pattern questions were correctly coded. In addition, ISA will run a marginal for that specific interviewer to review responses that may have driven the length to increase or decrease.
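The length check reduces to a simple comparison; a sketch under the stated ±10% rule follows, with illustrative names.

```python
# Flag interviewers whose average survey length is more than ±10% from the
# overall average, per the rule above (illustrative names and inputs).

def flag_interviewers(avg_length_by_interviewer, overall_avg, tol=0.10):
    return [interviewer
            for interviewer, length in avg_length_by_interviewer.items()
            if abs(length - overall_avg) / overall_avg > tol]

# Example: with an overall average of 12.0 minutes, an interviewer averaging
# 13.5 minutes (+12.5%) is flagged for closer monitoring.
```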

Internal Monitoring

30-50% of ISA's Data Collectors are monitored daily. Each Data Collector is monitored, on average, three times per week, covering 30-50% of the shifts they work. Data Collectors receive feedback and coaching based on performance at all times. Monitors use a report form to evaluate Data Collectors on reading verbatim, not biasing respondents, articulation, flow, and confidence. In addition, if an interviewer was flagged by CATI's QA as having a high ratio of DKs, shorter surveys, or higher production, this interviewer will be monitored more often. In addition to verifying proper qualification, monitors review probing and clarification methods.

Client Monitoring

ISA offers clients sophisticated remote and in-house monitoring. Up to six remote clients can monitor a project at one time. Remote monitoring is facilitated by a live Quality Assurance Supervisor so that immediate client feedback can be relayed to data collectors. Action Research will periodically monitor surveys to assure quality.

Validation

ISA validates 15% of a specific Interviewer’s overall work. Questions normally asked are qualifying questions, quota questions (Age, Income, and Ethnicity) as well as major skip pattern questions.

Run don't know %'s by interviewer

If it becomes apparent that an interviewer has a high ratio of DKs, he or she will be re-trained and re-briefed. In addition, the interviewer will be heavily monitored. If the numbers do not improve, the interviewer will be removed from this study.

Review the open ends daily

This method allows ISA to verify that the interviewers are probing and clarifying properly. For responses that are not fully probed and/or clarified, ISA will re-contact the respondents to obtain a more accurate response.

Run a marginal at the one-third point, the two-thirds point, and at the end

These marginals are compared to previous waves in terms of key metrics and DK levels.

If these metrics do not match, the question numbers in dispute will be reported to the Project Manager, who addresses them with supervisors, monitors, interviewers, and clients.

On-going interviewer training

All Data Collectors are required to attend regular continuing education seminars to improve their interviewing skills. Trainers teach "refresher" workshops on topics ranging from communication to probing and clarifying, refusal prevention, and introductions.

Issues of Non-Response

Survey-based estimates for this study will be weighted to minimize any potential bias, including any bias that may be associated with unit-level nonresponse. For any subgroup of interest, the sampling error will depend on the sample size. All estimates will be weighted to reduce bias, and it will be possible to calculate the sampling error associated with any subgroup estimate in order to ensure that the accuracy and reliability are adequate for the intended uses of any such estimate. Based on experience from conducting similar surveys previously, and given that the mode of data collection for the proposed survey is telephone, the extent of missing data at the item level is expected to be minimal. We therefore do not anticipate using any imputation procedure to handle item-level missing data.

  4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.



FEMA has tested this survey methodology with several research experts and contractors in preparation for its launch. In order to refine the focus group, cognitive interviews, and questionnaire, FEMA will instruct the Contractor to conduct a focus group of fewer than 10 individuals (experts and those with lived survivor experience) in 2019 to validate the survey questions and approach. This session will be an extension of the feedback and research FEMA has received during the course of this proposal to inform the design of this instrument.

Should FEMA proceed with cognitive interviews and focus groups, FEMA will work with expert vendor partners to ensure the focus group and interview guides are appropriate for the subpopulations selected for the initial interviews. Typically, this consists of 1-2 trial interviews with individuals and adaptation of the approach based on the experience of the trained professional interviewer/facilitator. These learnings would be documented and preserved in the event that future interviews and focus groups are conducted by different individuals/contractors.

Prior to launching a 500-person survey, the questionnaire will be tested with fewer than ten (10) respondents. Based on the drop-off analysis, responses, and feedback from the fielding team, FEMA’s contractor will adjust the survey to correct for any errors or inconsistencies.

  5. Provide the name, affiliation (company, agency, or organization), and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.

The information collection is conducted for the Individual and Community Preparedness Division by a contractor:

  • Concurrent Technologies Corporation
  • 100 CTC Drive
  • Johnstown, PA 15904-1935

The representatives of the contractor who consulted on statistical aspects of design and will be responsible for conducting the planned data collection are:

Name             Agency/Company/Organization    Telephone Number
Robert Kubler    Team CTC                       703-310-5692
John Horvatis    Team CTC                       917-542-1132









1 http://www.pewresearch.org/fact-tank/2015/05/14/where-web-surveys-produce-different-results-than-phone-interviews/
