Supporting Statement for
Paperwork Reduction Act Submissions
OMB Control Number: 1660-0143
Title: Federal Emergency Management Agency Individual Assistance Customer Satisfaction Surveys
Form Number(s):
FEMA Form 519-0-36 Initial Survey – Phone
FEMA Form 519-0-37 Initial Survey – Electronic
FEMA Form 519-0-38 Contact Survey – Phone
FEMA Form 519-0-39 Contact Survey – Electronic
FEMA Form 519-0-40 Assessment Survey – Phone
FEMA Form 519-0-41 Assessment Survey – Electronic
B. Collections of Information Employing Statistical Methods.
When Item 17 on the Form OMB 83-I is checked “Yes”, the following documentation should be included in the Supporting Statement to the extent it applies to the methods proposed:
If the collection does not involve statistical methodology, please enter “THERE IS NO STATISTICAL METHODOLOGY INVOLVED IN THIS COLLECTION” and delete Q1 through 5.
1. Describe (including numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection has been conducted previously, include the actual response rate achieved during the last collection.
The target population of the three surveys in this collection consists of all disaster survivors who registered for assistance with FEMA. The number of FEMA registrants varies depending on disaster size and the number of disaster declarations. Survey samples are stratified by disaster size (number of registrants) for each survey period. We ensure we have a proportionate amount of sample, by disaster, to draw conclusions on the monthly (and/or quarterly/yearly) population of FEMA registrants.
The purpose of the surveys in this collection is to provide data that guide leaders and managers in making decisions about ways to improve the quality of services provided by FEMA. The populations are grouped according to the scope of each survey as follows:
Initial (INT) Survey (Phone or Electronic) measures the quality of disaster assistance information and services received during the initial registration process. Possible registration methods include (1) by phone or in person with a FEMA representative or (2) online via the DisasterAssistance.gov website. The survey uses a skip pattern to ask specific questions based on the disaster survivor's registration method. The population for this survey is broken into two groups:
Registered by phone or with a FEMA agent – People who registered for disaster assistance from FEMA by phone or in person with a FEMA agent. The target number of completes per month is 550.
Registered online – People who registered for disaster assistance from FEMA using the website. The target number of completes per month is 550.
Additional questions are asked of applicants who visited a Disaster Recovery Center (DRC). We estimate 37% of applicants visit a DRC. In order to draw statistically valid conclusions about applicants who visit a DRC, we increased the necessary sample size for the INT Survey. Approximately 1,100 completions for the INT Survey (550 phone registration; 550 online registration) provide ~400 DRC visitors.
Contact (CNT) Survey (Phone or Electronic) measures the quality of disaster assistance information and services received during additional contacts with FEMA. These contact methods consist of (1) phone contact with a FEMA representative, (2) contact with a FEMA inspector, and (3) online account access via the DisasterAssistance.gov website. This would include someone checking the status of their application or calling a representative to ask questions about their case. The survey uses a skip pattern to ask specific questions based on the disaster survivor's type of contact. The population for this survey has three groups:
Phone contact (Helpline) – People who called in about their application. The target number of completes per month is 400.
Online access via DisasterAssistance.gov – People who accessed their application online. The target number of completes per month is 400.
Inspection – People who had an inspection. The target number of completes per month is 400.
Assessment (AST) Survey (Phone or Electronic) measures the quality of disaster assistance information and services received after eligibility is determined. The survey uses a skip pattern to ask specific questions based on the disaster survivor's eligibility status. The population for this survey is broken into two groups:
Eligible Applicants – People who received disaster assistance from FEMA. The target number of completes per month is 400.
Ineligible Applicants – People who did not receive disaster assistance from FEMA. The target number of completes per month is 400.
After a disaster is declared and during the registration process for assistance, survivors indicate their preferred method of communication (i.e., USPS mail vs. electronic). Disaster survivors who prefer USPS communication with FEMA receive phone-administered surveys; disaster survivors who prefer electronic communication receive electronic surveys.
Research shows people who prefer electronic communication are slightly less likely to take a phone survey. In an effort to maximize response rates, both modes of data collection will be used. The percentage of survivors who receive each mode of survey administration will vary by disaster. The table below (Table 1) uses a 5-year average estimate of 58% who prefer USPS correspondence and will receive a phone survey and 42% who prefer email and will receive an electronic survey. The split of the sample between phone and electronic surveys will vary based on each disaster's communication preferences.
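The sketch below illustrates how a monthly completion target could be split between modes using the 5-year average preference estimate; the 1,100 Initial Survey target comes from this document, while the helper function and its rounding rule are illustrative assumptions rather than FEMA tooling.

```python
# Illustrative sketch: split a monthly completion target between survey modes
# using the 5-year average preference estimate (58% USPS/phone, 42% electronic).
def split_by_mode(monthly_target, phone_share=0.58):
    phone = round(monthly_target * phone_share)        # phone-administered surveys
    electronic = monthly_target - phone                # remainder receives the electronic survey
    return {"phone": phone, "electronic": electronic}

# Example: the 1,100 monthly Initial Survey target splits into the Table 1 figures.
print(split_by_mode(1_100))   # {'phone': 638, 'electronic': 462}
```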
Information related to electronic surveys in this collection is contingent on acquisition of electronic survey software for public use. Currently, as of 2019, we only have phone survey software and are in the process of acquiring electronic survey software. Analysis of phone vs. electronic surveys will be used to ensure the appropriate percentage of each mode is obtained. Survey results may be weighted if an imbalance between phone and electronic surveys occurs together with a difference in results between the groups. For more information, refer to the Nonresponse Bias discussion below.
Data are collected continuously using mutually exclusive samples representative of the number of registrations during the sampling period. The table below shows the estimated sizes of the universes covered by the collection, the corresponding target completions and samples, and the actual and expected response rates for each survey. Target completions are the maximum number used for reporting and drawing conclusions. Target completions may decrease depending on management's need for monthly versus quarterly and/or yearly reporting. For smaller disasters, we may contact the same survivor for a different survey within the collection (e.g., a survivor who completed the Initial Survey may be contacted again to take the Assessment Survey).
Qualitative research (focus groups and interviews) will not be subject to probabilistic sampling methods; samples are usually based on purposive or convenience sampling. Historical data show a response rate of 5% and an attendance rate of 35% for focus groups without incentives.
Respondents will be selected based on management requests for more insight into an issue, project, or program within Individual Assistance. This could include a deeper dive into customer perception of a program based on recent changes to the program, or inconclusive survey results that require further examination. Occasionally, FEMA may notice a trend in the survey comments (e.g., a letter was confusing) or lower satisfaction scores on certain questions and is unable to address the problem until more specific feedback is gathered. The methodology is to reach out to a small sample of disaster survivors based on the characteristics of the program in which they participated. Typically, a few different geographic regions are identified, and we call a sample within a 30-mile radius. Focus group target completions are based on four days of sessions per topic. Each session has a different group of 12 participants, held at six geographic locations with three sessions each. The maximum total of unique respondents for the year is 864 participants.
Annual # of focus group participants = 4 × 12 × 6 × 3 = 864
One-on-one interviews are conducted over a maximum of 5 days with 40 participants at 4 locations, for an annual total of 800 participants.
Annual # of interview participants = 5 × 40 × 4 = 800
For more information on the focus group protocol and moderator's guide, see the attachment in the supplementary documentation of this information collection.
Table 1: Description of Respondent Population, Sampling Method, and Response Rates Based on Average Yearly Disaster Activity from 2014-2018

| Type of Respondent / Entity | Form Name / Form Number | Target Population Description | Potential Respondent Population Numerical Estimate / Annual Respondent Universe (A) | Target Completions per Month (42% Electronic vs. 58% Phone) (E) | Target Completions per Year (E × 12) | Sampling Criteria for Target Population | Actual or Expected Survey Response Rate (Actual FY18) (F) | Target Annual Adjusted Sample Size ((E × 12) ÷ F) |
| Surveys | | | | | | | | |
| Individuals and Households | Initial Survey – Phone / FEMA Form 519-0-36 | Based on 5-yr. avg. of yearly registrations | 690,704 | 638 | 7,656 | Biweekly sample of disaster registrations | 30% | 25,520 |
| Individuals and Households | Initial Survey – Electronic / FEMA Form 519-0-37 | Based on 5-yr. avg. of yearly registrations | 499,648 | 462 | 5,544 | Biweekly sample of disaster registrations | 30% | 18,480 |
| Individuals and Households | Contact Survey – Phone / FEMA Form 519-0-38 | Based on 5-yr. avg. of applicants contacting the Helpline, going online to update their case, or receiving an inspection | 1,067,028 | 696 | 8,352 | Biweekly sample of survivors who contacted the Helpline, made an internet inquiry, or had an inspection | 29% | 28,800 |
| Individuals and Households | Contact Survey – Electronic / FEMA Form 519-0-39 | Based on 5-yr. avg. of applicants contacting the Helpline, going online to update their case, or receiving an inspection | 771,877 | 504 | 6,048 | Biweekly sample of survivors who contacted the Helpline, made an internet inquiry, or had an inspection | 29% | 20,855 |
| Individuals and Households | Assessment Survey – Phone / FEMA Form 519-0-40 | Based on 5-yr. avg. of yearly eligible and ineligible decisions | 690,704 | 464 | 5,568 | Biweekly sample of survivors who received an eligibility determination | 13% | 42,831 |
| Individuals and Households | Assessment Survey – Electronic / FEMA Form 519-0-41 | Based on 5-yr. avg. of yearly eligible and ineligible decisions | 499,648 | 336 | 4,032 | Biweekly sample of survivors who received an eligibility determination | 13% | 31,015 |
| Total Survey Sample Size | | | | 3,100 | 37,200 | | | 167,501 |
| Qualitative Research | | | | | | | | |
| Individuals and Households | Focus Group (2 hrs. plus 1 hr. travel) | Based on 5-yr. avg. of total registrations | 1,190,352 | | 864 | | 5% | |
| Individuals and Households | One-on-One Interviews | Based on 5-yr. avg. of total registrations | 1,190,352 | | 800 | | | |
| Qualitative Total | | | | | 1,664 | | | |
| Surveys and Qualitative Research Total | | | | | 38,864 | | 34% | |
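As a worked illustration of the last two columns of Table 1, the short sketch below computes the target annual adjusted sample size from the monthly completion target E and the expected response rate F; the helper function is hypothetical, and the example values come from the Initial Survey – Phone row.

```python
# Sketch of the sample-size arithmetic behind Table 1, using the
# Initial Survey - Phone row (E = 638 completes/month, F = 30%).
import math

def annual_adjusted_sample(monthly_completes, response_rate):
    """Annual completions (E x 12) inflated by the expected response rate F."""
    annual_completes = monthly_completes * 12           # E x 12
    return math.ceil(annual_completes / response_rate)  # (E x 12) / F

print(annual_adjusted_sample(638, 0.30))   # 7,656 / 0.30 = 25,520
```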
2. Describe the procedures for the collection of information including:
-Statistical methodology for stratification and sample selection:
Achieving a representative sample of the population (e.g., # of registrations per disaster) is key to generalizing findings; therefore, a probability-based sampling method with stratification by disaster size (# of registrations per disaster) will be used to ensure each homogeneous subgroup within the population is represented. To ensure each subgroup in our overall sample is represented at similar levels of precision, the sample is also adjusted using historical response rates to accommodate each population. This ensures we have enough data elements within each sample to make statistical inferences about the overall disaster survivor population and subpopulations (i.e., survey scope) of interest.
“Simple random samples (where all units and all equal-numbered combinations of units have the same probabilities of selection) are rare in practice for a number of reasons. … Thus, other probability-based methods that employ multiple stages of selection, and/or stratification, and/or clustering are used to draw more practical samples that can be generalized with known degrees of sampling error.” [https://www.whitehouse.gov/sites/default/files/omb/inforeg/pmc_survey_guidance_2006.pdf]
Stratification provides gains in the precision, or reliability, of the survey estimates, and the gains are greatest when units are homogeneous within strata and the strata are maximally heterogeneous from one another.
This design supports performance measurement at the FEMA Recovery Directorate level. Previous survey designs showed that no significant differences in program feedback existed at a disaster level that would justify disaster-level sampling. Hence, we stratify based on the proportion of registrants for each disaster, continuously, until the registration period closes and there are no more registrations for the disaster. This ensures we cover all disasters for which FEMA provides Individual Assistance without overburdening the public, while still accounting for the different disasters that make up the FEMA population.
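A minimal sketch of how proportional stratified allocation with a response-rate adjustment could be carried out is shown below; the disaster numbers, registration counts, 550-complete target, and 30% response rate are illustrative assumptions, not actual sampling parameters.

```python
# Illustrative sketch (not FEMA production code): allocate a target number of
# completes proportionally to disaster strata, then inflate each stratum by a
# historical response rate to get the sample to draw.
import math

def allocate_sample(registrations, target_completes, response_rate):
    total = sum(registrations.values())
    allocation = {}
    for disaster, count in registrations.items():
        completes = round(target_completes * count / total)   # proportional allocation
        sample = math.ceil(completes / response_rate)          # inflate for expected nonresponse
        allocation[disaster] = {"target_completes": completes, "sample_to_draw": sample}
    return allocation

# Hypothetical registration counts by disaster for one sampling period.
registrations = {"1234-AK": 10_021, "4567-IA": 22_031, "7891-OK": 86_952}
print(allocate_sample(registrations, target_completes=550, response_rate=0.30))
```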
-Estimation procedure:
Weights and Poststratification
The ideal adjustment factors are those that display variation in response rates and variation in key survey statistics such as satisfaction scores. Although the proportion of sample from each disaster does not always follow a similar distribution among completed surveys (i.e., response rates differ), there typically is little variance in satisfaction between disasters that would warrant the implementation of weighting procedures. When there are statistically significant differences in satisfaction scores between disasters, the effect sizes are negligible. Hence, we usually assume there is no practically significant difference between disasters because the effect size is small.
Adjustments with weights may be used when (a) the population proportion for each disaster does not follow a similar distribution as the completed-survey percentages and (b) the results show large variation in satisfaction among the disasters. Because this may happen on a case-by-case basis due to ongoing sampling and summarizing, we may adjust weights of survey results to accommodate imbalances at the quarterly (and/or yearly) aggregate.
The example below shows how we would create our adjustment to ensure the sample statistic reflects the population distribution of the disasters. In the example, response rates differ among the completes and satisfaction scores vary greatly among the disasters. Therefore, although the completed overall score is 3.54, the adjusted score is 3.16 to account for the large response rate (overrepresentation) for disaster 4567-IA and the underrepresentation of 7891-OK.
Overall Satisfaction Score = Σ(Completes for disaster d × Satisfaction Score for disaster d) ÷ Total Completes
Adjusted Satisfaction Score = Σ(Population Proportion of disaster d × Satisfaction Score for disaster d)
Table 4: Example of Weighting by Disaster Proportions for a Quarter

| Disaster Number* | Total Registrations (Population) | Population % | Sample Count | Completes | Completes % | Overall Satisfaction Score | Adjusted Satisfaction Score |
| 1234-AK | 10,021 | 7.6% | 91 | 21 | 4% | 3.2 | |
| 4567-IA | 22,031 | 16.7% | 201 | 196 | 42% | 4.5 | |
| 7891-OK | 86,952 | 66.0% | 792 | 200 | 42% | 2.9 | |
| 2345-HI | 2,738 | 2.1% | 25 | 9 | 2% | 3.0 | |
| 3456-TX | 8,010 | 6.1% | 73 | 40 | 8% | 2.6 | |
| 5678-CA | 1,922 | 1.5% | 18 | 6 | 1% | 1.8 | |
| Total | 131,674 | | 1,200 | 472 | | 3.54 | 3.16 |

*Not actual data, for example purposes only.
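The short sketch below reproduces the two scores in Table 4 from the example data, under the interpretation that the unadjusted score weights each disaster by its completed surveys while the adjusted score weights each disaster by its share of the registrant population.

```python
# Sketch of the weighting adjustment illustrated in Table 4 (example data only).
disasters = {
    "1234-AK": {"pop": 10_021, "completes": 21,  "score": 3.2},
    "4567-IA": {"pop": 22_031, "completes": 196, "score": 4.5},
    "7891-OK": {"pop": 86_952, "completes": 200, "score": 2.9},
    "2345-HI": {"pop": 2_738,  "completes": 9,   "score": 3.0},
    "3456-TX": {"pop": 8_010,  "completes": 40,  "score": 2.6},
    "5678-CA": {"pop": 1_922,  "completes": 6,   "score": 1.8},
}
total_completes = sum(d["completes"] for d in disasters.values())
total_pop = sum(d["pop"] for d in disasters.values())

# Unadjusted: each completed survey counts equally (disasters weighted by completes).
overall = sum(d["completes"] * d["score"] for d in disasters.values()) / total_completes

# Adjusted: each disaster's mean score is weighted by its share of the registrant population.
adjusted = sum((d["pop"] / total_pop) * d["score"] for d in disasters.values())

print(f"Overall: {overall:.2f}, Adjusted: {adjusted:.2f}")   # Overall: 3.54, Adjusted: 3.16
```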
Poststratification and other basic weight adjustments such as the one in Table 4 may be used when appropriate to the analysis. There are several demographic characteristics of the FEMA population that we do not know, which limits our ability to compute accurate poststratification weights. For example, we do not know the racial composition of the FEMA registrant population because this question is not asked of all FEMA registrants. Typically, population race percentages can be obtained from Census demographic estimates for a county declared in a disaster area, but research shows the FEMA registrant population differs from Census demographic statistics. Variable disaster impacts, poverty levels, and the percentage of insured persons within a county may cause the demographics of the FEMA registrant population to be skewed relative to the Census population. Hence, population demographics are not easy to acquire unless they are asked of the FEMA population. Alternative estimation procedures may be considered on a case-by-case basis.
Typically, the adjusted weighting methodology may be applied to disaster-level scores when there is an appropriate reason to do so. Empirical considerations and leadership's business decisions determine the best overall weighting design for the disaster strata. In special studies, where survey results for known population factors such as age and income are of interest, the response sample may be poststratified to estimate those population characteristics using a method similar to the adjustment in Table 4.
Nonresponse Bias:
In general, results from these surveys are used for the overall population of disaster survivors during a requested time period and are based on a stratified sample by disaster registration size. To proactively anticipate other types of analysis, an analysis was conducted on data from the previous collection for auxiliary variables considered to have possible nonresponse bias. The results in Table 5 show that across age, income, and administration preference, nonresponse bias is estimated to be low. The highest estimated bias was for younger respondents (-7%; ages 25-34). Younger respondents may respond better to an electronic survey, which we plan to introduce in the current collection.
Unit response rates range from 13% to 30%, whereas item response rates for satisfaction questions exceed 96% per question, and demographic items have response rates of 86% to 95%. Methods implemented in this study include comparing response rates across subgroups of the sample, examining characteristics of the full population versus respondents, and investigating the proportion and satisfaction scores of late responders. For the purpose of our analysis, we focus on nonresponse as it relates to the auxiliary variables age, income, and survey administration mode.
The examined variables include the following:
Income (Nine Levels: Under $25K to Over $200K) – Analyses performed between income groups will be aggregated into larger groups to accommodate management analysis needs, ease of understanding, and sample size power (e.g., Under $35K, $35K-$60K, $60K+).
Age (Seven Levels: Under 25, 25 to 34, 35 to 44, 45 to 54, 55 to 64, 65 to 74, 75 or older) – Analyses performed between age groups are usually aggregated into larger groups (e.g., Under 35, 35-54, 55+). The analysis in Table 5 shows that younger applicants are underrepresented among respondents. This may be correlated with the mode of survey administration; research shows younger people may be more likely to respond to an email survey than to a phone survey.
Mode of Survey Administration (Two Levels: Phone surveys, Electronic surveys) – Data collection modes can also affect response rates and nonresponse bias. Analyzing nonresponse by collection mode is an ongoing effort as we transition from phone-only surveys to phone and electronic surveys. Because electronic surveys have not been implemented, the analysis focuses on phone surveys of people who prefer USPS mail versus email communication. Further analysis will be performed once surveys are administered electronically. The assumption is that people who prefer email communication would respond at a lower rate to a phone survey compared to people who prefer USPS correspondence.
Table 5: Nonresponse Bias Analysis FY2018

| Group | FEMA Population Registration % | Survey Respondent % | Estimated Bias | Satisfaction Score | 1st Contact Response % | 1st Contact Satisfaction Score | 2nd Contact Response % | 2nd Contact Satisfaction Score | 3rd+ Contact Response % | 3rd+ Contact Satisfaction Score |
| AGE | | | | | | | | | | |
| UNDER 25 | 5% | 4% | -1% | 4.07 | 5% | 4.16 | 3% | 4.08 | 4% | 3.79 |
| 25-34 | 16% | 9% | -7% | 4.05 | 9% | 4.06 | 8% | 3.80 | 9% | 4.36 |
| 35-44 | 18% | 14% | -5% | 3.91 | 14% | 4.01 | 13% | 3.88 | 14% | 3.69 |
| 45-54 | 20% | 20% | 0% | 3.87 | 20% | 3.94 | 19% | 3.87 | 21% | 3.71 |
| 55-64 | 21% | 25% | 4% | 3.89 | 25% | 3.79 | 25% | 3.90 | 24% | 4.14 |
| 65-74 | 14% | 20% | 6% | 3.70 | 18% | 3.62 | 22% | 3.72 | 18% | 3.88 |
| OVER 75 | 6% | 9% | 3% | 3.78 | 9% | 3.72 | 9% | 3.93 | 10% | 3.74 |
| INCOME | | | | | | | | | | |
| UNDER 25K | 50% | 56% | 6% | 3.89 | 59% | 3.89 | 54% | 3.90 | 48% | 3.86 |
| 25K-35K | 12% | 11% | -1% | 3.87 | 11% | 3.79 | 12% | 3.94 | 10% | 3.95 |
| 35K-45K | 9% | 8% | 0% | 3.85 | 8% | 3.89 | 9% | 3.63 | 9% | 4.06 |
| 45K-60K | 9% | 7% | -2% | 3.90 | 6% | 3.73 | 7% | 3.97 | 9% | 4.15 |
| 60K-80K | 8% | 6% | -2% | 3.84 | 5% | 3.94 | 6% | 3.68 | 8% | 3.81 |
| 80K-100K | 4% | 4% | 0% | 3.78 | 4% | 3.69 | 5% | 3.77 | 5% | 3.97 |
| 100K-150K | 5% | 4% | -1% | 3.76 | 3% | 3.70 | 4% | 3.71 | 6% | 3.90 |
| 150K-200K | 2% | 2% | 0% | 3.80 | 1% | 3.96 | 2% | 3.61 | 3% | 3.76 |
| ABOVE 200K | 2% | 2% | 0% | 3.58 | 1% | 3.54 | 2% | 3.75 | 3% | 3.44 |
| ADMINISTRATION MODE* | | | | | | | | | | |
| Electronic | 39% | 35% | -3% | 3.75 | 32% | 3.69 | 37% | 3.78 | 41% | 3.85 |
| Phone | 61% | 65% | 3% | 3.92 | 68% | 3.93 | 63% | 3.89 | 59% | 3.95 |

*Calendar Year 2018
The estimated bias in each proportion is the difference between the proportion estimated for the respondents-only group (p_r) and the proportion estimated for the entire eligible sample (p_e): B(p_r) = p_r - p_e
Adjustment weights may be created depending on the type of analyses needed for a given time period. The analysis in Table 5 was used to look at differences, measure the potential for nonresponse bias in our surveys, and research the possibility of making adjustments to questions and/or sampling weights for future reporting based on the scope of the research question. Analyses like Table 5 may be used to decide on weighting corrections and to adopt other changes that account for bias in the survey methodology (e.g., maximizing response rates by following up and/or offering different modes of survey administration).
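As a small illustration of the bias formula above, the sketch below recomputes the estimated bias for a few of the age groups in Table 5; the dictionary names are hypothetical and the proportions come from the FY2018 example figures.

```python
# Illustrative calculation of B(p_r) = p_r - p_e for selected age groups in Table 5.
population_pct = {"UNDER 25": 0.05, "25-34": 0.16, "55-64": 0.21}   # p_e: FEMA registrant population share
respondent_pct = {"UNDER 25": 0.04, "25-34": 0.09, "55-64": 0.25}   # p_r: survey respondent share

for group, p_e in population_pct.items():
    bias = respondent_pct[group] - p_e
    print(f"{group}: estimated bias = {bias:+.0%}")   # e.g., 25-34 -> -7%
```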
-Degree of accuracy needed for the purpose described in the justification:
Overall sample size:
The target number of completions per month for the Initial, Contact, and Assessment Surveys was set to ensure statistical inference for monthly, quarterly, and/or yearly reporting. The degree of accuracy is obtained by using a 50% variability assumption for the population (response distribution), 5% precision (margin of error), and a 95% confidence level. This sample size allows us to make statistical inferences about the population and is considered appropriate in survey research [Ref: http://www.raosoft.com/samplesize.html].
Ex. The aim for the Initial (INT) Survey is to complete a statistically valid number of surveys, approximately 13,200 per year, by finishing 1,100 surveys each month for the duration of the survey time frame. Enough survivor data for the target audience of each survey is imported into the system.
Ex. If you use a confidence interval with a margin of error of 5% and 50% of your sample picks an answer of 5 = Excellent on a 1-to-5 scale, you can be "sure" (95 times out of 100) that between 45% (50-5) and 55% (50+5) of the entire relevant population would have picked 5 as their answer.
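A minimal sketch of the underlying sample-size arithmetic is shown below; it applies Cochran's formula with the stated assumptions (50% response distribution, 5% margin of error, 95% confidence) and an optional finite-population correction, and is an illustration rather than the calculator referenced above.

```python
# Sketch: completes needed for a 95% confidence level, 5% margin of error,
# and 50% response distribution (the most conservative variability assumption).
import math

def required_completes(p=0.5, margin=0.05, z=1.96, population=None):
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)    # Cochran's formula, ~384 for the defaults
    if population:
        n0 = n0 / (1 + (n0 - 1) / population)      # finite-population correction
    return math.ceil(n0)

print(required_completes())                    # ~385 completes per reporting group
print(required_completes(population=690_704))  # essentially unchanged for a large universe
```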
Power Analysis:
Most analyses will look at the overall estimates for each reporting time period (month/quarter/year) as described under Statistical Methodology. There are circumstances where disasters (or other variables) may display notably atypical satisfaction results, and/or management may request statistical testing across disasters (or other variables of interest) to understand differences. Analyses using ANOVA, chi-squared tests, or other factorial designs may be used to make these comparisons.
In order to ensure there is an adequate sample, power analyses will be performed a priori. Sample size calculations will vary depending on the number of variables, the variable structure, and the statistical tests being performed. In most instances we strive for an 80% power level and assume medium effect sizes.
Below are examples of possible comparison analyses and the corresponding minimum sample sizes under those parameters:
ANOVA: Age groups (4 Levels); 180 minimum respondents
Two-way ANOVA: Income groups (4 Levels) x Quarterly disasters (4 Levels); 260 minimum respondents
3-way ANOVA: Factorial design of satisfaction by age (4 Levels)/income (4 Levels) /survey mode (2 Levels); 260 minimum respondents.
Analyses performed between groups usually aggregate levels into larger groups (for example, age) to accommodate analysis based on management's needs; see the age and income groups described above.
Based on the power analyses, we usually have enough data after a quarter’s worth of surveying to perform comparison analyses on various groups of variables.
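The sketch below shows one way such an a priori power calculation could be run; it assumes the statsmodels Python package and a medium effect size (Cohen's f = 0.25), and roughly reproduces the ~180-respondent figure cited for the one-way ANOVA across four age groups.

```python
# Illustrative a priori power analysis for a one-way ANOVA with 4 groups,
# alpha = 0.05, power = 0.80, and a medium effect size (Cohen's f = 0.25).
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.25,   # medium effect (Cohen's f)
    alpha=0.05,
    power=0.80,
    k_groups=4,
)
print(round(n_total))   # total respondents needed across the 4 groups (~180)
```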
-Unusual problems requiring specialized sampling procedures:
There are no unusual problems requiring specialized sampling procedures.
-Any use of periodic (less frequent than annual) data collection cycles to reduce burden:
No data are collected less frequently than annually. Initial and Contact Surveys are conducted within two to three weeks after the interaction with FEMA for best response recall. The Assessment Survey asks overall questions related to service, assistance, and recovery, which survivors need more time to experience after the disaster; therefore, the survey is conducted 30 days or more after eligibility is determined.
3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.
Maximizing Response Rates
Maintaining adequate survey response rates continues to be a challenge as more people experience survey fatigue, and highly publicized confidentiality breaches at various organizations have made people uneasy about providing information. Because of this, several methods are used to maximize response rates: survey burden time is shorter than in previous collections, and mixed-mode administration (phone and electronic) is planned.
Below are additional survey efforts that will be performed regularly in order to maintain/increase response rates:
Scheduling of phone surveys will be during normal business hours. Hours may be changed depending on disaster activity and time zone of the respondents being surveyed.
Follow-ups or reminders in the form of electronic communication or phone calls will be used to encourage participation, when electronic survey software is available.
Callbacks will be attempted for applicants who request a more convenient time or day to take the survey.
The opening statement will explain the purpose of the survey, the estimated time frame, and that participation is voluntary.
Multiple attempts will be made to contact each applicant. When electronic survey software is available, those who receive the electronic survey may be sent electronic reminders. If the survey isn’t completed within a certain timeframe, the applicant data may be placed in the phone queue.
The questions are straightforward, short, and easy to answer. We’ve also incorporated numeric scales in the current collection to reduce respondent burden. Listening to and remembering long lists of verbal response options can be tedious.
Applicants will be told their survey responses will in no way affect the outcome of their application for FEMA assistance.
On-going training will be provided to interviewers.
Reliability and Validity (Accuracy)
Questions are screened to ensure readability through research of best practices and read-aloud testing. Response options are also screened to ensure they are independent and non-overlapping and to avoid dubious replies caused by unclear or overlapping response scales. Complex wording, technical terms, jargon, and difficult phrases are closely monitored. Interviewers are screened to remain unbiased and not pressure respondents for answers. Discussions with stakeholders to determine the proper terminology for programs and other areas of assessment are used to create a valid survey. Data are collected at appropriate times following the interaction or the close of the disaster to ensure the best recall and valid results. Historical survey data, on average, produce similar results, which helps test reliability.
Factors that contribute to nonresponse may stem from the nature of the disaster; for example, survivor applicants often do not have telephone service, cell phone service, or electrical service in their community. Frequent relocations and displacements are anticipated to affect respondents' availability to complete the survey. Survivors may not want to use their cell phone minutes to respond to a survey. Disaster trauma may be a factor, as the survivor may not remember all the different interactions with FEMA or may not be familiar with the case. Due to these factors, the sample size is adjusted using historical nonresponse rates to alleviate possible unreliable or low-response data. This is done by using similar surveys' response rates to increase the sample drawn relative to the targeted completions.
Ex. If we would like 400 completions but know we typically receive a 28% response rate, we would sample approximately 1,428 survivors (400 ÷ 0.28) to ensure we receive the 400.
Due to FEMA's unique focus on disasters of various types, sizes, and magnitudes, and possible media sensationalism that could affect satisfaction, continuous data collection is used. This methodology provides information on ways to continuously improve the disaster survivor experience. Although we use continuous data collection methods, no disaster survivor is called twice for the same survey within the same disaster. For smaller disasters, we may contact the same survivor for a different survey within the collection.
On the rare occasion that disaster activity is low and there is not enough survey data available to draw valid conclusions, a disclaimer about the small sample size and low precision will be included at the beginning of any reports. When appropriate, adjustments to scores using weights may be used to deal with nonresponse bias.
4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.
The questions in these surveys have been used for ten to fifteen years and were initially designed based on comments from past focus groups and contractor recommendations. FEMA personnel also reviewed the questionnaire content and wording to improve readability and clarity.
Tests for readability are conducted by staff to help with reliability and accuracy. This includes question layout, wording, definitions, and timing. Questions are also analyzed for plain language.
Discussions with interviewers who have one-on-one experience with public respondents are held to revise survey content.
5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will collect and/or analyze the information for the agency.
The Customer Survey & Analysis (CSA) Section plans, designs, administers, and analyzes the results of the surveys. This includes the survey methodology and sample selection, as well as the collection, tabulation, and reporting of the data.
Jessica Guillory, Statistician, Customer Survey & Analysis, Federal Emergency Management Agency, 940-891-8528
Dr. Kristin Brooks, Statistician, Customer Survey & Analysis, Federal Emergency Management Agency, 940-891-8579
Gena Fry, Program Analyst, Customer Survey & Analysis, Federal Emergency Management Agency, 940-891-8543
Maggie Billing, Program Analyst, Customer Survey & Analysis, Federal Emergency Management Agency, 940-891-8709
Kristi Lupkey, Supervisory Program Analyst, Customer Survey & Analysis, Federal Emergency Management Agency, 940-891-8852
Chad Faber, Section Manager, Customer Survey & Analysis, Federal Emergency Management Agency, 940-891-8956