
February 23, 2023

SUPPORTING STATEMENT B FOR

Federal Emergency Management Agency Individual Assistance Customer Satisfaction Surveys

OMB Control No.: 1660-0143


COLLECTION INSTRUMENT(S):

  1. FEMA Form FF-104-FY-21-159 (formerly 519-0-36), Initial Survey - Phone

  2. FEMA Form FF-104-FY-21-160 (formerly 519-0-37), Initial Survey - Electronic

  3. FEMA Form FF-104-FY-21-161 (formerly 519-0-38), Contact Survey - Phone

  4. FEMA Form FF-104-FY-21-162 (formerly 519-0-39), Contact Survey - Electronic

  5. FEMA Form FF-104-FY-21-163 (formerly 519-0-40), Assessment Survey - Phone

  6. FEMA Form FF-104-FY-21-164 (formerly 519-0-41), Assessment Survey - Electronic

  7. Focus Groups

  8. One-on-One Interviews


B. Collections of Information Employing Statistical Methods.



When Item 17 on the Form OMB 83-I is checked “Yes”, the following documentation should be included in the Supporting Statement to the extent it applies to the methods proposed: If the collection does not involve statistical methodology, please enter “THERE IS NO STATISTICAL METHODOLOGY INVOLVED IN THIS COLLECTION” and delete Q1 through 5.


1. Describe (including numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection has been conducted previously, include the actual response rate achieved during the last collection.


The target population for the three surveys in this collection consists of all disaster survivors who registered for assistance with FEMA. Over the past 3 years (FY 2020-2022), the average annual population for this survey collection was approximately 969,830 applicants. The number of annual FEMA applicants can vary significantly depending on disaster size and the number of disaster declarations.


Survey samples are stratified by disaster size (number of registrants) and communication preference (USPS or email) for each survey period. Each survey imports enough sample to draw conclusions about the monthly (and/or quarterly/yearly) population of FEMA registrants.


Each survey has specific sub-populations of interest. For example, applicants are not eligible to participate in the Contact Survey until they have an inspection, call the Helpline, or log in to their online account. The populations are grouped according to the scope of each survey as follows:


The Initial (INT) Survey (Phone or Electronic) measures the quality of disaster assistance information and services received during the initial registration process. Possible registration methods are (1) with a FEMA representative, by phone or in person, or (2) online via the DisasterAssistance.gov website. The survey uses a skip pattern to ask specific questions based on the disaster survivor's registration method. The population for this survey is broken into two groups:

  • Registered by phone or FEMA agent - People who registered for disaster assistance from FEMA by phone or in person with a FEMA agent. The target number of completes per month is 550.

  • Registered online - People who registered for disaster assistance from FEMA using the website. The target number of completes per month is 550.


Additional questions are asked of applicants who visited a Disaster Recovery Center (DRC). We estimate 37% of applicants visit a DRC. In order to draw statistically valid conclusions about applicants who visit a DRC, we increased the necessary sample size for the INT Survey. Approximately 1,100 completions for INT (550 phone registration; 550 online registration) provide ~400 DRC visitors.
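A quick check of that arithmetic (a sketch; the 37% visit rate and the 550/550 monthly targets are taken from the text above):

```python
# Rough check of the DRC oversampling arithmetic described above.
monthly_completes = 550 + 550   # phone-registration + online-registration targets
drc_visit_rate = 0.37           # estimated share of applicants who visit a DRC

expected_drc_completes = monthly_completes * drc_visit_rate
print(f"Expected DRC visitors among {monthly_completes} completes: "
      f"{expected_drc_completes:.0f}")  # ~407, consistent with the ~400 cited
```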

 

The Contact (CNT) Survey (Phone or Electronic) measures the quality of disaster assistance information and services received during subsequent contacts with FEMA. These contact methods include: (1) phone contact with a FEMA representative, (2) contact with a FEMA inspector, and (3) online account access via the DisasterAssistance.gov website. The survey uses a skip pattern to ask specific questions based on the disaster survivor's type of contact. The population for this survey has three groups:

  • Phone contact (Helpline) - People who called FEMA regarding their application. The target number of completes per month is 400.

  • Online access via DisasterAssistance.gov - People who accessed their application online. The target number of completes per month is 400.

  • Inspection - People who had an inspection. The target number of completes per month is 400.


The Assessment (AST) Survey (Phone or Electronic) measures the quality of disaster assistance information and services received after eligibility is determined. The survey uses a skip pattern to ask specific questions based on the disaster survivor's eligibility status. The population for this survey is broken into two groups:

  • Eligible Applicants - People who received disaster assistance from FEMA. The target number of completes per month is 400.

  • Ineligible Applicants - People who did not receive disaster assistance from FEMA. The target number of completes per month is 400.


After a disaster is declared and during the registration process for assistance, survivors indicate their communication preference (USPS mail or email). Disaster survivors who prefer USPS communication with FEMA receive phone-administered surveys; disaster survivors who prefer electronic communication receive electronic surveys.


The percentages of respondents who receive each administration mode vary by disaster. Over the last 3 years (FY 2020-2022), roughly 61% of FEMA applicants preferred email correspondence and 39% preferred USPS correspondence. These percentages are used in Table 1 below to estimate completes for each administration mode. Surveys are not administered by USPS mail, but it is assumed that applicants who prefer USPS may not want an electronic survey.


Data is collected continuously using mutually exclusive samples representative of the number of registrations during the sampling period. Table 1 below shows the estimated sizes of the universes covered by the collection, the corresponding target completions and samples, and the actual and expected response rates for each survey. Response rates for phone surveys are provided for the past 3 years (FY 2020-FY 2022); response rates for electronic surveys are provided for July 2021-September 2022. Target completions are the maximum number used for reporting and drawing conclusions; they may decrease depending on management's need for monthly versus quarterly and/or yearly reporting. For smaller disasters, the same survivor may be contacted for a different survey within the collection (e.g., a respondent who completed the Initial Survey may be contacted again to take the Assessment Survey).
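The derived columns of Table 1 follow directly from the monthly targets, the 61%/39% mode split, and the historical response rates. A sketch reproducing them (all inputs restate figures from the text and Table 1):

```python
# Reproduce Table 1's derived columns from the monthly targets in the text
# (550 + 550 Initial; 400 x 3 Contact; 400 x 2 Assessment), the FY 2020-2022
# communication-preference split, and the historical response rates.
monthly_targets = {"Initial": 1100, "Contact": 1200, "Assessment": 800}
mode_share = {"Electronic": 0.61, "Phone": 0.39}
response_rates = {  # (survey, mode) -> historical response rate (F)
    ("Initial", "Phone"): 0.38, ("Initial", "Electronic"): 0.14,
    ("Contact", "Phone"): 0.36, ("Contact", "Electronic"): 0.16,
    ("Assessment", "Phone"): 0.19, ("Assessment", "Electronic"): 0.08,
}

for (survey, mode), rate in response_rates.items():
    e = round(monthly_targets[survey] * mode_share[mode])  # completions/month (E)
    yearly = e * 12                                        # completions/year (E x 12)
    sample = round(yearly / rate)                          # adjusted sample ((E x 12) / F)
    print(f"{survey:<11}{mode:<11} E={e:>4}  E*12={yearly:>6}  sample={sample:>6}")
```

Running this yields the same monthly completes (429/671, 468/732, 312/488), yearly completes, and adjusted sample sizes shown in Table 1.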

Qualitative research (focus groups and interviews) will not be subject to probabilistic sampling methods; these samples are typically purposive or convenience samples. Historical data show a response rate of 5% and an attendance rate of 35% for focus groups without incentives.


Respondents will be selected based on management requests for more insight into an issue, project, or program within Individual Assistance. This could include a deeper dive into customer perception of a program following recent changes, or inconclusive survey results that require further examination. Occasionally, FEMA may notice a trend in the survey comments (e.g., a letter was confusing) or lower satisfaction scores on certain questions and be unable to address the problem until more specific feedback is gathered. The methodology is to reach out to a small sample of disaster survivors based on the characteristics of the program in which they participated. Typically, a few geographic regions are identified, and we call a sample within a 30-mile radius. Focus groups may be conducted in person or online. Interviews are sometimes more convenient when the sample is widespread or when it is difficult to recruit enough participants for focus group discussions. Projected completions are based on previous usage, while still allowing room for potential increased activity.


For more information on the focus group protocol and moderator's guide, see the attachment in the supplementary documentation of this information collection.



Table 1: Description of Respondent Population, Sampling Method, and Response Rates, Based on Average Yearly Disaster Activity from FY 2020-2022

| Type of Respondent / Entity | Form Name / Form Number | Target Population Description | Potential Respondent Population (A) | Target Completions per Month (E)* | Target Completions per Year (E×12) | Response Rates, FY 2020-2022 (F) | Target Annual Adjusted Sample Size ((E×12)÷F) |
|---|---|---|---|---|---|---|---|
| Surveys | | | | | | | |
| Individuals and Households | Initial Survey - Phone / FEMA Form FF-104-FY-21-159 (formerly 519-0-36) | Applicants who register for FEMA assistance | 378,234 | 429 | 5,148 | 38% | 13,547 |
| Individuals and Households | Initial Survey - Electronic / FEMA Form FF-104-FY-21-160 (formerly 519-0-37) | Applicants who register for FEMA assistance | 591,596 | 671 | 8,052 | 14% | 57,514 |
| Individuals and Households | Contact Survey - Phone / FEMA Form FF-104-FY-21-161 (formerly 519-0-38) | Applicants who have an inspection, call the Helpline, or log into their online account | 378,234 | 468 | 5,616 | 36% | 15,600 |
| Individuals and Households | Contact Survey - Electronic / FEMA Form FF-104-FY-21-162 (formerly 519-0-39) | Applicants who have an inspection, call the Helpline, or log into their online account | 591,596 | 732 | 8,784 | 16% | 54,900 |
| Individuals and Households | Assessment Survey - Phone / FEMA Form FF-104-FY-21-163 (formerly 519-0-40) | Applicants who receive an eligibility decision | 378,234 | 312 | 3,744 | 19% | 19,705 |
| Individuals and Households | Assessment Survey - Electronic / FEMA Form FF-104-FY-21-164 (formerly 519-0-41) | Applicants who receive an eligibility decision | 591,596 | 488 | 5,856 | 8% | 73,200 |
| Total Survey Sample Size | | | | 3,100 | 37,200 | | 234,466 |
| Qualitative Research | | | | | | | |
| Individuals and Households | Focus Group (2 hrs. plus 1 hr. travel) | Based on annual registrations | 969,830 | | 500 | | |
| Individuals and Households | One-on-One Interviews | Based on annual registrations | 969,830 | | 500 | | |
| Qualitative Total | | | | | 1,000 | | |
| Surveys and Qualitative Research | | | | | 38,200 | 22% | |

*Monthly completions are split approximately 61% electronic vs. 39% phone, per applicants' communication preferences.





2. Describe the procedures for the collection of information including:



-Statistical methodology for stratification and sample selection:



Achieving a representative sample of the population is key for generalizing findings; therefore, a probability-based sampling method of stratification by disaster size (registrations per disaster × communication preference) will be used to ensure each homogeneous subgroup within the population is represented. To ensure each subgroup in the overall sample is represented at similar levels of precision, the sample is also adjusted using historical response rates for each population. This ensures there are enough data elements within each sample to make statistical inferences about the overall disaster survivor population and the subpopulations (i.e., survey scopes) of interest.

Stratification provides gains in the precision, or reliability, of the survey estimates; the gains are greatest when units within each stratum are similar and the strata differ markedly from one another. Sampling across disasters is typically proportionate, but in some instances more sample may be drawn for small or underrepresented disasters. Sampling by communication mode is typically disproportionate: response rates are much lower for electronic surveys, so more sample is drawn to achieve the set quotas.
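A simplified allocation sketch of that scheme (the disaster registration counts are hypothetical; the mode shares and response rates mirror the Initial Survey figures in Table 1):

```python
# Proportionate allocation across disasters, disproportionate by communication
# mode, with the draw inflated by each mode's expected response rate.
registrants = {"DR-A": 60_000, "DR-B": 25_000, "DR-C": 15_000}  # hypothetical
mode_share = {"Electronic": 0.61, "Phone": 0.39}
response_rate = {"Electronic": 0.14, "Phone": 0.38}  # Initial Survey rates
monthly_target = 1_100                               # Initial Survey completes/month

total = sum(registrants.values())
for disaster, n in registrants.items():
    for mode, share in mode_share.items():
        target = monthly_target * (n / total) * share  # proportionate completes
        draw = round(target / response_rate[mode])     # inflate for nonresponse
        print(f"{disaster} {mode:<11} target={target:>6.1f}  draw={draw:>5}")
```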


This design supports performance measurement at the FEMA Recovery Directorate level. Customer satisfaction is generally stable across disasters. At one point, FEMA imported enough sample to draw conclusions about each individual disaster, but the results were not utilized; there was usually too little variance in survey scores between disasters to be useful. Hence, survey samples are now stratified based on the proportion of registrants for each disaster, and respondents are surveyed continuously until the registration period closes and there are no more registrations for the disaster. This ensures that all disasters for which FEMA provides Individual Assistance are represented without overburdening the public. Communication preference is used as a stratification variable because different demographic groups favor certain communication methods (e.g., email is preferred more by younger respondents), and communication preference can vary greatly from one disaster to the next.



-Estimation procedure:


Weights and Poststratification


Customer Survey and Analysis (CSA) has begun making weighted survey data available. With voluntary surveys, there are always sources of error that affect overall accuracy; the more sources of error, the less representative the sample is of the population. Rather than accept a poor match between the sample and the population, it is now common for survey datasets to use survey weights to bring the two more closely into line.


Base sampling weights (or design weights) represent selection probability, or how likely an individual in the population is to end up in the survey sample. Base weights are calculated as 1/selection probability (N/n) for each stratum. If every stratum were sampled proportionately, the design weights for each stratum would be the same. Communication preference is sampled disproportionately: more applicants with an email preference are sampled because the response rate is lower for email surveys. Small disasters are occasionally oversampled if they are overshadowed by a larger disaster. Base sampling weights are applied for each survey to reflect the stratification of the survey design.


The base sampling weights are then adjusted for survey nonresponse within each stratum. Nonresponse is problematic when non-respondents are a non-random subset of the total sample. A disaster where respondents have internet connection issues, or where cell service is unreliable, may have higher rates of nonresponse. Electronic surveys also have higher rates of nonresponse than phone surveys. The nonresponse adjustment for each stratum is the inverse of its response propensity (1/response propensity).


The design weights and the nonresponse adjustments are multiplied together to arrive at the final survey weights. Adjusted standard errors are also reported.
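A minimal sketch of these two weighting steps (the per-stratum sample and response counts below are hypothetical; the stratum populations are the FY 2020-2022 averages cited above):

```python
import pandas as pd

# N = stratum population, n = sample drawn, r = completed responses.
strata = pd.DataFrame({
    "stratum": ["Phone", "Electronic"],
    "N": [378_234, 591_596],  # population per stratum (FY 2020-2022 averages)
    "n": [15_600, 54_900],    # hypothetical annual sample per stratum
    "r": [5_616, 8_784],      # hypothetical completed surveys per stratum
})

strata["design_weight"] = strata["N"] / strata["n"]  # 1 / selection probability
strata["nr_adjustment"] = strata["n"] / strata["r"]  # 1 / response propensity
strata["final_weight"] = strata["design_weight"] * strata["nr_adjustment"]
print(strata)
```

Because the two factors multiply, the final weight for a stratum reduces to N/r: the stratum population over its completed responses.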


Reports with weighted and unweighted raw scores are made available, along with standard errors. Generally, the differences between the weighted and unweighted scores are small due to the stability in scores, but there are instances for specific disasters when the survey weights are impactful. Publishing both scores allows for increased transparency and confidence in the interpretation of the survey results.


Poststratification, calibration, and other weighting adjustments may be considered in the future. In the past, FEMA has only known the demographic composition of FEMA applicants by referencing Census-calculated demographic populations for the counties declared in a disaster area. This is problematic because research shows that the FEMA registrant population differs from Census demographic statistics. In August 2022, FEMA began collecting demographic information at registration, so for the first time FEMA has demographic data for the survey population. Although there is currently not enough data for analysis, nonresponse and potential poststratification variables will be examined in the future.


-Degree of accuracy needed for the purpose described in the justification:

Overall sample size:


The target numbers of completions per month for the Initial, Contact, and Assessment Surveys were set to support statistical inference for monthly, quarterly, and/or yearly reporting. The degree of accuracy is obtained by using a 50% variability assumption for the population (response distribution), 5% precision (margin of error), and a 95% confidence level. This sample size allows us to make statistical inferences about the population and is considered appropriate in survey research [Ref: http://www.raosoft.com/samplesize.html].


Ex. The aim for the Initial (INT) surveys is to complete a statistically valid number of surveys, approximately 13,200 per year, by finishing 1,100 surveys each month for the duration of the survey time frame. Enough sample of survivor data for the target audience of each survey is imported into the system.

Ex. If you use a confidence interval with a 5% margin of error and 50% of your sample picks an answer of 5 = Excellent on a 1-5 scale, you can be "sure" that if you had asked the question of the entire relevant population 100 times, then in 95 of those instances between 45% (50 - 5) and 55% (50 + 5) would have picked 5 as their answer.
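The per-group completion targets can be checked against the standard sample-size formula for proportions; a quick sketch under the parameters stated above:

```python
from math import ceil

# 95% confidence (z = 1.96), 5% margin of error, 50% response distribution
# (the most conservative variability assumption).
z, p, e = 1.96, 0.50, 0.05
n = ceil(z**2 * p * (1 - p) / e**2)
print(n)  # 385 -> monthly subgroup targets of 400-550 completes exceed this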


Power Analysis:


Most analyses will look at the overall estimates for each reporting period (month/quarter/year), as described under Statistical Methodology. There are circumstances where disasters (or other variables) may display notably atypical satisfaction results, and/or management might request statistical testing across disasters (or other variables of interest) to understand differences. ANOVA, chi-squared tests, or other factorial designs may be used to make these comparisons.


In order to ensure there is adequate sample, power analyses will be performed a priori. Sample size calculations will vary depending on the number of variables, the variable structure, and the statistical tests being performed. In most instances we strive for an 80% power level and assume medium effect sizes.


Below are examples of possible comparison analyses and the corresponding minimum sample sizes under those parameters (a computational check of the first example appears after the list):

  • ANOVA: Age groups (4 levels); 180 minimum respondents

  • Two-way ANOVA: Income groups (4 levels) × quarterly disasters (4 levels); 260 minimum respondents

  • Three-way ANOVA: Factorial design of satisfaction by age (4 levels) × income (4 levels) × survey mode (2 levels); 260 minimum respondents
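One way to reproduce the first figure (a sketch; the use of Python's statsmodels here is our choice of tool, not specified in the text):

```python
from math import ceil
from statsmodels.stats.power import FTestAnovaPower

# A priori power calculation matching the first bullet: one-way ANOVA across
# 4 age groups, medium effect size (Cohen's f = 0.25), alpha = .05, power = .80.
n_total = FTestAnovaPower().solve_power(
    effect_size=0.25, alpha=0.05, power=0.80, k_groups=4
)
print(ceil(n_total))  # ~180 total respondents, as stated above
```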


Analyses performed between groups usually aggregate values into larger groups. For example, age categories may be collapsed to ensure there is adequate sample to accommodate the analysis, based on management's needs.


Based on the power analyses, we usually have enough data after a quarter’s worth of surveying to perform comparison analyses on various groups of variables.


-Unusual problems requiring specialized sampling procedures:

There are no unusual problems requiring specialized sampling procedures.


-Any use of periodic (less frequent than annual) data collection cycles to reduce burden:


No data is collected less frequently than annually. Initial and Contact Surveys are conducted within two to three weeks of the interaction with FEMA for best response recall. Assessment Surveys ask overall questions related to service, assistance, and recovery, which take more time to experience after the disaster; therefore, that survey is conducted 30 days or more after eligibility is determined.


3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.



Maximizing Response Rates



Maintaining adequate survey response rates continues to be a challenge: more people are fatigued by survey inundation, and highly publicized confidentiality breaches at various organizations have made people uneasy about providing information. Since the introduction of email surveys, the response rate for phone surveys has increased. The response rate for the overall collection has slightly decreased because electronic surveys traditionally have lower response rates, but respondents are now surveyed via their preferred communication method. As a result, the surveys potentially reach a more representative group of respondents (e.g., some people dislike phone surveys). Electronic surveys are also much faster to complete than phone surveys, which has decreased burden time compared to previous survey collections.


Below are additional survey efforts that will be performed regularly in order to maintain or increase response rates:

  • Scheduling of phone surveys will be during normal business hours. Hours may be changed depending on disaster activity and time zone of the respondents being surveyed.

  • Follow-ups or reminders in the form of electronic communication or phone calls will be used to encourage participation.

  • Callbacks will be attempted for applicants who request a more convenient time or day to take the survey.

  • The opening statement will explain the purpose of the survey, the estimated time frame, and that participation is voluntary.

  • Multiple attempts will be made to contact each applicant.

  • The questions are straightforward, short, and easy to answer. Long verbal lists have been minimized. Listening to and remembering long lists of verbal response options can be tedious.

  • Applicants will be told their survey responses will in no way affect the outcome of their application for FEMA assistance.

  • On-going training will be provided to interviewers.


Reliability and Validity (Accuracy)


Questions are screened for readability through research into best practices and read-aloud testing. Response options are screened to create independent, non-overlapping options and to avoid dubious replies caused by unclear or overlapping response scales. Complex wording, technical terms, jargon, and difficult phrases are closely monitored. Interviewers are screened to remain unbiased and to avoid pressuring respondents for answers. Discussions with stakeholders to determine the proper terminology for programs and other areas of assessment are used to create a valid survey. Data is collected at appropriate times following the interaction or the close of the disaster to ensure the best recall and valid results. Historical survey data, on average, produce similar results, which helps establish reliability.


Factors that contribute to nonresponse may stem from the nature of the disaster itself: applicants who are survivors often lack telephone service, cell phone service, or electrical service in their community. Frequent relocations and displacements are anticipated to affect respondents' availability to complete the survey. Survivors may not want to use their cell phone minutes to respond to a survey. Disaster trauma may be a factor if a survivor does not remember all the different interactions with FEMA or was not familiar with the case. Due to these factors, the sample size is adjusted using historical nonresponse rates to guard against unreliable or low-response data. This is done by taking similar surveys' response rates and increasing the targeted completions accordingly.


Ex. If we would like 400 completions but we know we typically receive only a 28% response rate, we would survey 400 ÷ 0.28 ≈ 1,429 applicants to ensure we receive the 400.


FEMA applicants are exposed to various natural hazards with unique impacts. Each disaster is different, and there is the potential for media sensationalism to influence satisfaction. In order to keep a pulse on the customer experience, continuous data collection is used. This methodology provides information on ways to continuously improve the disaster survivor experience. Although continuous data collection methods are used, no disaster survivor is called twice for the same survey within the same disaster. For smaller disasters, FEMA applicants may be contacted for a different survey within the collection.


On the rare occasion that disaster activity is low and there is not enough survey data available to draw valid conclusions, a disclaimer about the small sample size and low precision will be included at the beginning of any reports.


4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


No testing was done for this collection because the surveys were not revised.


FEMA has administered customer satisfaction surveys for the last 10-15 years, and the initial surveys were designed based on comments from past focus groups and contractor recommendations.


Whenever revisions are made to the surveys, tests for readability are conducted by staff to help with reliability and accuracy. This includes question layout, wording, definitions, and timing. Questions are also analyzed for plain language. Thorough testing is performed in each administration mode by multiple staff.


Discussions with interviewers who have one-on-one experience with public respondents are held to revise survey content.


5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will collect and/or analyze the information for the agency.


The Customer Survey & Analysis (CSA) Section plans, designs, administers, and analyzes the results of the survey. This includes the survey methodology and sample selection, as well as the collection, tabulation, and reporting of the data.


Dr. Kristin Brooks, Statistician

Customer Survey & Analysis

Federal Emergency Management Agency

202-826-6291


Dr. Brandi Vironda, Statistician

Customer Survey & Analysis

Federal Emergency Management Agency

940-205-9576


Gena Fry, Program Analyst

Customer Survey & Analysis

Federal Emergency Management Agency

940-268-9223

Jason Salazar, Program Analyst

Customer Survey & Analysis

Federal Emergency Management Agency

940-268-9245


Kristi Lupkey, Supervisory Program Analyst

Customer Survey & Analysis

Federal Emergency Management Agency

940-368-2571


Chad Faber, Section Chief

Customer Survey & Analysis

Federal Emergency Management Agency

940-535-8364



