Date: June 13, 2007

Supporting Statement for

Paperwork Reduction Act Submissions


Title: Federal Emergency Management Agency (FEMA) Individual Assistance Customer Satisfaction Surveys


OMB Control Number: 1660-0036


Form Number(s):

  • FEMA Individual Assistance Customer Satisfaction Surveys:

    • FEMA Form 90-147, Registration Intake Survey

    • FEMA Form 90-148, Helpline Survey

    • FEMA Form 90-149, Program Effectiveness and Recovery Survey

    • FEMA Form 90-150, Internet On-Line Registration Phone Survey

    • FEMA Form 90-151, Internet Applicant Inquiry/Update Phone Survey

    • Moderator’s Guide for Focus Group


General Instructions


A Supporting Statement, including the text of the notice to the public required by 5 CFR 1320.5(a)(1)(iv) and its actual or estimated date of publication in the Federal Register, must accompany each request for approval of a collection of information. The Supporting Statement must be prepared in the format described below, and must contain the information specified in Section A below. If an item is not applicable, provide a brief explanation. When Item 17 on the OMB Form 83-I is checked “Yes”, Section B of the Supporting Statement must be completed. OMB reserves the right to require the submission of additional information with respect to any request for approval.


To complete the supporting statement, type your responses in the white space below each question. Your responses should be full and complete, and should provide sufficient information to help the OMB desk officer understand what you plan to do, why you plan to do it, and how the Agency/Federal Government will benefit from and use the information you will be obtaining or soliciting.


Specific Instructions


B. Collections of Information Employing Statistical Methods.



1. Describe (including numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection has been conducted previously, include the actual response rate achieved during the last collection.


All surveys proposed here are Time-Limited.

The target population of this information collection is individuals who are disaster victims living in U.S. territory and registering for federal assistance for each declared disaster, excluding catastrophic disasters. Catastrophic disasters are not required to be reported under GPRA. The sampling frames consist of the names of all disaster victims who contact FEMA for disaster assistance. There cannot be misclassification or eligibility confusion in the sampling frames because they are generated strictly by the definition of the target population stated above. No element is excluded, and no alternative sampling frame is used. There is no post-stratification procedure in this IC, but the obtained data are aggregated to estimate the customers’ satisfaction level for the entire primary unit, or the target population.

A systematic random sampling method is used to select the group of people to be surveyed from the target population stored in electronic data files, usually in the National Emergency Management Information System (NEMIS). The target population is specific to each survey; for example, the target population of the Registration Intake Survey is all people who contacted FEMA to register for assistance for the particular disaster. Each survey is sampled independently of the others. The sampling unit for all surveys is the individual, and no stratification is involved. More detailed sampling methods and timelines are provided below for each survey instrument.



The disaster process covers a span of time, and the goal is to measure and then report on those services and processes over that span. Participation in the survey is open to all applicants through random selection. To achieve these goals without overburdening disaster victims, weekly goals are established that accumulate into a statistically valid sample.

Registration Intake (RI) Survey and Helpline (HL) Survey

RI surveys are conducted in eight successive waves over a period of eight weeks after the first disaster victim’s registration. Each sampling and survey wave lasts one week; that is, the sample in each wave is a random selection from all applicants based on the expected call traffic during days 1-7, days 8-14, days 15-21, and so on. The first waves are given a heavier estimated proportional weight: historically, 40% of all applicants apply for disaster aid within the first week, falling to 15% in the second week and 10% in the third.


The HL sampling and surveys are conducted in eight successive waves over a period of eight weeks after the first disaster victim’s Helpline contact. Each sampling and survey wave lasts one week; that is, the sample in each wave is a random selection from all contacts based on the expected call traffic during days 1-7, days 8-14, days 15-21, and so on. The first wave is given a lower estimated proportional weight because disaster victims are waiting to hear their initial determination. Waves 2 through 5 are given heavier proportional weights in sample size, while the final three waves level out to account for the usual decrease in contacts during the last three weeks.


Based on resources, the aim for each declared disaster at the RI and HL survey stage is to complete approximately 385 surveys over the course of the typical 60-day application period, which in effect provides a minimum 95% confidence level and a margin of error of plus or minus 5% at a 50% response distribution. The margin of error will be smaller in practice for each questionnaire because, historically, the minimum percentage of responses in the ‘satisfied’ direction was 71%. The 385 completions are generally achieved by finishing 160 RI surveys during the first wave, 60 in the following wave, then 40, leveling out to 25 per wave through the close of the application period. For the HL survey, the first wave is smaller, with 20 completions, then 60, then 80, and back down to 60 as contacts diminish. For each wave, a sufficient sample of applicant data is imported through a systematic random sample extracted from the target population stored in the NEMIS data warehouse.
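For reference, the 385-survey target matches the standard sample-size formula for a proportion at the worst case p = 0.5 (a worked check, not a formula quoted from the source):

n = z^2 · p(1 − p) / e^2 = (1.96)^2 × 0.5 × 0.5 / (0.05)^2 ≈ 384.2, rounded up to 385.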


Program Effectiveness & Recovery (PE&R) Survey

The Program Effectiveness & Recovery survey is conducted at two points during the disaster: the first systematic random sample is drawn from the first fifteen days of calls, and those surveys are completed after the 30th to 40th day; the final PE&R sample is drawn from day 16 through the close of the application period and is completed around 30 days after the close of the application period. The purpose of this survey strategy is to give all applicants the opportunity to participate in the surveys while giving the Federal Coordinating Officers and Program Managers timely information they can use for more immediate improvements.


In the past year, the actual decline rate for all three surveys above was 4.5% of all attempts. The response rate of completions to declines was 90.4%. Completions as a share of all attempts was 30.72%, reflecting applicants’ unavailability to complete the survey, bad or wrong phone numbers, busy signals, declines, no answers, voice mail, privacy managers, and applicants who did not remember contacting FEMA or were not familiar with the case. An attempt is made to reach the respondent each time his or her case systematically returns to the call queue. Up to four attempts are made to obtain a survey response if necessary to achieve the target number of completed surveys. If an applicant is not immediately available, an attempt is made to set up another time within the survey period that is more convenient for the respondent, and the interviewer explains how important his or her feedback is.


Internet On-Line Registration and Applicant Inquiry/Update Phone Survey

The Internet On-Line Registration Phone Survey and Internet Applicant Inquiry/Update Phone Survey are administered in the same way as the RI and HL surveys. The difference is a smaller universe and, accordingly, a smaller sample.



Table 6 below shows the size of the universe covered by the collection and the corresponding samples, for the universe as a whole and for each administration / declared disaster per year. In addition, Table 6A shows the target numbers of completed surveys and the confidence levels and margins of error pursued at the worst-case response distribution, p = 0.5.



Table 6. Annual estimates of universe and sample sizes. The sampling unit is the individual for all surveys.

Phone Survey | Total Annual Universe for All DRs *1 | Number of DRs, 2006 *2 | Target Universe per DR | Sample per Wave per DR (based on a 30% response rate) | Sample per DR | Confidence Level [Margin of Error] *3

Registration Intake Survey | 681,712 | 23 | 29,640 | Wave 1: 480; Wave 2: 180; Wave 3: 120; Waves 4-8: 75 each | 1,155 | 95% [+/- 2.8%]

Helpline Survey | 221,556 | 23 | 9,633 | Wave 1: 60; Wave 2: 180; Wave 3: 240; Waves 4-5: 120 each; Waves 6-8: 105 each | 1,035 | 95% [+/- 2.9%]

Program Effectiveness & Recovery Survey | 378,784 | 23 | 16,469 | Wave 1: 576; Wave 2: 576 | 1,152 | 95% [+/- 2.8%]

Internet On-Line Registration Survey | 141,678 | 23 | 6,160 | Wave 1: 150; Wave 2: 60; Waves 3-8: 30 each | 390 | 95% [+/- 4.8%]

Internet Applicant Inquiry / Update Survey | 6,051 | 23 | 263 | Wave 1: 30; Wave 2: 60; Wave 3: 75; Waves 4-5: 60 each; Waves 6-8: 20 each | 345 *4 | 100% [+/- 0%]


Notes

*1 : Universe size is estimated from the number of disaster victims in FY 2006.

*2 : Number of DRs is based on the number of disasters that occurred during FY 2006.

*3 : Confidence level and margin of error are computed at a 50% response distribution (p = 0.5) for the sub-samples actually given to the interviewers to make survey calls, relative to the target universe.

*4 : When the required sample size exceeds the universe size, a survey of the entire universe (a census) is conducted.
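The margins of error in Table 6 appear consistent with a normal-approximation confidence interval using a finite population correction; the short Python sketch below reproduces three of them. This is an inference from the tabulated numbers, not a formula stated in the source.

    import math

    def margin_of_error(n, N, p=0.5, z=1.96):
        """Margin of error for a sample of n drawn from a universe of N, at proportion p and z-score z."""
        fpc = math.sqrt((N - n) / (N - 1))  # finite population correction
        return z * math.sqrt(p * (1 - p) / n) * fpc

    print(f"{margin_of_error(1155, 29640):.1%}")  # 2.8%, Registration Intake
    print(f"{margin_of_error(1035, 9633):.1%}")   # 2.9%, Helpline
    print(f"{margin_of_error(390, 6160):.1%}")    # 4.8%, Internet On-Line Registration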

Table 6A. Target number of completed surveys (FY 2006 and proposed), and the confidence levels and margins of error pursued at response distribution p = 0.5.

Phone Survey | Total Completed Surveys, FY 2006 | Target Completed Surveys per DR over the Entire Application Period (Waves 1-8) | Target Completed Surveys per DR by Wave (Week) | Annual Target Completed Surveys for 23 DRs | Pursued Confidence Level [Margin of Error]

Reported by disaster:

Registration Intake Survey | 4,411 | 385 | Wave 1: 160; Wave 2: 60; Wave 3: 40; Waves 4-8: 25 each | 8,855 | 95% [+/- 5%]

Helpline Survey | 3,276 | 385 | Wave 1: 20; Wave 2: 60; Wave 3: 80; Waves 4-5: 60 each; Waves 6-8: 35 each | 8,855 | 95% [+/- 5%]

Program Effectiveness & Recovery Survey | 6,381 | 384 | Wave 1: 192; Wave 2: 192 | 8,832 | 95% [+/- 5%]

Reported by quarter:

Internet On-Line Registration Survey | NA | 130 | Wave 1: 50; Wave 2: 20; Waves 3-8: 10 each | 2,990 (747.50 per quarter) | 95% [+/- ~3%]

Internet Applicant Inquiry / Update Survey | NA | 125 | Wave 1: 10; Wave 2: 20; Wave 3: 25; Waves 4-5: 20 each; Waves 6-8: 10 each | 2,875 (718.75 per quarter) | 95% [+/- ~3%]

2. Describe the procedures for the collection of information:


Once a sample set is obtained, interviewers call the individuals in the sample to conduct interviews until statistical validity is reached. For the proposed information collections, sending a pre-notification letter is not practical because of the time constraint (see B.1). The following practices are employed in the information collection instruments to ensure the best balance between maximizing data quality and minimizing respondent time burden; some are also relevant to improving response rates.


  • The opening statement briefly explains the purpose of the study and its voluntary nature, and asks for the applicant's help in improving FEMA's response to future disasters.

  • The questions are short and require little time to complete (historically less than 15 minutes, averaging 9 minutes 38 seconds in FY 2006).

  • The questions are very straightforward and easy to complete.

  • An explanation is given that the questions in no way affect the outcome of the applicant's case with FEMA.

  • Information gathered from focus groups will be used to ensure that the survey items included are of interest to Individual Assistance applicants, making respondents more likely to see the survey as relevant.

  • Revisions will be made to the survey with attention to correcting low response items.

  • Callbacks are made to applicants who state they will be available at a later time when feasible and resources allow.

  • When limited sample is available, up to four attempts are made to contact each applicant.

  • Training is provided and more experienced interviewers are retained.

  • Interpreters are used to obtain responses from applicants who speak other languages.



2.1. Statistical methodology for stratification and sample selection,



There is no sample stratification.

A systematic random sample is generated by selecting every nth name in the entire target population from the electronic database in the National Emergency Management Information System (NEMIS), or a similar program, that contains the names, phone numbers, addresses, and disaster-related information of all such applicants.

Each registered applicant from any given disaster has the same chance of being chosen by the systematic random sampling method. The phone survey sample is imported into the survey database and randomly populated onto a computer screen for the interviewer from the pool of names, using Microsoft Access or a similar program.
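To illustrate the selection mechanics, the sketch below draws a systematic random sample in Python. It is a minimal illustration under stated assumptions, not FEMA's production process: the applicant extract is represented as a simple Python list, and the record names are hypothetical.

    import random

    def systematic_sample(records, sample_size):
        """Select every nth record after a random start, where n = len(records) // sample_size."""
        if sample_size >= len(records):
            return list(records)  # small universe: survey everyone (cf. Table 6, note *4)
        step = len(records) // sample_size
        start = random.randrange(step)  # random start gives every applicant an equal chance
        return records[start::step][:sample_size]

    # Hypothetical example: a Wave 1 Registration Intake extract of 480 applicants
    # from a target universe of 29,640 (the per-DR figure in Table 6).
    applicants = [f"applicant_{i}" for i in range(29640)]
    wave1 = systematic_sample(applicants, 480)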

For the RI, HL, and both Internet phone surveys, a weekly sample is imported into the survey database by disaster and by contact/call dates so that all applicants have the opportunity to participate in the survey. For PE&R, registrants from the first 15 days are sampled on or about day 30-40 of the disaster. The remaining PE&R sample is imported after the close of the application period, allowing the remaining respondents time to recover. The two Internet surveys are administered the same way as the RI and HL surveys.



2.2. Estimation procedure,

To estimate the satisfaction level of the service provided, the top three positive responses (for example, excellent, good, and satisfactory) are averaged over the sample representing the universe of disaster applicants.
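As a minimal illustration of this estimator (the response tallies below are hypothetical, not FEMA data):

    # Hypothetical response tallies for one survey question.
    responses = {"excellent": 140, "good": 120, "satisfactory": 45, "fair": 50, "poor": 30}
    top3 = responses["excellent"] + responses["good"] + responses["satisfactory"]
    satisfaction = top3 / sum(responses.values())  # share of top-three positive responses
    print(f"Estimated satisfaction: {satisfaction:.1%}")  # 79.2% for these tallies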



2.3. Degree of accuracy needed for the purpose described in the justification

Although extremely precise statistical inference is not necessary for this information collection, the goal is to estimate customer satisfaction on a per-disaster basis, based on call volume for that disaster, with approximately a 5% margin of error at a 95% confidence level (19 times out of 20) for all surveys except the Internet phone surveys. The two Internet surveys have a smaller universe and less opportunity for high statistical validity. Actual overall accuracy is computed at the time of reporting. The cumulative annual report, with the completed interviews for all disasters, will in effect reflect a confidence level of 99% with a margin of error of plus or minus 2%, assuming that the cumulative respondent sets are randomly distributed across the sample universe.
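These accuracy targets can be checked with the usual normal-approximation margin of error at p = 0.5. The sketch below is an illustrative check, not part of the collection; the z-scores are the standard 95% and 99% values.

    import math

    def moe(n, z, p=0.5):
        """Margin of error for a proportion p estimated from n responses at z-score z."""
        return z * math.sqrt(p * (1 - p) / n)

    print(f"Per disaster, n=385 at 95% (z=1.96):  +/-{moe(385, 1.96):.1%}")    # ~5.0%
    print(f"Annual, n=8,855 at 99% (z=2.576):     +/-{moe(8855, 2.576):.1%}")  # ~1.4%, within the stated 2%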

2.4. Unusual problems requiring specialized sampling procedures.

There are no unusual problems requiring specialized sampling procedures.


2.5. Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


Periodic data collection cycles are not applicable to this particular type of information collection, since disaster occurrences are not predictable enough to schedule a collection cycle in advance.


3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.


These surveys are performed by calling disaster assistance applicants. Historically, completed cases average one per 3.35 call attempts. The average decline rate is 4.5% of total attempts. The historical response rate for the proposed Individual Assistance Customer Satisfaction Surveys under the current OMB inventory is 30%, using a response-rate formula recognized by the American Association for Public Opinion Research (AAPOR), as follows:


RR = I / {(I + P) + (R + NC + O) + U}

where


RR = Response rate

I = Complete interview

P = Partial interview

R = Refusal and break-off

NC = Non-contact

O = Other (bad/wrong numbers, technical phone problem, etc.)

U = Unknown eligibility (= 0 in this case; see B.1.)
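A direct transcription of this formula into code, with hypothetical disposition counts chosen only to show a 30% result:

    def response_rate(I, P, R, NC, O, U=0):
        """AAPOR-style response rate: RR = I / ((I + P) + (R + NC + O) + U)."""
        return I / ((I + P) + (R + NC + O) + U)

    # Hypothetical dispositions: 300 completes out of 1,000 total attempts.
    print(f"{response_rate(I=300, P=20, R=45, NC=500, O=135):.1%}")  # 30.0%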


This relatively low response rate can be compared with the distribution of response rates presented in McCarty et al. (2006), a paper on phone survey response rates. Figure 1 is a histogram of the response rates, computed with the same RR formula, for 205 telephone surveys conducted at the University of Florida Survey Research Center at the Bureau of Economic and Business Research between January 2000 and July 2004; it shows a modal response rate of 25% and a mean of about 41.5%. Thus a response rate of 30% is not, in context, as low as it may appear without a reference for comparison.



Figure 1. Histogram of the response rates for 205 telephone surveys conducted at the University of Florida Survey Research Center at the Bureau of Economic and Business Research between January 2000 and July 2004 (McCarty et al., 2006, “Effort in Phone Survey Response Rates: The Effects of Vendor and Client-Controlled Factors,” Field Methods, Vol. 18, No. 2, pp. 172-188).


The target population is disaster victims. Because this information collection is time-constrained and immediately follows a disaster, the victims must be interviewed while they are still experiencing disaster trauma. In most cases the victims may be in the worst stage of disaster trauma when they are called for the surveys. Disaster trauma symptoms may include


[http://www.citizencorps.gov/cert/downloads/training/PM-CERT-Unit7Rev3.doc]:


  • Irritability or anger.

  • Self-blame or the blaming of others.

  • Isolation and withdrawal.

  • Fear of recurrence.

  • Feeling stunned, numb, or overwhelmed.

  • Feeling helpless.

  • Mood swings.

  • Sadness, depression, and grief.

  • Denial.

  • Concentration and memory problems.

  • Relationship conflicts/marital discord.

  • Loss of appetite.

  • Headaches or chest pain.

  • Diarrhea, stomach pain, or nausea.

  • Hyperactivity.

  • Increase in alcohol or drug consumption.

  • Nightmares.

  • The inability to sleep.

  • Fatigue or low energy.


In addition to disaster trauma, frequent relocations are anticipated for victims after a disaster, which contributes to the non-contact portion of non-response. Often victims do not have telephone service in their community because of the disaster. Considering that even during normal stages of everyday life “time-limited polls often yield very low response rates” (McCarty et al., 2006), we believe we have achieved a very good response rate, if not the best possible, for this particular target population. Nonetheless, we follow the steps described in B.2 to maintain the current level of success in the response rate, even though our respondents may still be in disaster trauma during the survey periods.


4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


Many of the questions in the surveys are based on comments from past focus groups as well as contractor opinion. A contractor was retained as of November 1, 2006, and will conduct one-on-one focus groups and recommend new questions. FEMA personnel will also review questionnaire content and wording to improve readability and clarity. Tests with fewer than 10 applicants will be performed by FEMA’s customer satisfaction analysis staff when updates are desirable, and all updates to questionnaires will be submitted to OMB for approval. Quantitative design validation will be used to assess and improve respondents’ comprehension of new questionnaires and minimize burden. (See Table 1-A.)


5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Survey | Contractor / POC

Registration Intake Survey, Helpline Survey, Program Effectiveness & Recovery Survey, Internet On-Line Registration Phone Survey, On-Line Applicant Inquiry / Update Phone Survey | Decision Analyst, Inc.; Kevin Sharp; (817) 640-6166. (Contractor engaged November 1, 2006; as of this date, the questionnaires have not been updated to reflect the contractor's recommendations.)

Registration Intake Survey, Helpline Survey, Program Effectiveness & Recovery Survey, Internet On-Line Registration Phone Survey, On-Line Applicant Inquiry / Update Phone Survey | Maggie Billing; FEMA, TxNPSC; (940) 891-8500, ext. 8709


