Supporting Statement Part B 1660-NW32 11-02-08


FEMA Public Assistance Program Evaluation and Customer Satisfaction Survey

OMB: 1660-0107


B. Collections of Information Employing Statistical Methods.



When Item 17 on the Form OMB 83-I is checked “Yes”, the following documentation should be included in the Supporting Statement to the extent it applies to the methods proposed:


1. Describe (including numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection has been conducted previously, include the actual response rate achieved during the last collection.


All Public Assistance recipients comprise the universe of this collection, and all of them are included in the survey; every primary unit in the sampling universe is contacted. The total potential universe of grantees and sub-grantees varies from year to year with the number of disasters and is difficult to predict. In 2006, for example, the number of recipients was unusually high at 9,018 because of Hurricane Katrina. The universe of selection is all recipients in the upcoming years of this collection, so the proposed number of applicants from whom we are asking to collect data reflects what we expect in a typical year, based on a modest return rate.



The target population is composed of those that applied for and received Public Assistance funds as a result of a federally declared disaster. The sampling frame is the list of all public units that applied for and received FEMA Public Assistance, derived from the National Emergency Management Information System (NEMIS). We are asking for approval to survey 3,360 of them and are making efforts to obtain a response from all applicants. The contractor mails the survey materials to representatives of all business or other for-profit institutions, not-for-profit institutions, the Federal Government, and State, local, or tribal government institutions in the sampling universe.


The purpose of the Public Assistance Program Customer Satisfaction Survey is to assess customer satisfaction with different policy, process, and human performance aspects of FEMA’s Public Assistance Program.


The disaster survey response rate is calculated using the following formula:

Response Rate = Number of Completed Surveys Returned / (Number of Applicants – Number of Undeliverable Surveys)
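
For illustration only, a minimal Python sketch of this calculation (the figures and variable names are hypothetical, not drawn from an actual collection):

    # Hypothetical figures illustrating the response-rate formula above.
    applicants = 1200      # surveys mailed for a disaster
    undeliverable = 50     # surveys returned as undeliverable
    completed = 920        # completed surveys returned

    response_rate = completed / (applicants - undeliverable)
    print(f"Response rate: {response_rate:.1%}")   # -> Response rate: 80.0%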


Historical Response Rates for This Collection

This is a new information collection request. With our next request in three years, we will provide detailed descriptions of the response rates achieved during this collection. In the event that response rates fall below 80%, a non-response analysis will be performed on the group(s) in question.


For the qualitative portion collected from the focus groups, the Public Assistance Program’s intent is to gain customer feedback and use that feedback qualitatively, rather than quantitatively, to evaluate specific aspects of the program. Because the goal is in-depth understanding, the proposed collection will meet this objective even if the focus groups do not achieve an 80% or greater response rate. Information collected from the focus groups is used qualitatively to enhance FEMA’s understanding of the survey feedback.


Measures:

Frequencies of responses are converted to percentages of survey respondents selecting each response choice. No complex analytical techniques or scaling methods are used to compile the survey responses received for a specific disaster. Percent customer satisfaction for a performance measure is computed by summing the percentages of respondents selecting the affirmative response choices (e.g., “very satisfied,” “satisfied,” “slightly satisfied”). The twenty-six performance measures are grouped into six performance standards, and the percent customer satisfaction for the measures within each standard is averaged to compute the average percent customer satisfaction with that standard. The average percent satisfaction for the six performance standards is then averaged to compute the overall average customer satisfaction with FEMA’s assistance for the particular disaster.
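
A minimal Python sketch of this two-level averaging, assuming hypothetical measure-level satisfaction percentages already computed for one disaster (the standard names and values are illustrative, not FEMA’s actual performance standards; the real tabulations are produced by the survey contractor):

    # Hypothetical percent-satisfaction values for performance measures,
    # grouped by the performance standard they belong to.
    standards = {
        "Standard A": [92.0, 88.5, 85.0],
        "Standard B": [79.0, 81.5],
    }

    # Average the measures within each standard...
    standard_avgs = {name: sum(vals) / len(vals) for name, vals in standards.items()}

    # ...then average the standard-level results for the overall figure.
    overall = sum(standard_avgs.values()) / len(standard_avgs)

    for name, avg in standard_avgs.items():
        print(f"{name}: {avg:.1f}%")
    print(f"Overall average customer satisfaction: {overall:.1f}%")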


The percent customer satisfaction with each performance measure and standard is tabulated in individual disaster survey reports and compared with the targets established for the measures and standards. The supporting survey response choice frequencies, percents, and cumulative percents for each performance measure are tabulated in the state report addendum, along with written comments submitted in response to survey requests for additional information. Disasters with fewer than five survey respondents do not have a report addendum prepared, and the survey responses for those disasters are not aggregated with the other disasters for the annual report.


After all state disaster surveys have been compiled and reported, FEMA will prepare an annual survey report that aggregates the individual disaster survey data collected during the year for those disasters with five or more responses. When the survey data are aggregated, each disaster is weighted equally, regardless of the number of responses received; the computed average percent customer satisfaction for each measure and standard therefore represents an average of that recorded for each disaster surveyed that year. Statistical analyses consist of simple frequencies and percentages by response choice and the computation of average customer satisfaction for each performance measure and standard. The aggregated annual survey data will undergo comparative analysis by evaluating average percent customer satisfaction among different categories, against performance targets, and against the satisfaction levels for the previous year, as follows (an illustrative sketch follows the list):


  • Disaster Types (e.g., hurricanes, tornados, floods, wildfires, winter storms);

  • Disaster Size, defined by total obligated dollars: Small is less than $10M, Medium is between $10M and $25M, and Large is greater than $25M;

  • Respondent Types, i.e., grantee versus sub-grantee; and

  • Project Size, as defined by the statutory dollar value for small versus large projects.
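
A minimal Python sketch of one such comparison, grouping hypothetical disaster-level results by the size categories defined above (the dollar amounts and satisfaction figures are illustrative only, and the handling of amounts exactly at the $10M and $25M boundaries is an assumption):

    # Hypothetical disaster-level results: (total obligated dollars, overall % satisfaction).
    disasters = [
        (4_000_000, 88.0),
        (18_000_000, 82.5),
        (40_000_000, 75.0),
        (12_500_000, 90.0),
    ]

    def size_category(obligated):
        # Size categories as defined above: <$10M Small, $10M-$25M Medium, >$25M Large.
        if obligated < 10_000_000:
            return "Small"
        if obligated <= 25_000_000:
            return "Medium"
        return "Large"

    # Average the equally weighted disaster-level satisfaction within each size category.
    by_size = {}
    for obligated, satisfaction in disasters:
        by_size.setdefault(size_category(obligated), []).append(satisfaction)

    for category, values in sorted(by_size.items()):
        print(f"{category}: {sum(values) / len(values):.1f}%")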


Both the state and regional disaster survey reports and the annual reports will be compiled for use in implementing the program for future disasters.


The schedule for future disaster surveys is shown in the table below. The proposed schedule is based on the surveyed disaster’s declaration date. Individual disaster surveys will be mailed out quarterly to all applicants of disasters declared between 180 and 270 days earlier. This time lag is necessary to allow applicants time to complete all the steps required to apply for and obtain funds from FEMA, since the survey asks for feedback on all of these processes.


The survey contractor allows applicants at least 60 days to respond to the survey. Applicants can opt to respond using identical paper- or web-based survey tools. The survey contractor then compiles the paper- and web-based responses within 7 days of the survey collection end date. The compiled disaster survey data are summarized, with simple frequencies and percents tabulated in the report addendum and summarized in a report within 21 days, followed by another 14 days for FEMA’s review and approval. Once the data from the disasters surveyed within that quarter are complete, the reports will be distributed to the Regions for review. The reports will contain an exhibit comparing the overall customer satisfaction for the specific disaster with satisfaction rates for the other disasters surveyed within that quarter.


After the data from all disasters surveyed within the year have been compiled, the Annual Report is prepared and reviewed within 45 days. The Annual Report is then delivered to FEMA HQ and the Regions within 60 days of the data being compiled.

Individual FEMA Public Assistance Satisfaction Surveys Schedule

  • Disaster Survey: Public Assistance Survey—mail a paper survey and allow either a web or paper survey response.

  • Survey Information Collection Start (quarterly): Surveys are mailed between 180 and 270 days after the disaster is declared, to allow time for applicants to complete the FEMA Public Assistance Program and process.

  • Survey Information Collection End: Between 60 and 75 days after surveys are mailed, for either web or paper responses (240 to 345 days after the disaster is declared).

  • Survey Results Compiled: 7 days after collection ends.

  • Disaster Report Preparation: 21 days after receipt of compiled survey data (248 to 373 days after the disaster is declared).

The Annual Report is prepared, reviewed, and delivered to FEMA HQ and the Regions 60 days after survey data from all disasters have been compiled.
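
As a rough Python sketch of how these day counts translate into calendar dates for a single disaster (the declaration date is hypothetical; the offsets are the day counts from the schedule above):

    from datetime import date, timedelta

    declaration = date(2009, 1, 15)   # hypothetical disaster declaration date

    # Day-count windows taken from the schedule above.
    mail_earliest = declaration + timedelta(days=180)
    mail_latest = declaration + timedelta(days=270)
    collection_ends_by = mail_latest + timedelta(days=75)        # up to 75 days for responses
    results_compiled_by = collection_ends_by + timedelta(days=7)
    report_prepared_by = results_compiled_by + timedelta(days=21)

    print("Survey mailing window:  ", mail_earliest, "to", mail_latest)
    print("Collection ends by:     ", collection_ends_by)
    print("Results compiled by:    ", results_compiled_by)
    print("Disaster report due by: ", report_prepared_by)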



Focus Group:

The focus group format is as follows: a mix of state grantee and local subgrantee respondents are given the form to review, and an open-forum discussion of the questions then takes place to gather feedback on the questionnaire and any other comments. Answers are recorded. Respondents are selected on the basis described below.


We will select 80 respondents to participate in the focus groups based on the criteria listed below. We expect to do this in the form of four focus groups, one in each region, of approximately 20 people each. FEMA staff identified four regions in which to hold focus groups with stakeholders. They then contacted FEMA Regional staff to inform them of the focus groups and asked them to recommend a location and work with nearby states to identify Public Assistance Program applicants and others whom they felt could describe the impact of the Public Assistance Program in their community. The Regional staff and states then assembled lists of potential attendees who met the following criteria:


  • A mix of state grantees and local subgrantees;

  • A mix in the type of subgrantee organizations;

  • A mix in the functional roles of people within their grantee/subgrantee organization;

  • A mix in the type of disaster;

  • A mix in the project sizes of participants; and

  • To the degree appropriate, a mix of applicants who wrote their own Project Worksheets (PWs) and applicants who did not.



2. Describe the procedures for the collection of information including:


  • Statistical methodology for stratification and sample selection,


The sampling frame is the entire NEMIS list of grantees and sub-grantees. The survey is sent to all primary units in the sampling frame for each disaster, and no stratification is used. We attempt to reach all recipients but expect that not all will return their questionnaires.


In large-scale disasters, the state grantee typically is represented by as many as six primary grantee staff: the State Director, the Governor’s Authorized Representative (GAR), the alternate GAR, the State Public Assistance Officer (PAO), the deputy State PAO, and the State Coordinating Officer (SCO). Smaller disasters (defined by dollar amount) may have one staff member fill these different roles, while larger disasters may have up to six different staff filling them. Depending on the individual state organization, one individual often serves in multiple grantee roles and usually fills out only one survey. Sub-grantees are typically State, local, or tribal governments and may also include private nonprofit organizations that received Public Assistance Program funds. The sub-grantee list consists of one contact person per organization that submitted a Request for Public Assistance (RPA) to FEMA and received funds through the Public Assistance Program.


  • Estimation procedure:


Satisfaction with each performance measure is calculated by combining the positive response choices (very satisfied, satisfied, and slightly satisfied) to a question. Responses that are not included when calculating satisfaction—do not know, the various not applicable response choices, and missing responses—are removed from the response universe for that question. The formula for calculating percent satisfaction for 24 of the 26 performance measure items is as follows:


Percent Satisfaction = (Very Satisfied + Satisfied + Slightly Satisfied) / (Total Respondents – (Do Not Know + All Not Applicable Responses + Not Answered/Bad Response))
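
For illustration only, a minimal Python sketch of this formula applied to hypothetical response counts for a single survey item (the counts are illustrative; the actual tabulations are produced by the survey contractor):

    # Hypothetical response counts for one performance-measure question.
    counts = {
        "Very Satisfied": 40,
        "Satisfied": 35,
        "Slightly Satisfied": 10,
        "Slightly Dissatisfied": 5,
        "Dissatisfied": 3,
        "Very Dissatisfied": 2,
        "Do Not Know": 4,
        "Not Applicable": 6,
        "Not Answered/Bad Response": 5,
    }

    total_respondents = sum(counts.values())
    excluded = (counts["Do Not Know"] + counts["Not Applicable"]
                + counts["Not Answered/Bad Response"])
    positive = (counts["Very Satisfied"] + counts["Satisfied"]
                + counts["Slightly Satisfied"])

    percent_satisfaction = positive / (total_respondents - excluded) * 100
    print(f"Percent satisfaction: {percent_satisfaction:.1f}%")   # 85 / 95 -> 89.5%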


In the case of the other two items—‘was staff turnover a problem’ and ‘were site visits performed in a timely manner’—the response choice options differ from those described above. The response choices for whether staff turnover was a problem are ‘yes’ or ‘no’; in this case, the positive response choice is ‘no’—i.e., staff turnover was not a problem. The response choices for the measure concerning the timing of Project Worksheet site visits are ‘too soon after the disaster,’ ‘at the right time,’ and ‘too late to be helpful’; the positive response choice for this measure is ‘at the right time.’ For these two measures, the formula for calculating percent satisfaction is:

Percent Satisfaction = Positive Response Choice / (Total Responses – (Do Not Know + Not Applicable + Not Answered/Bad Response))


Each survey response is given equal weight when calculating percent customer satisfaction for a given disaster, regardless of whether it comes from a grantee or a sub-grantee applicant. This disaster-specific customer satisfaction rate is presented in the disaster survey report prepared for each disaster.


For the annual report covering multiple disasters surveyed within a given year, each disaster with five or more responses is given equal weight regardless of its number of respondents; individual survey responses are therefore weighted in inverse proportion to the number of responses within their disaster so that each disaster carries equal weight. The percent customer satisfaction recorded in the annual report therefore represents an average across the disasters surveyed during that year.
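
A minimal Python sketch of this equal-weight aggregation, using hypothetical coded responses for one performance measure (1 = positive response choice, 0 = otherwise; the disaster numbers are made up):

    # Hypothetical coded responses per disaster for one performance measure.
    disaster_responses = {
        "DR-0001": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],   # 10 responses
        "DR-0002": [1, 0, 1, 1, 1],                  # 5 responses
        "DR-0003": [1, 1, 0],                        # fewer than 5 -> excluded
    }

    # Keep only disasters with five or more responses.
    eligible = {k: v for k, v in disaster_responses.items() if len(v) >= 5}

    # Each disaster contributes one equally weighted satisfaction rate, so a
    # response within a disaster effectively carries weight 1/len(responses).
    per_disaster = {k: sum(v) / len(v) * 100 for k, v in eligible.items()}
    annual_average = sum(per_disaster.values()) / len(per_disaster)

    print(per_disaster)                          # {'DR-0001': 80.0, 'DR-0002': 80.0}
    print(f"Annual average: {annual_average:.1f}%")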


  • Degree of accuracy needed for the purpose described in the justification,


Surveying the entire universe of Public Assistance fund recipients, including both grantees and subgrantees, yields the highest degree of precision, accuracy, representativeness, reproducibility, and completeness for the survey. Steps have been taken to reduce sampling error by surveying as much of the population of grantee and subgrantee applicants as we can reach and by increasing the number of returned responses through follow-up attempts and reminders. Efforts are also made to reduce non-sampling error caused by participant non-response. Strategies have been adopted to maximize the response rate so that the reported results are representative of the entire population of grant applicants for any given disaster (e.g., follow-up survey mailings and follow-up faxes for disasters with small populations).


  • Unusual problems requiring specialized sampling procedures, and


We do not anticipate any unusual problems with hard-to-reach populations other than recipients whose addresses are not up to date in the NEMIS system. The address list comes from that system and reflects office locations where assistance has recently been awarded; these addresses are generally quite accurate in the NEMIS database because the applicants have recently requested funds and provided a correct mailing address.


We do not provide an incentive for respondents to return the questionnaire; instead, we use follow-up in the form of re-mailing the questionnaire with a reminder, as well as reminder faxes, to reach all respondents and encourage them to respond. Each respondent represents a program that has received assistance, and the incentive for completing the questionnaire is the opportunity to express an opinion and rate satisfaction or dissatisfaction with the recent service.


  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


The survey tool does not require periodic (less frequent than annual) data collection cycles to reduce burden.



3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.

Form/Questionnaire: To maximize response rates, we will do the following:

  • Ensure a short time lag between the disaster and data collection;

  • Conduct follow-ups; and

  • Provide a Public Assistance Survey Hotline to help answer respondent questions and increase the likelihood of a returned response.

Surveys are expected to be mailed to all Public Assistance fund recipients between 180 and 270 days from the disaster declaration date, with appropriate follow-up activities performed to achieve a survey response rate goal of approximately 80% per disaster surveyed. The 80% response rate is a goal, even though mail and online survey tools typically achieve response rates of between 5% and 30%.1 Improved timeliness in mailing the survey questionnaire, along with more frequent and timely follow-ups to the survey and to respondent questions received via the dedicated Public Assistance survey hotline and email address, will be used to increase survey response rates.


It is expected that these measures will help maintain response rates high enough for analysis. In the event that response rates fall below 80%, a non-response analysis will be performed on the group(s) in question. These analyses will be conducted using the “SPSS Analysis of Missing Data” module of the SPSS software package, and the findings will be addressed accordingly.



4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.

Pilot Test

At the beginning of each collection period, a pilot test is conducted with no more than 10 persons to discover any potential problems with the survey instrument or process. For quality assurance purposes, we review the pilot data and make improvements to the survey process as deemed necessary.



5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Person 1: William Kaschak, contractor, EarthTech Inc., 703.706.0170


Person 2: Marie O. Randle, POC, DA-PC-PE, 202.646.3649


FEMA-Information Resources Management Branch, IC-Records Management

Person 3: Nicole Bouchet
Records Management Division
Office of Management
Federal Emergency Management Agency
Attention: OM-RM
500 C Street, SW
Washington, DC 20472
Office: (202) 646-2814
Fax: (202) 646-3347




