
UNITED STATES DEPARTMENT OF AGRICULTURE

Federal Crop Insurance Program Delivery Cost Survey and Interviews

OMB NUMBER: 0563-NEW


B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS


THE AGENCY SHOULD BE PREPARED TO JUSTIFY ITS DECISION NOT TO USE STATISTICAL METHODS IN ANY CASE WHERE SUCH METHODS MIGHT REDUCE BURDEN OR IMPROVE ACCURACY OF RESULTS. WHEN ITEM 17 ON THE FORM OMB 83-1 IS CHECKED "YES", THE FOLLOWING DOCUMENTATION SHOULD BE INCLUDED IN THE SUPPORTING STATEMENT TO THE EXTENT THAT IT APPLIES TO THE METHODS PROPOSED.


1. DESCRIBE (INCLUDING A NUMERICAL ESTIMATE) THE POTENTIAL RESPONDENT UNIVERSE AND ANY SAMPLING OR OTHER RESPONDENT SELECTION METHOD TO BE USED. DATA ON THE NUMBER OF ENTITIES (E.G., ESTABLISHMENTS, STATE AND LOCAL GOVERNMENT UNITS, HOUSEHOLDS, OR PERSONS) IN THE UNIVERSE COVERED BY THE COLLECTION AND IN THE CORRESPONDING SAMPLE ARE TO BE PROVIDED IN TABULAR FORM FOR THE UNIVERSE AS A WHOLE AND FOR EACH OF THE STRATA IN THE PROPOSED SAMPLE. INDICATE EXPECTED RESPONSE RATES FOR THE COLLECTION AS A WHOLE. IF THE COLLECTION HAD BEEN CONDUCTED PREVIOUSLY, INCLUDE THE ACTUAL RESPONSE RATE ACHIEVED DURING THE LAST COLLECTION.

Sampling is the process of selecting units (i.e., agents and policyholders) from a population of interest so that, by studying the sample, RMA can generalize the results back to the population from which the agents and policyholders were chosen. The statistical analysis of the sample data collected from the Agent and Producer Surveys will provide RMA not only with an estimate but also with a statistical statement regarding the accuracy, or precision, of that estimate. The precision is commonly stated as a range, or confidence interval, around the estimate. The confidence level achieved depends on the size of the sample, the variability of the results among the sample items, and the design of the sample. An important element of statistical sample design is reducing the number of items required to achieve a desired precision. For the purposes of these surveys, we aim for a sampling precision of plus or minus 10 percent at a 95 percent confidence level.

By using generally accepted statistical sampling techniques, we determined sample sizes needed to make objective and statistically valid projections based on the samples’ findings. We selected a representative sample of agents that sold and serviced Federal crop insurance in 2011 and a sample of policyholders that purchased the insurance from the sampled agents to gain an understanding about agents’ costs related to selling and servicing Federal crop insurance. In the sample estimation, we also incorporated the 30 percent response rate¹ and inflated the theoretically estimated sample sizes to acquire sufficient responses from agents and policyholders.

The agent and policyholder populations include 13,559 unique agents by region² and 158,423 policyholders. To ensure the ability to generalize about the agent population, we estimated the stratified random samples for agents and policyholders at 766 and 153, respectively. With the expected response rate of 30 percent, we inflated the required samples to 2,563 agents and 513 policyholders.
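As a purely illustrative sketch (not RMA’s exact computation), the inflation step amounts to dividing the required number of completed surveys by the expected response rate; the final totals of 2,563 and 513 are slightly larger than the unrounded results because rounding up is applied within each stratum (see footnotes 6 and 7).

```python
# Illustrative sketch only: inflating a required sample size for an expected
# 30 percent response rate. The rounding convention shown is an assumption.
def surveys_to_send(required_completes, response_rate=0.30):
    """Approximate number of surveys to send so that, at the expected response
    rate, roughly `required_completes` completed surveys come back."""
    return round(required_completes / response_rate)

print(surveys_to_send(766))  # about 2,553 agent surveys before stratum-level round-up
print(surveys_to_send(153))  # about 510 policyholder surveys before stratum-level round-up
```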

Agent Sampling

To achieve greater precision by controlling the composition of the sample and to ensure that particular groups of agents within the agent population are adequately represented, we stratified the random sample of agents by region and by the size of gross premium earned. We estimated a proportionate stratified sample with explicit stratification by region (Midwest, Plains, and other³) and implicit stratification by gross premium size (small, medium, and large). The regions are defined by state as shown in the table below.

Table 4: Agent Sampling – Regions by State

South: Alabama, Arkansas, Florida, Georgia, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Texas

Northeast: Connecticut, Delaware, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, Vermont, Virginia, West Virginia

Midwest: Illinois, Indiana, Iowa, Kentucky, Michigan, Minnesota, Missouri, Ohio, Wisconsin

Plains: Kansas, Nebraska, North Dakota, Oklahoma, South Dakota

Mountain: Colorado, Montana, Nevada, New Mexico, Utah, Wyoming

West: Alaska, Arizona, California, Hawaii, Idaho, Oregon, Washington


Stratified sampling ensures that we obtain subsamples large enough to support a separate analysis of regional subgroups. The random sample of agents is stratified first by region and then by gross premium size as follows:

  • Level I Strata: all agents selling Federal crop insurance in the six regions, and

  • Level II Strata: all agents selling small (below $100,000), medium ($100,000 to $800,000), and large (above $800,000) gross premium amounts of Federal crop insurance.

Specifically, for Level I of the stratification, we divided the agent population into regional subgroups, or strata, and selected the samples separately within each stratum: Midwest, Plains, and other (South, West, Mountain, and Northeast). Given a precision of 10 percent at a 95 percent confidence level, we estimated required samples of 261, 254, and 251 agents for the Midwest, Plains, and other regions, respectively.⁴ Assuming a 30 percent response rate, the inflated sample sizes required by region are 870, 847, and 837, respectively.

Table 5: Agent Sampling - Regional Stratification (Level I)

Region      Population Size   Sample Size by Region   Inflated Sample Size by Region
Midwest     6,450             261                     870
Plains      3,791             254                     847
South       1,772             134                     447
West        546               41                      138
Mountain    663               50                      167
Northeast   337               25                      85
Total       13,559            766                     2,553

Since the other category is relatively small, the number of agents sampled from the South, West, Mountain, and Northeast regions was allocated in proportion to their populations, with 134, 41, 50, and 25 agents randomly selected in each of these four regions, respectively.⁵
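The proportional allocation just described can be illustrated with the following sketch; the population counts are those shown in Table 5, and the rounding convention used is an assumption that happens to reproduce the allocations above.

```python
# Illustrative sketch of the proportional allocation across the four regions in
# the "other" category (target of 251 sampled agents, per the text and footnote 5).
other_populations = {"South": 1_772, "West": 546, "Mountain": 663, "Northeast": 337}
other_sample_target = 251

total_other = sum(other_populations.values())  # 3,318 agents
allocation = {region: round(other_sample_target * population / total_other)
              for region, population in other_populations.items()}

print(allocation)  # {'South': 134, 'West': 41, 'Mountain': 50, 'Northeast': 25}
```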

To ensure that gross premium sizes are adequately represented in the sample, we further stratified each of the six regions by agents’ gross premium size (small, medium, and large). Thus, the regional agent samples were further proportionally stratified by small, medium, and large gross premiums, creating a total of 18 strata. The table below summarizes the stratification results for these strata.

Table 6: Agent Sampling - Regional & Premium Size Stratification (Level I & II)

Strata   Region (Population / Inflated Sample Size)   Premium Size   Population by Strata   Inflated Sample Size by Strata
1        Midwest (6,450 / 870)                        Large          1,813                  245
2                                                     Medium         2,761                  373
3                                                     Small          1,876                  254
4        Plains (3,791 / 847)                         Large          1,480                  331
5                                                     Medium         1,487                  333
6                                                     Small          824                    185
7        South (1,772 / 447)                          Large          788                    199
8                                                     Medium         536                    136
9                                                     Small          448                    113
10       West (546 / 138)                             Large          202                    51
11                                                    Medium         181                    46
12                                                    Small          163                    42
13       Mountain (663 / 167)                         Large          350                    89
14                                                    Medium         206                    52
15                                                    Small          107                    27
16       Northeast (337 / 85)                         Large          117                    30
17                                                    Medium         120                    31
18                                                    Small          100                    26
Total                                                                13,559                 2,563⁶
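
The Level II allocation within a region can be illustrated with the following sketch, using the Midwest figures from Tables 5 and 6; the round-up rule is an assumption consistent with footnote 6, not a statement of RMA’s exact procedure.

```python
# Illustrative sketch of the Level II proportional allocation within one region.
# Rounding up within each stratum is why the 18-stratum total of 2,563 slightly
# exceeds the unrounded total of 2,553 (footnote 6).
import math

midwest_population_by_size = {"Large": 1_813, "Medium": 2_761, "Small": 1_876}
midwest_inflated_sample = 870
midwest_population = sum(midwest_population_by_size.values())  # 6,450 agents

allocation = {size: math.ceil(midwest_inflated_sample * population / midwest_population)
              for size, population in midwest_population_by_size.items()}

print(allocation)  # {'Large': 245, 'Medium': 373, 'Small': 254}
```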



In addition, approximately 4.0 percent of agents sold and serviced Federal crop insurance in more than one geographical region. When randomly selected, such agents were assigned to the region they were initially sampled from, and an additional agent was sampled from the other geographic region(s), thus increasing the reliability or the precision of the agent sample even further.

Sampling of Policyholders

After the agent sample was selected, a list of the policyholders insured by the sampled agents was used as the population for the policyholder sample selection. These policyholder lists are collected each year and are maintained by RMA. After drawing the stratified random sample of agents by region and gross premium size, RMA matched policyholders to the sampled agents that sold them Federal crop insurance in 2011 and then sampled the required number of policyholders.

RMA selected 513 policyholders by matching the policyholder population to the sample of agents that sold Federal crop insurance to these producers. With the expected response rate of 30 percent, RMA expects to receive approximately 154 completed policyholder surveys. Since RMA inflated the sample of agents to ensure the ability to generalize about the agent population, 513 policyholders were selected as the inflated required sample size.

Table 7: Policyholder Sampling Results

Strata (Level I)   Region      Population (Universe)   Required Sample Size by Region   Inflated Required Sample Size⁷
1                  Midwest     45,897                  52                               174
2                  Plains      10,078                  51                               169
3                  South       6,362                   27                               89
4                  West        48,893                  8                                28
5                  Mountain    36,864                  10                               33
6                  Northeast   10,329                  5                                17
Total                          158,423                 153                              511



First, we will mail the surveys to the randomly selected agents and to the randomly selected policyholders that purchased Federal crop insurance from these agents. To encourage a high response rate, we reached out to some professional agencies, encouraging them to inform their members about the upcoming survey. After the surveys are mailed or emailed, we will also follow up with agents and policyholders by phone after a set period to boost the response rate.

This data collection has not been conducted previously; therefore, no prior response rate can be reported.


2. DESCRIBE THE PROCEDURES FOR THE COLLECTION OF INFORMATION INCLUDING:


- STATISTICAL METHODOLOGY FOR STRATIFICATION AND SAMPLE SELECTION;


- ESTIMATION PROCEDURE;


- DEGREE OF ACCURACY NEEDED FOR THE PURPOSE DESCRIBED IN THE JUSTIFICATION;


- UNUSUAL PROBLEMS REQUIRING SPECIALIZED SAMPLING PROCEDURES, AND


- ANY USE OF PERIODIC (LESS FREQUENT THAN ANNUAL) DATA COLLECTION CYCLES TO REDUCE BURDEN.


To achieve the desired precision with the minimum sample size, we used a stratified sampling method for the agent sample and selected a corresponding sample of producers. We will send the surveys to 2,563 agents and 513 producers, emailing the surveys to agents with existing email addresses and mailing them to the remaining agents and to producers.

We did not encounter any unusual problems requiring specialized sampling procedures. Since the data are collected only once, no use of periodic data collection cycles will be implemented.

A representative statistical sample will be used to gain an understanding of the target population based on the examination of a fraction of the population. A sampling approach can produce results that are valid, objective, and defensible. We used generally accepted statistical sampling techniques to determine the sample sizes needed to make objective and precise projections based on sample findings. A 30 percent response rate was used to inflate the theoretically estimated sample sizes to acquire sufficient responses from the agents and policyholders.

To the extent that we will use sampling to administer the surveys, there are a number of subtasks that must be accomplished to define and develop a valid sample. These tasks are described in more detail below.

Defining a Survey Group and Key Characteristics

The task of determining how many stakeholders to survey to obtain the minimum acceptable number of valid responses to support a meaningful statistical analysis is fairly complex. We will use the database containing the complete population of agents and policyholders from which the sample will be drawn. In this case, the database contains records for the agents employed in the crop insurance field, as well as policyholders who purchased crop insurance. Typically the database will contain one record (line item) for each individual. We will verify the uniqueness of individuals within the database and confirm record counts, total dollar amounts of premiums and claims, and any other relevant totals to ensure an accurate data transmission.

Generating Confidence Intervals

The goal of sampling analysis is to generate statistically valid results based on an examination of a small fraction of a population. A sampling analysis provides an estimate of the issues of interest, such as the percentage of agents providing specific types of crop insurance. In addition, it provides a statistical statement regarding the accuracy, or precision, of the estimate. This precision is commonly stated as a range, or “confidence interval,” around the estimate. The confidence level achieved depends on the size of the sample taken, the variability of the results among the sample items, and on the design of the sample. An important element of statistical sample design is reducing the number of items required to achieve a desired precision. The level of precision typically required depends on the application and the agreement of the parties involved. However, the precision set as a goal for the study must be traded off against the cost of analyzing the sampled items. As noted, an important advantage of a well-planned statistical analysis is the fact that it produces an objective statement regarding the range of uncertainty.

A relationship, or trade-off, exists among the sample size, the confidence interval, and the precision of the estimate. One frequently used sampling standard is a 10 percent precision at a 95 percent confidence level.
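As an illustration only, the following sketch shows the standard textbook relationship among precision, confidence level, and sample size for estimating a proportion, together with a finite population correction. The exact variance assumptions underlying the required sample sizes in Tables 5 through 7 are not restated here, so the numbers produced below are purely illustrative.

```python
# Purely illustrative: textbook sample-size formula for a proportion, with a
# finite population correction (see footnote 4).
import math
from typing import Optional

def required_sample_size(precision: float, z: float = 1.96, p: float = 0.5,
                         population: Optional[int] = None) -> int:
    """Sample size for +/- `precision` at the confidence level implied by `z`
    (1.96 for 95 percent), assuming a proportion near `p`."""
    n0 = (z ** 2) * p * (1 - p) / precision ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)  # finite population correction
    return math.ceil(n0)

print(required_sample_size(0.10))                     # about 97 items
print(required_sample_size(0.05, population=13_559))  # about 374 items: tighter precision costs more
```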

Establishing the Criteria for Sample Selection (Stratification)

We determined which classification systems are built into the sample design. The process of classifying the data into groups based on appropriate factors is referred to as sample stratification. Stratification is used to reduce the sample size necessary to obtain a desired level of statistical precision and to ensure appropriate accuracy for relevant subpopulations of the data. We initially stratified the population by geography (six regions) and then by gross premium size (large, medium, small) to obtain appropriate representation in each of these categories.
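A minimal sketch of stratified selection is shown below; the record layout (a "region" and a "premium_size" field) and the fixed seed are illustrative assumptions, not a description of the actual sampling software or frame.

```python
# Minimal sketch of drawing a stratified random sample from an agent frame.
import random
from collections import defaultdict

def stratified_sample(frame, stratum_sizes, seed=2013):
    """frame: list of dicts with 'region' and 'premium_size' keys.
    stratum_sizes: dict mapping (region, premium_size) -> number of agents to draw."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for record in frame:
        strata[(record["region"], record["premium_size"])].append(record)
    selected = []
    for stratum, n in stratum_sizes.items():
        selected.extend(rng.sample(strata[stratum], n))  # simple random sample within the stratum
    return selected
```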

Finalizing Sample Size and Obtaining RMA Approval

A properly designed statistical sampling analysis provides results that are statistically valid, objective, and defensible. We calculated the required sample size(s) based on our assumptions about the variability inherent in the population and the anticipated non-response rate.


3. DESCRIBE METHODS TO MAXIMIZE RESPONSE RATES AND TO DEAL WITH ISSUES OF NON-RESPONSE. THE ACCURACY AND RELIABILITY OF INFORMATION COLLECTED MUST BE SHOWN TO BE ADEQUATE FOR INTENDED USES. FOR COLLECTIONS BASED ON SAMPLING, A SPECIAL JUSTIFICATION MUST BE PROVIDED FOR ANY COLLECTION THAT WILL NOT YIELD "RELIABLE" DATA THAT CAN BE GENERALIZED TO THE UNIVERSE STUDIED.


Based on discussions with both industry and survey professionals, we expect to achieve a response rate of 30.0 percent for each of our surveys of insurance agents and insured producers. Response rates will be calculated in accordance with the OMB standards applicable to Federal agencies.⁸ To achieve the target response rate, we propose to implement the procedures discussed in Sections B.1 and B.2. Specifically, we intend to take the following steps:

  • We developed user-friendly survey questionnaires with clear, concise, inoffensive, and easy-to-answer questions.

  • We reached out to certain agents and agencies, asking them to notify their members in advance and to emphasize the importance of the survey.

  • We have maintained, and continue to maintain, timely communication with agency representatives to receive their feedback and to urge them to keep conveying the importance of the surveys to agents.

  • We will increase contact efforts (as necessary), send timely follow-up emails, and make repeated follow-up calls to non-respondents or respondents with missing data.

  • The surveys will provide an email address so that any questions from sampled respondents could be appropriately addressed in a timely fashion.

  • During the course of the survey, we will continue identifying and focusing on groups with relatively low response rates to help achieve reasonable response rates among those groups.

  • We have invested considerable effort in balancing the need to collect sufficient data against the burden imposed on survey participants, to ensure both data sufficiency and a high response rate. We have also worked to improve the surveys to achieve an adequate data collection rate.

  • We carefully selected October as the survey period to avoid times when survey participants are likely to be busy and therefore non-responsive.

KPMG will work with RMA to identify additional actions that may help achieve a high response rate. Both KPMG and RMA will monitor and evaluate response rates bi-weekly to ensure timely follow-ups and achievement of the target response rates.

Non-Response Rates

Non-response refers to the failure to obtain observations on some elements included in the survey. In data collection, there are generally two types of non-response: item and unit non-response. Item non-response occurs when certain questions in a survey are not answered by the respondent. Unit non-response occurs when the surveyed individual cannot be contacted or refuses to participate in the survey. When non-responses are not randomly distributed across survey participants, a high non-response rate will shrink the collected dataset and may adversely affect the validity of the findings and statistical results.

Non-Response Bias Analysis and Adjustment

Despite considerable efforts to increase the response rate, the response rate achieved by the surveys may be less than required. OMB requires a non-response bias analysis whenever the survey response rate is less than 80 percent. In such cases, we propose to first conduct a non-response bias analysis to determine whether the answers received from respondents differ significantly from those expected from the non-respondents. Based on the survey response and non-response data, we will use various methods to identify the potential reasons for the differences in response rates.

One typical way of gauging potential non-response bias is to compare the distribution of responses across subgroups with the distribution of certain factors (e.g., geographic, demographic, or economic) for which information is readily available for the whole population (respondents and non-respondents). Where these two distributions are significantly different, potential non-response bias is likely to be present and non-response bias adjustments are warranted. To the extent possible, we will also use statistical methods, such as multivariate regression, to analyze the potential reasons underlying survey participants’ decisions to respond, helping identify the direction and magnitude of potential biases caused by survey non-response. Where the group of respondents itself represents a random selection of the population, the potential bias due to non-response is likely to be negligible. On the other hand, if one group of survey participants is found to be more likely to respond than another, non-response bias is likely to exist. Using these methods, we will determine whether potential non-response bias exists by assessing whether the group of respondents itself constitutes a nationally representative sample of the population and/or whether respondents differ from non-respondents in any systematic way.
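As one illustrative example of such a comparison, the following sketch computes a chi-square goodness-of-fit statistic contrasting the regional distribution of respondents with that of the mailed sample; the respondent counts shown are hypothetical placeholders, not survey results.

```python
# Illustrative non-response bias check: does the regional mix of respondents
# match the regional mix of the mailed sample? Respondent counts are placeholders.
mailed_by_region = {"Midwest": 870, "Plains": 847, "South": 447,
                    "West": 138, "Mountain": 167, "Northeast": 85}
respondents_by_region = {"Midwest": 280, "Plains": 250, "South": 120,
                         "West": 40, "Mountain": 50, "Northeast": 25}

total_mailed = sum(mailed_by_region.values())
total_respondents = sum(respondents_by_region.values())

chi_square = 0.0
for region, mailed in mailed_by_region.items():
    expected = total_respondents * mailed / total_mailed  # responses expected if non-response were random
    observed = respondents_by_region[region]
    chi_square += (observed - expected) ** 2 / expected

# Compare against the chi-square critical value with 5 degrees of freedom
# (about 11.07 at the 5 percent level) to judge whether the distributions differ.
print(round(chi_square, 2))
```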

If the non-response bias study indicates that significant non-response bias exists, we propose to classify respondents and non-respondents into adjustment cells (by geographical region or other characteristics). Potential methods that can be used to adjust for non-response bias are shown in Table 8 below.

Table 8: Potential Methods for Non-Response Bias Adjustment

Weighting Class Adjustment: This method is used when the distribution of the population over adjustment cells is not available. A non-response weight is computed for cases in a cell proportional to the inverse of the response rate in that cell.

Post-Stratification: This method is used when the distribution of the population over adjustment cells is available from external sources (e.g., the U.S. Census Bureau). A non-response weight is computed for cases in a cell proportional to the ratio of the population count to the number of respondents in that cell.

Other Adjustment Techniques: Besides weighting-class adjustment and post-stratification, other techniques exist to adjust for non-response bias, such as propensity models, which usually use logistic regression and require certain information (e.g., demographics) to be known for the entire sampled group, or calibration methods, which make use of auxiliary population data from a census.
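
For illustration, the weight computations described in the first two rows of Table 8 can be sketched as follows for a single adjustment cell; the cell counts used are hypothetical placeholders.

```python
# Illustrative sketch of the Table 8 weight computations for one adjustment cell.
def weighting_class_weight(sampled_in_cell, respondents_in_cell):
    """Weighting class adjustment: proportional to the inverse of the cell's response rate."""
    return sampled_in_cell / respondents_in_cell

def post_stratification_weight(population_in_cell, respondents_in_cell):
    """Post-stratification: ratio of the cell's population count to its respondent count."""
    return population_in_cell / respondents_in_cell

print(weighting_class_weight(870, 280))        # each respondent stands in for about 3.1 sampled agents
print(post_stratification_weight(6_450, 280))  # or about 23 agents in the cell's population
```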



We will carefully analyze the response data to determine the most appropriate method for adjustment of the non-response bias in case one is determined to exist. As such post-survey adjustments are merely estimated “fixes” to the problem, we believe the most effective way to reduce non-response bias is to reduce non-response rates through properly designed studies and timely follow-ups. The adjustments on the back-end will help reduce, but not completely eliminate, the non-response bias.


4. DESCRIBE ANY TESTS OF PROCEDURES OR METHODS TO BE UNDERTAKEN. TESTING IS ENCOURAGED AS AN EFFECTIVE MEANS OF REFINING COLLECTIONS OF INFORMATION TO MINIMIZE BURDEN AND IMPROVE UTILITY. TESTS MUST BE APPROVED IF THEY CALL FOR ANSWERS TO IDENTICAL QUESTIONS FROM 10 OR MORE RESPONDENTS. A PROPOSED TEST OR SET OF TESTS MAY BE SUBMITTED FOR APPROVAL SEPARATELY OR IN COMBINATION WITH THE MAIN COLLECTION OF INFORMATION.



In the pre-OMB interviews conducted with insurance agents and insured producers, we discussed and solicited their views on the preferred survey format, the survey duration, and measures that may help boost response rates for the surveys of insurance agents and insured producers. In addition, we consulted several professional insurance agent trade associations and agencies regarding the insurance agent survey.



We also consulted with the external survey specialists at Campos regarding the insurance agent and insured producer surveys.


5. PROVIDE THE NAME AND TELEPHONE NUMBER OF INDIVIDUALS CONSULTED ON STATISTICAL ASPECTS OF THE DESIGN AND THE NAME OF THE AGENCY UNIT, CONTRACTOR(S), GRANTEE(S), OR OTHER PERSON(S) WHO WILL ACTUALLY COLLECT AND/OR ANALYZE THE INFORMATION FOR THE AGENCY.


Survey design and methodology are determined by the Economic and Valuation Services FCIC team at KPMG; contact is Jon Silverman, (703) 286-8283.


Sample sizes for each region are determined by Paul Li, (267) 256-2706.


The OMB submission has been reviewed by the National Agricultural Statistics Service.


Data collection is carried out by Campos. Contact Barb Theobald, Executive Vice President, Research Services, (412) 471-8484 Ext. 501.


1 Given the survey techniques that RMA will implement, RMA projects the agent response rate at 30 percent.

2 The 2011 RMA agent population records include 13,040 unique agents selling Federal crop insurance in six regions; 519 agents, or 4 percent of the overall agent population, sell insurance in more than one region.

3 Midwest and Plains are the largest regions with 6,450 and 3,791 agents, respectively. The four remaining regions together include 3,318 agents that sold and serviced Federal crop insurance in the South, West, Mountain, and Northeast geographical regions. Therefore, the remaining regions have been combined into one category (Other) to increase the precision level for the overall sample.

4 Because the agent and policyholder samples are large relative to the finite agent and policyholder populations (13,559 and 158,423, respectively) and the samples were chosen without replacement, RMA used a finite population correction factor to reflect the additional precision obtained when the sample is a greater-than-usual fraction of the population. The finite population correction factor reflects that the sample mean of a large sample is less likely to be far from the population mean than that of a small sample.

5 For instance, agents that sold and serviced Federal crop insurance in the South, West, Mountain, and Northeast regions represent 53 percent, 17 percent, 20 percent, and 10 percent of the overall number of agents in the other category respectively. Therefore, RMA estimated a required sample for the South region at 134 agents or 53 percent of the overall required sample of 251 agents. Similarly, RMA estimated required samples at 41, 50, and 25 agents for the West, Mountain, and Northeast regions, respectively.

6 Since the estimated individual sample size amounts were rounded up to obtain a more conservative measure of the required sampling size, the final sample size of 2,563 is somewhat larger than the theoretically estimated sample size of 2,553.

7 The actual estimated sample sizes are slightly above the required sample sizes due to rounding up results.

8 OMB, Standards and Guidelines for Statistical Surveys, September 2006, (http://www.whitehouse.gov/sites/default/files/omb/assets/omb/inforeg/statpolicy/standards_stat_surveys.pdf), page 14.

