Attachment 2. ERS Crop Management Survey Part B


Crop Management With or Without Cover Crops


Supporting Statement B




COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS



1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.


Note: response rate means the percentage of your respondent sample from which you expect to obtain the required information (if this is not a mandatory collection). Nonrespondents include those you could not contact as well as those you contacted but who refused to give the information.



Potential Respondent Universe


To develop the sample, we will focus on twelve states in the Midwest, a region that currently has low cover crop adoption rates but significant potential for growth in cover crops. The sampling area will span the following states:

  1. Illinois

  2. Indiana

  3. Iowa

  4. Kansas

  5. Michigan

  6. Minnesota

  7. Missouri

  8. Nebraska

  9. North Dakota

  10. Ohio

  11. South Dakota

  12. Wisconsin


These states constitute the Northern Great Plains, Lake, and Corn Belt states of the USDA ERS Farm Production Regions [1] and together are commonly referred to as the Midwest [2].


Corn and soybeans accounted for more than half of all U.S. crop cash receipts in 2023 [3]. Because of the dominance of corn and soybeans among U.S. crops, the survey was developed and tested with corn and soy growers, and the sampling frame will only include farms that have grown corn or soy in the past three years; respondents will therefore have broadly similar cropping systems. The 12-state sampling region described above represents 86% of national corn and soy production [4]. The region is also large enough to capture geographic and climatic variability, both of which are thought to be important drivers of cover cropping decisions and will be used as controls in economic models. The states further west, including Kansas, Nebraska, and the Dakotas, typically have less rainfall, and their cropping systems may include more wheat, which potentially influences cover cropping decisions. The northern states, such as Wisconsin, Minnesota, and Michigan, have lower yields and shorter growing seasons, but also more livestock production, which may influence use of cover crops since they can be used for forage. Sections A and B of the survey instrument ask questions about farm- and field-level production practices, including crop rotation, use of irrigation, and livestock on the farm, to control for this variation when estimating models.


Data Sources Used to Develop Sample


The primary data source used to develop the sampling frame is the Farm Service Agency (FSA) Crop Acreage Reporting Data (CARD). The CARD database includes field-level records of crops planted annually, the Customer ID(s) associated with each field, and the FSA Farm ID of the field. In the FSA data, a customer is an individual or farm business that interacts with the USDA to purchase crop insurance, engage in conservation programs, or for other reasons, and a farm is an administrative unit defined by the agency that consists of multiple fields. FSA Farm IDs may be associated with one or more Customer IDs. To avoid sampling customers who may not be primary decision-makers, we will identify a “principal operator” to serve as the point of contact for each farm.


The CARD database may under-represent boutique and specialty operations as they are less likely to purchase crop insurance or participate in programs. However, small, specialty operations are not the focus of our study. Based on experience working with CARD data for other projects, the research team expects that approximately 90% of farms are represented in the data, with that share higher for large operations. The database only includes planting-related records and does not include any ancillary demographic data on customers.


Information from two additional databases that have the same cross-agency, unique identifying variables (Customer ID, FSA Farm ID) will be appended to the records in the CARD data. First, we will bring in historical program data from the Natural Resources Conservation Service (NRCS) ProTracts database. The ProTracts database includes planned conservation practices on Environmental Quality Incentives Program (EQIP) and Conservation Stewardship Program (CSP) contracts and the fiscal year contracts were signed. This data will tell us (a) whether the farm has participated in EQIP or CSP in the past, and (b) whether they received support for cover crops. We will consider the last ten years of program history (2014-2023) as these contracts are more likely to be relevant to current decisions than earlier contracts. Using this data, we will draw three samples (Group 1, general population; Group 2, program participants with no cover crop; and Group 3, program participants with cover crops).


Second, we will bring in data from the FSA Farm Records Database. The Farm Records Database (FRD) identifies a principal operator for each FSA Farm ID. The invitation to the survey will be sent to the principal operator, as this is the individual most likely to be the primary decision-maker on the farm. If multiple Farm IDs share a single principal operator, those FSA Farms will be grouped into a single observation, or “farm operation”.
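The grouping step above can be sketched in a few lines. This is an illustrative stand-in, not the actual FSA/FRD processing code, and the field names (`farm_id`, `operator_id`, `acres`) are invented for the example:

```python
# Sketch: collapse FSA Farms that share a principal operator into a single
# "farm operation" record, accumulating acreage across the operator's farms.
# Field names are illustrative, not the actual FSA/CARD schema.
from collections import defaultdict

def group_by_operator(farm_records):
    """farm_records: iterable of dicts with 'farm_id', 'operator_id', 'acres'."""
    operations = defaultdict(lambda: {"farm_ids": [], "acres": 0.0})
    for rec in farm_records:
        op = operations[rec["operator_id"]]
        op["farm_ids"].append(rec["farm_id"])
        op["acres"] += rec["acres"]
    return dict(operations)

records = [
    {"farm_id": "F1", "operator_id": "C100", "acres": 320.0},
    {"farm_id": "F2", "operator_id": "C100", "acres": 160.0},
    {"farm_id": "F3", "operator_id": "C200", "acres": 640.0},
]
ops = group_by_operator(records)
# Operator C100's two FSA Farms collapse into one 480-acre operation.
```

Grouping before sampling is what prevents one farm operation from receiving multiple survey invitations through its separate FSA Farm IDs.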


Mailing addresses will be provided by the Farm Service Agency (FSA) Mailings Team. The Mailings Team will associate the list of principal operator Customer IDs with a mailing address and check the validity of addresses against recent death indices using best practices they have developed internally for FSA mailings.


In summary, the potential respondent universe includes all farms in Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin that report growing corn or soybeans in 2021, 2022, or 2023.


Three Samples for Separate Analyses


From the total potential respondent universe, we will draw three separate samples.

  • Group 1 (general population)

  • Group 2 (program participants who have not cover cropped)

  • Group 3 (program participants who have cover cropped)


Responses from the three groups will be used for separate analyses, and we will compare results across the three groups of customers.


Estimate of the Potential Respondent Universe in Total and in Each State for Groups 1, 2, and 3


The number of FSA Farms reported in the following tables contains some duplication. Because FSA Farms are administrative units, many farm operations consist of multiple FSA Farms. When drawing the sample, we will first aggregate by principal operator as defined in the Farm Records Database so that we reduce the chances of sampling the same farm operation more than once.


Table 1 reports the number of FSA Farms in the sample states that grew corn or soybeans in 2021, 2022, or 2023.


Table 1. Group 1, Total Potential Respondent Universe of FSA Farms with Corn or Soybeans


State      Freq.      Percent
IL         186,591    14.7
IN         123,167     9.7
IA         171,600    13.5
KS          93,882     7.4
MI          62,078     4.9
MN         114,058     9.0
MO         105,805     8.3
NE          91,541     7.2
ND          54,745     4.3
OH         110,053     8.7
SD          57,857     4.6
WI          98,600     7.7
Total    1,269,977    100


Table 2 shows the corn and soybean farms with program experience in the past ten years (2014-2023), split by whether that experience included cover crops. These farms are the universe of potential respondents for Groups 2 and 3. Groups 2 and 3 are both subsets of Group 1, but the sample will be drawn without replacement so that a farm will only be selected once.


Table 2. FSA Farms with Corn or Soybeans, by Program Experience


            Group 2: program, no cover crop    Group 3: program, cover crop
State       Freq.      Percent                 Freq.      Percent
IL          5,576       8.5                    9,760      13.1
IN          4,137       6.3                    8,140      10.9
IA          7,831      11.9                    7,742      10.4
KS          6,054       9.2                    2,663       3.6
MI          2,465       3.7                    6,338       8.5
MN          8,874      13.5                    5,762       7.7
MO          6,001       9.2                    5,425       7.3
NE          5,660       8.6                    5,110       6.8
ND          4,538       6.9                    3,449       4.6
OH          3,910       6.0                    6,236       8.3
SD          3,319       5.1                    4,688       6.3
WI          7,231      11.0                    9,424      12.6
Total      65,596      100                    74,737      100




Estimated Response Rate


Based on the research team's experience conducting surveys of farming populations in Michigan and surrounding states over the past 25 years, and given the long-term declining trend in survey response rates, our overall estimated response rate for this information collection, across all three samples and all farm size strata, is 17%.



2. Describe the procedures for the collection of information including:

  • Statistical methodology for stratification and sample selection,

  • Estimation procedure,

  • Degree of accuracy needed for the purpose described in the justification,

  • Unusual problems requiring specialized sampling procedures, and

  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.

  • If you are selecting a uniform respondent universe, you may be using simply a random numbers table to select a sample.

Stratified sampling is often used when the sampling population can be split into non-overlapping strata that individually are more homogeneous than the population as a whole (e.g., gender and age groups). If there are no obvious "dividing lines", grid lines can be used to divide the population. Random samples are taken from each stratum (or class) and the results are combined to estimate a population mean. Stratified sampling is most successful when the variance within each stratum is less than the overall variance of the population.


Stratifying by Farm Acreage


The sample will be drawn proportionally across the sampled states according to the total acres operated by farms in each state.


Previous research has shown that a large share of crop acreage is operated by a small number of large farms [5]. Because these large operators have significant influence on cropland area, we will employ a stratified sampling strategy in which larger farms belong to strata with higher sampling rates. There will be four sampling bins, each representing 25% of the farm acres in a state; in each selected state, we will sample from these quartiles of acres operated. Within each quartile, farms will be sampled randomly without replacement. The development of these strata depends on the aggregation of FSA Farms by principal operator so that we accurately account for farm operations' entire acreage.
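The construction of acreage-share strata can be illustrated as follows. This is a simplified sketch under stated assumptions, not the production sampling code: operations are sorted by size, cut where cumulative acreage crosses each quarter of total acres, and then sampled randomly without replacement within each stratum (the per-stratum rates below are invented for the example):

```python
# Sketch: assign each farm operation to one of four bins so that each bin
# covers roughly 25% of total acres operated (not 25% of farm counts), then
# sample randomly without replacement within each bin.
import random

def acreage_strata(acres_by_operation, n_strata=4):
    """acres_by_operation: {operation_id: total acres}. Returns a list of
    strata (lists of operation ids), ordered smallest to largest farms."""
    items = sorted(acres_by_operation.items(), key=lambda kv: kv[1])
    total = sum(acres for _, acres in items)
    strata = [[] for _ in range(n_strata)]
    cum = 0.0
    for op_id, acres in items:
        idx = min(int(cum / (total / n_strata)), n_strata - 1)
        strata[idx].append(op_id)
        cum += acres  # advance cumulative acreage after placing the farm
    return strata

def sample_within_strata(strata, rates, seed=0):
    """Random sample without replacement in each stratum; larger-farm strata
    can be given higher sampling rates (illustrative rates, not the design's)."""
    rng = random.Random(seed)
    return [rng.sample(s, max(1, int(len(s) * r))) if s else []
            for s, r in zip(strata, rates)]
```

Because the bins hold equal acreage rather than equal counts, the largest-farm stratum contains far fewer operations, so a given sampling rate there translates into heavier coverage of large operators.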


Three Samples for Three Analyses


In order to accomplish Objective 3, “Compare preferences between three groups of farmers: general population of farmers, farmers with prior NRCS program experience without cover crops, and farmers with prior NRCS program experience with cover crops,” we will draw three separate samples, or groups: Group 1, general population; Group 2, program participants who have not cover cropped; and Group 3, program participants who have cover cropped. Each of the three samples will use the same stratified sampling strategy and will be drawn without replacement. The sample sizes will be:

  • Group 1 (general population): 10,000 farms

  • Group 2 (program participants who have not cover cropped): 2,500 farms

  • Group 3 (program participants who have cover cropped): 2,500 farms
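The without-replacement draw across the three groups can be sketched as below. This is a scaled-down illustration, not the sampling implementation; the frame fields (`id`, `program`, `cover_crop`) and the sizes in the test are stand-ins for the real frame variables and the 10,000/2,500/2,500 targets:

```python
# Sketch: draw Group 3 and Group 2 from their program-history frames first,
# then draw Group 1 (general population) from everyone not yet selected,
# so no farm operation is selected more than once across groups.
import random

def draw_samples(universe, sizes, seed=0):
    """universe: list of dicts with 'id', 'program', 'cover_crop' flags.
    sizes: {'group1': n1, 'group2': n2, 'group3': n3}."""
    rng = random.Random(seed)
    drawn = set()
    samples = {}
    frames = {
        "group3": [f for f in universe if f["program"] and f["cover_crop"]],
        "group2": [f for f in universe if f["program"] and not f["cover_crop"]],
    }
    for name in ("group3", "group2"):
        frame = [f for f in frames[name] if f["id"] not in drawn]
        samples[name] = rng.sample(frame, sizes[name])  # without replacement
        drawn.update(f["id"] for f in samples[name])
    g1_frame = [f for f in universe if f["id"] not in drawn]
    samples["group1"] = rng.sample(g1_frame, sizes["group1"])
    return samples
```

Excluding already-drawn farms before each subsequent draw is what guarantees the three samples are disjoint even though Groups 2 and 3 are subsets of the Group 1 universe.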


Estimation Procedure


For each group (Group 1: general population; Group 2: program participants who have not cover cropped; and Group 3: program participants who have cover cropped) we will estimate a logistic regression model in which the dependent variable is cover crop adoption on a field (responses to the contract scenarios in Section C of the survey). We model cover crop adoption as a function of contract characteristics and the farm, management practice, and attitude variables collected in the other sections of the survey. Model results will be compared across the three samples. In these models we will use post-stratification weights that account for survey strata and the potential for differential response based on known characteristics (e.g., farm size, number of owners, location, cropping history) to mitigate potential non-response bias.
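A minimal sketch of the weighted logit is below. Production estimation would use standard statistical software; this pure-Python gradient ascent only illustrates how survey weights enter the log-likelihood, and the single regressor (a contract payment level) and all variable names are illustrative:

```python
# Sketch: weighted logistic regression of field-level adoption (0/1) on a
# contract feature, with each observation's post-stratification weight
# multiplying its contribution to the score. Gradient ascent stands in for
# the production estimator.
import math

def weighted_logit(X, y, w, lr=0.1, iters=5000):
    """X: rows of feature lists (an intercept is prepended internally);
    y: 0/1 outcomes; w: survey weights. Returns [intercept, slopes...]."""
    n = len(X)
    k = len(X[0]) + 1
    beta = [0.0] * k
    for _ in range(iters):
        grad = [0.0] * k
        for xi, yi, wi in zip(X, y, w):
            row = [1.0] + list(xi)
            z = sum(b * x for b, x in zip(beta, row))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted adoption probability
            for j, x in enumerate(row):
                grad[j] += wi * (yi - p) * x  # weighted score contribution
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta
```

With adoption increasing in the payment offered, the estimated slope on the payment feature comes out positive, which is the sign pattern the contract-feature valuations in the analysis rest on.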


Variation in the contract features underpins the regression models to be estimated for each of the three respondent groups. The contract features will be input into experimental design software (Ngene), which optimizes a design to maximize the information obtained from the specified models. The final design will provide a subset of contracts that are then randomized across respondents. Each respondent will be shown a set of six contract scenarios, resulting in a panel dataset of choices. The survey (Attachment 4) shows two alternate formats of the contract scenario question; we will randomize the format across respondents, with 50% receiving each version. A full list of the levels of contract features used in the experimental design can be found in Attachment 14.


Model results will provide two key pieces of information. First, they will allow us to estimate the monetary value of changes to contract features. This information can be used by program agencies, state and regional organizations, or private entities when considering design of cover crop contracts. Second, they will allow us to estimate the field-level probability of cover crop adoption given a set of features. Using the reported field acreage, we can generate a supply curve of land for cover crops across the three respondent groups.
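The second use of the model results can be sketched as follows: given estimated coefficients, expected enrolled acreage at each payment level is the sum of field acreages weighted by predicted adoption probabilities, traced over payment levels to form a supply curve. The coefficients, payment levels, and field acreages below are invented for the example:

```python
# Sketch: build a supply curve of land for cover crops from an estimated
# adoption model. Expected acres at a payment level = sum over fields of
# (field acres) x (predicted adoption probability at that payment).
import math

def predict_prob(beta0, beta_pay, payment):
    """Logit probability of adoption given intercept and payment slope."""
    z = beta0 + beta_pay * payment
    return 1.0 / (1.0 + math.exp(-z))

def supply_curve(field_acres, beta0, beta_pay, payments):
    """field_acres: list of reported field acreages.
    Returns expected enrolled acres at each payment level."""
    return [sum(a * predict_prob(beta0, beta_pay, p) for a in field_acres)
            for p in payments]
```

With a positive payment coefficient, expected enrolled acreage rises monotonically in the payment offered, giving the upward-sloping supply curve described above.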




3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.


Any aspect of your plan which makes it easier and more attractive to comply with the request for information, would tend to maximize response rate. This would include:


Such steps as pre-notification and various types of follow-up with those who did not respond at the first opportunity (give details, e.g. intervals for follow-up, type(s) of follow-up, how many times you will follow up)

Making the questions as simple and brief as possible

Already having a good working relationship with this group and/or the group’s perception that actions based on the information collected would be helpful to them.

All NPS submissions are required to identify a plan to address nonresponse. This means that a large enough number of respondents didn’t give information so that there is a possibility that their answers as a group might have differed significantly from those who did respond. Following up with non-respondents – resending surveys or sending a shorter version of the survey, trying a phone interview if possible, etc. are all effective strategies.


We will follow best practices for maximizing survey response rates, including use of pre- and post-completion incentives, multiple follow-up notifications to individuals to address nonresponse, and making the survey available through multiple modes.


Notifications to individuals and time frames are as follows. All notifications will be sent approximately 1 week apart, except for Mailing #5 (paper survey), which will be sent 2 weeks after Mailing #4.


  1. Letter invitation to survey including a $5 pre-survey incentive (Attachment 5)

  2. Reminder postcard (Attachment 6) (+1 week)

  3. Reminder postcard (large format) (Attachment 7) (+1 week)

  4. Second letter invitation to survey (Attachment 8) (+ 1 week)

  5. Paper survey (if not completed online) (Attachment 9) (+2 weeks)


The data used to develop the mailing list will include a limited number of email addresses and no phone numbers. Some customers provide an email address but indicate that they prefer not to be contacted by email.


Nonresponse bias will primarily be judged by comparing known characteristics of our sample frame to those of the respondents. We will develop post-stratification weights to match the characteristics of the respondent sample to those of the sampling frame, based on variables that describe farms in the data (e.g., farm size, number of owners, location, cropping history).
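A simple cell-based version of this weighting can be sketched as below. The real weights would use the survey strata and multiple frame variables (and possibly raking); here each respondent is weighted by the ratio of their cell's share of the frame to its share of respondents, and the single "farm size class" cell variable is illustrative:

```python
# Sketch: cell-based post-stratification. A respondent's weight is the
# frame share of their cell divided by the respondent share of that cell,
# so the weighted respondent mix matches the sampling frame.
from collections import Counter

def poststrat_weights(frame_cells, respondent_cells):
    """frame_cells / respondent_cells: one cell label per unit
    (e.g., a farm-size class). Returns one weight per respondent."""
    frame_share = Counter(frame_cells)
    resp_share = Counter(respondent_cells)
    n_frame, n_resp = len(frame_cells), len(respondent_cells)
    return [(frame_share[c] / n_frame) / (resp_share[c] / n_resp)
            for c in respondent_cells]
```

Cells over-represented among respondents (relative to the frame) receive weights below 1, under-represented cells above 1, and the weights sum to the number of respondents.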



4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of test may be submitted for approval separately or in combination with the main collection of information.


Pilot surveys of 10 or more are often conducted, and must go through the PRA approval process.


The questionnaire for this information collection was developed through 16 in-depth cognitive interviews supported by OMB Control No. 0536-0073 (Exp. Date 04/30/2025) (summary provided in Attachment 10) and refined through multiple conversations with Federal agencies, including the Natural Resources Conservation Service (NRCS) Programs Deputy Area, the NRCS Science and Technology Deputy Area, the USDA Farm Production and Conservation Business Center (FPAC-BC), the FSA Mailings Team, the Risk Management Agency (RMA), and the Office of the Chief Economist (OCE), as well as the Sustainable Agriculture Research and Education (SARE) program. In addition, public comments submitted through the 60-day Federal Register Notice were considered.


Interviews assessed respondents’ understanding of the survey as a whole and tested the performance of alternate question formats. Question format, skip logic, and wording were refined after the interviews and the additional feedback to reduce redundancy, improve understanding, and better serve the research goals. The main lessons from the interviews were, first, that respondents typically understood the questions well, were familiar with the types of information asked in the survey, and were accustomed to making similar kinds of decisions on their operations. Second, question language that accommodated farmers’ specific circumstances reduced respondent burden by making it quicker and easier to select from the available answer choices. Finally, cover crop adoption is a decision made in the context of complex management systems, so information about the farm operation, including crop type, the presence of livestock, the adoption of other management practices, and the availability of equipment, is necessary to estimate models of adoption. These lessons were incorporated into the final version of the survey included in this request.


In addition, we will conduct 9 pilot surveys with farmers contacted through MSU Extension. This pilot test will serve two purposes. First, it will serve as a final check that the online survey instrument is formatted correctly and that skip patterns and automation are functioning. Second, the responses to the contract questions will be used as inputs into Ngene, the experimental design software, as the additional data will strengthen the design. Responses will not be analyzed or reported in any way and will be used only to optimize the design.



5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


The following individuals were consulted on statistical aspects of the design:



Frank Lupi will lead the implementation of the survey. Data will be analyzed by Frank Lupi, Ying Wang, and USDA-ERS members of the project team.





[1] https://www.ers.usda.gov/webdocs/publications/42298/32489_aib-760_002.pdf?v=1295.4

[2] https://www.britannica.com/place/Midwest

[3] https://www.ers.usda.gov/data-products/chart-gallery/gallery/chart-detail/?chartId=76946

[4] USDA National Agricultural Statistics Service (NASS). 2023 Census of Agriculture. Retrieved from https://quickstats.nass.usda.gov/

[5] James M. MacDonald, Robert A. Hoppe, and Doris Newton. Three Decades of Consolidation in U.S. Agriculture, EIB-189, U.S. Department of Agriculture, Economic Research Service, March 2018.

Author: Tanner, Sophia - REE-ERS, Kansas City, MO
File created: 2024-11-14
