Supporting Statement B
Conservation Auction Behavior: Effects of Default Offers and Score Updating
Economic Research Service
OMB Control Number: 0536-XNEW
B: COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS
1: Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection has been conducted previously, include the actual response rate achieved during the last collection.
The goal of the farmer experiment is to estimate the treatment effects of a default auction offer and of live updating of offer scoring on average offer structure. Since these tools could only be used once a farmer has already decided to participate in an auction and is in the process of preparing an offer, the target population for the study is farmers likely to participate in future Conservation Reserve Program (CRP) signups.
Most CRP participants are former participants. In the two most recent signups (2020 and 2021), the share of offers coming from reenrolling land ranged from 63 percent to 73 percent. The sampling frame for the study consists of the participants in these recent CRP General Signups. In these signups, there were over 79,000 offers from over 70,000 farmers. (Since offers are for individual tracts of land, farmers can make more than one offer to the program.) Because the CRP General Signup is most attractive to re-enrollments, the population of recent participants is highly representative of the target research population (for inference purposes) of likely participants in future CRP General Signups. In addition, the focus on this population is likely to result in a higher response rate due to participants' prior engagement with the program.
The assumed response rate for this study is 10 percent. This is considerably lower than the response rate for recent USDA economic surveys (often 50-60 percent) and higher than the response rates in a number of recent experimental studies with farmer populations conducted by universities (less than 5 percent). This response rate is also lower than the average “reoffer” rate for prior CRP land (about 50 percent).
To achieve the 1,100 usable observations assumed in the power analysis, the sample will be drawn as a simple random sample of the prior CRP General Signup population, which covers the full U.S., with no stratification. At the assumed 10 percent response rate, this requires a sample of 11,000 farmers, about 15 percent of the sampling frame.

After the sample is drawn, the experiment will be conducted in “waves” because of the interaction between the budget constraint for participant payments (which is how the auction “clears”) and the uncertainty over the response rate. In the first wave, recruitment materials will be sent to 4,000 of the 11,000 sampled farmers. The size of the second wave will be adjusted based on the response rate from the first wave. For example, if the actual response rate from the first wave is 10 percent, the second wave will include all of the remaining 7,000 farmers from the sample of 11,000. If the response rate is 20 percent (800 participants), the second wave will consist of 1,500 farmers, which should yield the necessary additional 300 observations. If the response rate is 27.5 percent or higher, meaning the 1,100-participant target is met or exceeded in the first wave, no second-wave recruitment will be sent. A third wave of recruitment will be used only if the second wave under-performs the first wave and any of the sample of 11,000 remains available for recruitment.

The alternative approach of recruiting all 11,000 farmers in a single wave could, under a higher-than-expected response rate, cause the budget-constrained auction to clear with a much lower acceptance rate than the range indicated in the instructional materials. While there is precedent for wide variation in the acceptance rate in the actual CRP General Signup, due to variable participation levels and program budget (acreage) constraints, the goal in this project is to maintain a relatively close correspondence between the acceptance rate assumed in the Nash Equilibrium calibration and the acceptance rate realized in the study. (The Nash Equilibrium model finds the optimal offers for the pool of nine hypothetical fields used in this study, where all participants have shared beliefs about the probability of acceptance (the likely “cutoff” EBI score) that are consistent with the full set of optimal offers.)

Each wave will clear as a separate auction with a budget constraint proportional to that wave’s share of total participants. For example, if 600 of 1,000 total participants are in the first wave, the budget for that wave’s auction will be 60 percent of the total payments budget. This ensures that participants in different waves face the same expected probability of acceptance and, given the large number of participants, the same distribution of competing offers.
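To make the wave-sizing rule concrete, the sketch below implements the logic described above in Python. The constants follow the text; the function names, the rounding rule, and the explicit total-participant argument for the budget share are illustrative assumptions, not the study's actual implementation.

```python
# Illustrative sketch of the wave-sizing and budget-allocation rules.
TARGET = 1_100    # usable observations assumed in the power analysis
SAMPLE = 11_000   # simple random sample drawn from the sampling frame
WAVE_1 = 4_000    # farmers recruited in the first wave

def second_wave_size(wave1_participants: int) -> int:
    """Farmers to recruit in wave 2, given wave 1 participation."""
    remaining_need = TARGET - wave1_participants
    if remaining_need <= 0:
        return 0                          # target met; no second wave
    observed_rate = wave1_participants / WAVE_1
    needed = round(remaining_need / observed_rate)
    return min(needed, SAMPLE - WAVE_1)   # capped by the remaining sample

def wave_budget(wave_n: int, total_n: int, total_budget: float) -> float:
    """Budget for one wave's auction, proportional to its participant share."""
    return total_budget * wave_n / total_n

# Examples from the text: a 10 percent first-wave response (400 participants)
# implies recruiting all remaining 7,000 farmers; a 20 percent response (800)
# implies 1,500; 600 of 1,000 total participants get 60 percent of the budget.
assert second_wave_size(400) == 7_000
assert second_wave_size(800) == 1_500
assert wave_budget(600, 1_000, 1.0) == 0.60
```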
2: Describe the procedures for the collection of information including:
• statistical methodology for stratification and sample selection,
• estimation procedure,
• degree of accuracy needed for the purpose described in the justification,
• unusual problems requiring specialized sampling procedures.
The sampling will be done through a simple random sample of the sampling frame with no stratification.
There will be no correction to the sample data for non-response and no imputation of data that participants choose not to provide. Analysis will only be done on participants who fully complete all three rounds of the simulated conservation auction.
The setting for the experiment is a multi-unit reverse auction in which participants compete on price (requested rental payment) and quality (land characteristics and proposed conservation practices). Participants are provided with a hypothetical field in each round. Those fields vary in their baseline score (competitiveness) and in field-specific bid caps (maximum allowable rental payments). Because the setting is experimental, the fields can also vary in their reserve value (the return if an offer is not accepted in the auction), and participants can be given marginal incentives analogous to a real-world setting through participant payments based on the net returns of a winning offer.
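As a concrete illustration of this induced-values structure, the sketch below shows one way a participant's per-round payoff could be represented. The field attributes follow the description above; the linear payoff form and the practice-cost term are expository assumptions, not the study's actual payoff rules.

```python
from dataclasses import dataclass

@dataclass
class Field:
    baseline_score: float   # competitiveness before any upgrades
    bid_cap: float          # maximum allowable rental payment
    reserve_value: float    # return if the offer is not accepted

def round_payoff(field: Field, requested_payment: float,
                 practice_cost: float, accepted: bool) -> float:
    """Net return for one round: a winning offer earns the requested rental
    payment net of the (assumed) cost of the proposed conservation practice;
    a losing offer earns the field's reserve value."""
    assert requested_payment <= field.bid_cap  # offers cannot exceed the cap
    if accepted:
        return requested_payment - practice_cost
    return field.reserve_value
```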
The main outcome measurements, the data collected, are the offer price (discount relative to the bid cap) and offer quality (conservation practice) for each round. Based on insights from the behavioral economics literature, the treatments are a default starting offer and live score updating. In many contexts, defaults are found to cause an anchoring effect in which decision makers select a final option closer to the default than the option they would choose without the default. Live updating is hypothesized to lead to greater responsiveness to the underlying incentives, based on the idea that cognitive constraints can attenuate responsiveness in complex decision-making environments. The current CRP enrollment software uses an active-choice (i.e., blank) default and withholds scoring information until the end of the enrollment process.
The experiment uses two treatments to test how changes in the auction environment, specifically the auction software, might influence offer structure. The treatments in this experiment are assigned in a balanced 2-by-2 design without estimation of the interaction effects. This maximizes the statistical power of testing each of the two research hypotheses because the treatments are independent of each other.
Each participant is asked to complete an offer in a training round and in three actual rounds. Treatment assignment varies across participants but remains constant across rounds for each participant. Over the three actual rounds, the participants receive three different fields, which creates variation in the expected optimal offer for each participant. This sort of within-participant variation is necessary for testing the hypothesis about live score updating.
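A minimal sketch of one way to implement this balanced, participant-level assignment appears below; the shuffle-and-cycle blocking scheme and the seed are illustrative assumptions.

```python
import random

def assign_treatments(participant_ids, seed=0):
    """Assign each participant to one cell of the 2-by-2 design
    (default offer x live score updating), balanced across cells and
    held constant across the participant's three rounds."""
    cells = [(default, live) for default in (0, 1) for live in (0, 1)]
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)                       # random order, fixed seed
    return {pid: cells[i % 4] for i, pid in enumerate(ids)}
```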
Since expected optimal offers vary by field characteristics (the “endowments” in each round for each participant), the treatment effects are jointly estimated in a set of regressions, one for each outcome. The explanatory variables in each regression are a dummy variable for the default treatment, the endowment (exogenous) score, a dummy variable for the live updating treatment, and the interaction of the live updating dummy and the endowment score. The total upgrade (cumulative additional points) and the percent discount offered are both estimated using a Tobit model, given the corner solution of zero in both cases. The cover practice, for which there are four options, is estimated using an ordered probit. These basic specifications, absent the treatment effects, were selected using an econometric analysis of actual CRP offer data. An additional test will use a binary choice model to check for the effect of the live score updating treatment on use of the “back” button in the software.
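A minimal sketch of these specifications follows, assuming a pandas DataFrame df with one row per participant-round. The column names are placeholders rather than the study's actual variable names, and the hand-coded Tobit likelihood and statsmodels' OrderedModel stand in for whatever routines the study actually uses.

```python
import numpy as np
import pandas as pd
from scipy import optimize, stats
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Design matrix: default dummy, endowment score, live-updating dummy, and
# the live-updating x endowment interaction (plus a constant for the Tobit).
X = pd.DataFrame({
    "const": 1.0,
    "default": df["default"],
    "endowment": df["endowment_score"],
    "live": df["live_update"],
    "live_x_endow": df["live_update"] * df["endowment_score"],
})

def tobit_negll(params, y, X, lower=0.0):
    """Negative log-likelihood for a Tobit model censored below at `lower`
    (the corner solution of zero discount or zero upgrade)."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    at_corner = y <= lower
    return -np.where(
        at_corner,
        stats.norm.logcdf((lower - xb) / sigma),   # censored observations
        stats.norm.logpdf((y - xb) / sigma) - log_sigma,
    ).sum()

# Tobit for the percent discount (and, analogously, the total upgrade).
start = np.zeros(X.shape[1] + 1)   # betas plus log sigma
tobit_fit = optimize.minimize(
    tobit_negll, start,
    args=(df["pct_discount"].to_numpy(), X.to_numpy()),
    method="BFGS",
)

# Ordered probit for the four-option cover practice choice; OrderedModel
# absorbs the constant into its thresholds, so drop it from the regressors.
probit_fit = OrderedModel(
    df["cover_practice"], X.drop(columns="const"), distr="probit"
).fit(method="bfgs")
```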
There are no concerns about the accuracy of individual responses since the software constrains choices. While some participants could engage in non-optimal or essentially random bidding, the literature on experimental conservation auctions nearly always finds that study participants respond to the underlying auction incentives in the expected direction. The study has been designed to have 1,100 usable observations (responses) and a limited number of treatments. The combination of limited treatments and sample size was selected based on a power analysis in which the minimum detectable effect on the rental rate portion of the offer would be approximately 0.2 percentage points. While this seems like a small effect in absolute terms, which would suggest that the experiment is perhaps overpowered, it is a reasonably large effect in relative terms. The average discount in recent signups has ranged from 3.9 to 6.0 percent (Attachment D), so a 0.2 percentage point change in the average discount would be a treatment effect of roughly 3 to 5 percent. In the context of the CRP General Signup, a 0.2 percentage point reduction in rental payments would reduce total program rental payments by $200,000 for every $100,000,000 in accepted offers. In consultation with program experts, an effect of at least this order of magnitude would likely be necessary to motivate the expense and effort required to implement changes in the enrollment software.
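These magnitudes can be checked with simple arithmetic, as in the short sketch below (numbers taken from the text above).

```python
mde = 0.2                          # minimum detectable effect, pct. points
for avg_discount in (3.9, 6.0):    # range of recent-signup average discounts
    print(f"relative effect at {avg_discount}%: {mde / avg_discount:.1%}")
# -> roughly 5.1% and 3.3%

accepted = 100_000_000             # dollars of accepted offers
print(f"rental payment reduction: ${accepted * mde / 100:,.0f}")
# -> $200,000
```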
There are no unusual problems that require special sampling procedures.
As noted in item 1 above, the recruitment will be done in waves to meet the targeted number of usable observations, which allows the auction to clear at the expected budget with the expected offer acceptance rate. If participation rates exceed expectations, the wave design will also reduce respondent burden by limiting recruitment.
3: Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.
To achieve the highest possible response rate among the farmer population, the study will use three techniques that have been found to increase response rates among farmers in experimental studies (Weigel et al. 2021; see Supporting Statement A). First, the recruitment materials will come from a trusted source: the recruitment letters will come from USDA and will feature the USDA logo, and the webpage that participants are asked to log into will be on the usda.gov domain. Second, the study will use a priming postcard, a main recruitment letter, and two follow-up postcards. Third, the study will offer a monetary payment that includes a minimum “participation” payment, and the recruitment materials will highlight this aspect (Attachment G).
The study will include a non-response bias analysis. Since the sampling frame consists of prior CRP participants, the study will test for differences in average characteristics observed in the prior CRP offers: field size, EBI score, offer acceptance (0/1), and rental rate.
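A minimal sketch of these comparisons follows, assuming the prior-offer characteristics are available as pandas DataFrames for respondents and non-respondents; Welch's t-test is used here as one reasonable choice of test, and the column names are placeholders.

```python
from scipy import stats

CHARACTERISTICS = ["field_size", "ebi_score", "accepted", "rental_rate"]

def nonresponse_tests(respondents, nonrespondents):
    """Two-sample tests for differences in mean prior-offer characteristics
    between study participants and non-participants."""
    results = {}
    for col in CHARACTERISTICS:
        t, p = stats.ttest_ind(
            respondents[col].dropna(), nonrespondents[col].dropna(),
            equal_var=False,            # Welch's t-test
        )
        results[col] = {"t_stat": t, "p_value": p}
    return results
```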
The study will also calculate the average participation rate by state to see whether the sample is geographically representative. The study will not weight the analysis to correct for non-response. Any findings of potential imbalance between the participant and non-participant populations will be disclosed to indicate potential limitations in making inferences about treatment effects for the full population.
4: Describe any tests of procedures or methods to be undertaken.
The research team conducted a pilot version of the experiment as a pre-test with six students in 2020 to confirm that the expected completion time was not more than 30 minutes. Most students completed the study in 15 to 20 minutes. The pre-test also confirmed that participants understood the tasks required in the study (Attachment N).
Since this study relies upon induced “values” (the incentives that participants face in structuring their offers), some method was needed for calibrating the study. Beginning with values derived from the actual CRP program, the research team developed a numerical model of the auction to solve for the Nash Equilibrium offers. The field characteristics and the ranking and payoff parameters, which interact to induce variation in the underlying incentives, were then calibrated to produce an expected distribution of offer structures close to that observed in the actual program. This simulation assumes the null hypothesis for the study (that there is no anchoring effect from defaults and no effect from changing the timing of information on scoring) and is described in greater detail in Attachment D.
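The consistency requirement in this calibration can be viewed as a fixed-point problem: offers must be optimal given the believed acceptance cutoff, and the cutoff implied by clearing the budget against those offers must match the belief. The sketch below is highly stylized; optimal_offer and clearing_cutoff stand in for the study's actual numerical model.

```python
def solve_equilibrium(fields, budget, optimal_offer, clearing_cutoff,
                      tol=1e-6, max_iter=1000):
    """Iterate to a fixed point in the believed cutoff EBI score."""
    cutoff = 0.0                                   # initial belief
    for _ in range(max_iter):
        # Best-response offers given the shared belief about the cutoff.
        offers = [optimal_offer(f, cutoff) for f in fields]
        # Cutoff implied by clearing the budget against those offers.
        new_cutoff = clearing_cutoff(offers, budget)
        if abs(new_cutoff - cutoff) < tol:
            return offers, new_cutoff              # beliefs are consistent
        cutoff = new_cutoff
    raise RuntimeError("no consistent cutoff found within max_iter")
```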
The full model for the estimation of treatment effects is described in Attachment D.
5: Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), or other person(s) who will actually collect and/or analyze the information for the agency.
The research team that developed the experimental design and statistical analysis plan consisted of USDA staff and researchers from the Center for Behavioral and Experimental Agri-environmental Research (CBEAR).
Steven Wallander, Senior Economist, USDA Economic Research Service, Conservation and Environment Branch, (202) 694-5546
Richard Iovanna, Economist, USDA Farm Production and Conservation Business Center, Economic and Policy Analysis Division, (202) 720-5291
Kent Messer, S. Hallock DuPont Professor of Applied Economics, Co-Director of the Center for Behavioral and Experimental Agri-environmental Research, University of Delaware, Department of Applied Economics and Statistics, (302) 831-1316
Paul Ferraro, Bloomberg Distinguished Professor of Human Behavior and Public Policy, Co-Director of the Center for Behavioral and Experimental Agri-environmental Research, Johns Hopkins University, Carey Business School, (410) 234-9389
Laura Paul, Post-doctoral Researcher, Center for Behavioral and Experimental Agri-environmental Research, University of Delaware.
In addition, the draft study design (Attachment D) was reviewed by three experts on conservation auction experiments (Attachments E1, E2, and E3).