Attachment E2 EDAP Reviewer 2

Conservation Auction Behavior: Effects of Default Offers and Score Updating

OMB: 0536-0078

Card Review: “Conservation Auction Behavior: Effects of Default Offers and Score Updating”

Experimental Design:

  • Does the research propose appropriate methods to test a meaningful hypothesis?

I would summarize the research question as follows:

Can specific changes to the CRP auction design, namely (1) the addition of “default choices” and (2) real-time “score updating,” be employed to improve the cost-effectiveness and quality (in terms of conservation outcomes) observed in CRP reverse auctions?

Overall, it appears the authors propose appropriate methods to test (several) meaningful hypotheses related to the expected effects of (1) default offers (H1A-H1C), and (2) score updating (H2A & H2B) (p. 2).



The only weakness I see among the hypotheses is in those related to “H3: Students vs. Farmers.” As explained in more detail below, I don’t fully understand why there is a need to compare/contrast participants from these two groups, or (perhaps more importantly) to what extent any observed differences could be traced to meaningful differences between the two groups (as opposed to differences in the parameters/incentives applied/presented to the groups within the proposed experiment itself).



  • Evaluate the match between the proposed treatments and the research question.

The authors’ proposed 2x2 treatment design and the treatments themselves appear well-tailored to addressing the research question.

In particular, the (1) default starting offer treatment (wherein participants are started at the highest, that is, most expensive, conservation practice and the highest-possible non-negative-payoff bid-down of 9 percent of the bid-cap - $15), versus the control of an “active choice” (wherein participants must select a conservation practice and bid-down rather than “opt out” of a default), is sound and well-founded in the economics literature (though the proposal itself does not reference prior papers on the effect of defaults).1

Similarly, the (2) “score updating” treatment appears appropriate to test the impact of real-time knowledge of the effects of choices on EBI scores versus the control of no live updating (that is, knowledge of the final EBI score/points is not provided to participants until the final submission screen). Again, the proposal does not provide references to existing research on the timing/quality of information provision, but papers to substantiate the application of the score-updating condition here would be very easy to find.



  • Evaluate the external validity of the experiment.



The authors seem to contradict themselves regarding external validity. On one hand, they correctly note that external validity is rooted in the extent to which the experiment’s structure, incentives, and parameters closely mimic those faced by participants in the actual CRP auctions. I believe that the experimental environment and study design align closely enough with the actual CRP auction framework to be externally valid. The authors note explicitly when (and why) they were forced to simplify or adjust the experimental environment (reducing the choice sets of conservation options, etc.), and I believe these simplifications are well-justified.



I do not, however, see the direct benefit of adding the “Farmer cohort” (for the purposes of achieving external validity). Why does this give the experiment more external validity? Similarly, why the odd choice to endow Students with a single acre and Farmers with 5 acres?



One small concern I have regarding external validity relates specifically to the Farmer sample. There may be some farmers who (for any number of reasons) want to establish the best conservation practices on parcels of their land in the real world and are not cost-sensitive, and thus would submit high bid-downs and select high-cost conservation practices. But, in the context of this artefactual experiment, what incentive would they have to do this? Within the experiment itself, they are just like everyone else, trying to maximize their profits from taking the time to participate in this study. So, in this sense, the authors may be underestimating the impact of the defaults and score-updating treatments relative to the real world (though if the results were to be biased, this is the direction you’d want the bias to go in).



  • Comment on the key features of the design such as the assignment of treatment, the statistical power, participant recruitment, and other features that you view as critical.



  1. Participant Recruitment: I am slightly confused by the recruitment process and how it might impact the auction environment. If I understand the recruitment process correctly, the authors seek 1,000 participants from both the Student and Farmer populations. The “first wave” of recruitment will entail contacting 1,000 potential members of each group and conducting the experiment using however many of each group show up (and then adjusting recruitment for the next wave based on the show-up rate from the first wave). But won’t this affect the study design, in that it may force participants to compete in auctions with differing numbers of bidders? Shouldn’t the number of other bidders remain constant across individual sessions?



  2. Actual Payments to Participants: I understand most of the example regarding the calculation of actual payments from the experiment on p. 12, except for the last sentence: “Participants who bid down more or selected an even better practice and are accepted would receive less than $190.” Where did the $190 come from?



  3. Farmer vs. Student Cohorts: As discussed above, I fail to see how adding the Farmer cohort enhances the external validity of the experiment. Moreover, I am concerned by the differences in acreage endowments for the Students (1 acre) vs. Farmers (5 acres). Why is there a need to introduce an endowment differential into the study design? I don’t think this is ever explicitly explained by the authors. I am concerned that giving the Farmers the opportunity to earn much more money (relative to the Students) will bias the results (because, as mentioned above, if they can make more money, Farmers may view the experiment even more as a toy model to be “beaten/solved” for the purposes of profit-maximization, as opposed to an analogue for studying behavior in actual CRP auctions). Also, to the extent that Students and Farmers may have different marginal utilities of income, might expanding the payoff set so much for Farmers introduce bias in the comparative tests that the authors aim to run? They seem to acknowledge this indirectly in H3A and H3B on p. 2, but, again, it’s not clear/explicit.



  4. Scoring/Offer Selection Rule(s): After reading the proposal, I don’t come away with a clear understanding of how offers are to be evaluated/accepted after each round of the auction (either in actual CRP auctions or in the experimental auctions). I think the proposal would benefit from additional explanation/clarity here (pp. 12-13).



  • Other Comments:



  1. I do not understand the “Expected payoff matrix for a $50 acre/bid cap field at NE” (Table A1; p. 19). For example, for this Low-Low field, the entry corresponding to a bid-down of 0% and selection of “basic conservation practices” (the uppermost left entry) displays a value of $98. How is this possible? [e.g., $50 bid-cap - $0 (no bid-down) - $31 (RR) - $2 (conservation cost) = $17, plus the $10 participation fee = $27]. Even assuming this example were from the “Farmers” cohort (where each participant is endowed with 5 acres), the numbers still do not add up [$17*5 + $10 = $95]. The entry in the lowermost right cell (-$5) similarly does not make sense to me [e.g., $42.50 (15% bid-down on $50 bid-cap) - $31 (RR) - $15 (conservation cost) = -$3.50; if you add in the $10 participation bonus, the expected payout would be $6.50]. The per-field payoff formula I am assuming in these calculations is sketched after this list.



  2. On the topic of “cost-effectiveness,” is there a reason the authors do not adopt the typical convention in experiments with multiple, independent “rounds,” wherein one (and only one) round is randomly selected for payment at the end of the experiment? As I understand it, as presented, the authors intend to make payments based on the outcomes of each of the three rounds for all participants (plus the $10 show-up fee).



  3. The proposal needs to be copy-edited closely. There are many typos, misspellings, and incorrect/unintelligible sentences that are obviously the result of redlining/multiple authors proposing edits.
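
For reference, the per-field payoff calculation I am assuming in comment 1 above is the following (this is my reading of the payment rules on pp. 12 and 19, not a formula quoted verbatim from the proposal):

\text{payment} = \text{acres} \times \bigl[ \text{bid cap} \times (1 - \text{bid-down}) - \text{rental rate} - \text{conservation cost} \bigr] + \text{participation fee}

So, for the uppermost left cell of Table A1: 1 \times [\$50 \times (1 - 0) - \$31 - \$2] + \$10 = \$27, not the $98 shown.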









Analysis Plan:

  • Are the statistical tests for treatment effects appropriate?



The statistical tests appear to be appropriate for the analysis of the treatment effects. However, I do believe the statistical analysis section of the proposal would greatly benefit from the addition of text directly linking the hypotheses listed on p. 2 (that is, H1A-H1C, H2A-H2B, and H3A-H3B) to the tests/test statistics that are proposed. That is, either say explicitly “to test Hypothesis H1A, we propose to…”, or, in the alternative, add a column to “Table 5: Treatment Effect Estimates” that lists which specific hypothesis(es) are implicated by a given test statistic.



  • Have the researchers left out any tests or analysis that should be conducted?

I don’t know if the authors will have the required statistical power, but I would consider adding some type of control/test for order effects. That is, does the order in which participants were presented with the High-Medium-Low field types/bid-caps affect their decision-making? A rough sketch of one such specification follows.
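
The following is a minimal sketch of the kind of order-effects check I have in mind, not a prescription; the file name, the column names, and the choice of bid-down as the outcome are all hypothetical placeholders rather than features of the authors’ actual data.

# Illustrative order-effects check; "auction_choices.csv" and all column names
# are assumptions made for this sketch, not the authors' actual data layout.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("auction_choices.csv")  # one row per participant-field decision

# Regress the chosen bid-down on treatment, presentation order, and their interaction;
# a joint test on the order terms indicates whether the High-Medium-Low sequencing
# of field types/bid-caps affects decision-making.
model = smf.ols("bid_down ~ C(treatment) * C(field_order)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["participant_id"]})
print(result.summary())

Even a simple comparison of mean bid-downs by presentation order would serve if statistical power is a concern.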



1 Papers on this topic are easy to find. Two of my favorites (though not “experimental”) are: Kareem Haggag & Giovanni Paci, “Default Tips,” 6(3) American Economic Journal: Applied Economics 1-19 (July 2014); and B. Douglas Bernheim, Andrey Fradkin & Igor Popov, “The Welfare Economics of Default Options in 401(k) Plans,” 105(9) American Economic Review 2798-2837 (September 2015).

