
SUPPORTING STATEMENT

PRETEST OF THE ELWHA RIVER DAM REMOVAL AND FLOODPLAIN RESTORATION ECOSYSTEM SERVICE VALUATION PILOT PROJECT SURVEY

OMB CONTROL NO. xxxx-xxxx

B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local governmental units, households, or persons) in the universe and the corresponding sample are to be provided in tabular form. The tabulation must also include expected response rates for the collection as a whole. If the collection has been conducted before, provide the actual response rate achieved.

The eligible study population is defined as non-institutionalized U.S. adults aged 18 and older who reside in Washington or Oregon. However, for the pretest requested in this ICR, we do not expect to extrapolate any conclusions from the pretest to the sample frame. This pretest is for methods testing only.

Sample size and response rates

For the first wave of the pretest, Knowledge Networks (KN) will send the survey to a sample of 2,188 panel members in Oregon and Washington. We anticipate that KN will achieve approximately a 60% participation rate, yielding approximately 1,313 completed surveys.

The sample size for the pilot study must be sufficiently large to allow us to address the following questions:

  1. Can statistical efficiency be improved by using the independent choice question format while providing willingness-to-pay (WTP) estimates consistent with the traditional format?

  2. Does the market for restoration of ecosystem services on the Elwha River extend beyond Western Washington, i.e., into Eastern Washington and Oregon? How do responses differ among these regions?

  3. Are the bid amounts appropriate for this policy question? Is the maximum bid sufficiently high to return relatively few positive votes, and are the midpoints of the range receiving the expected number of votes?

The pilot study will be stratified in two dimensions, geography and choice question format, leaving approximately 394 responses per choice question format in Washington and 130 responses per choice question format in Oregon, per the sample allocation described in Section B2. The traditional choice question format has four versions to cover all combinations of a main-effects orthogonal design, with each respondent answering three choice questions. The survey versions are structured so that no respondent receives the same scenario in consecutive questions, as illustrated in the sketch below.
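To make the version-assembly constraint concrete, the following sketch enumerates three-question sequences over placeholder scenario labels and keeps only those with no scenario repeated back to back. The labels and sequence length are illustrative; the actual versions come from the main-effects orthogonal design described above.

```python
# A minimal sketch of the "no back-to-back scenarios" constraint.
# Scenario labels are hypothetical placeholders, not the study's scenarios.
from itertools import product

scenarios = ["A", "B", "C", "D"]

# Keep only 3-question sequences in which no scenario repeats consecutively.
versions = [seq for seq in product(scenarios, repeat=3)
            if all(a != b for a, b in zip(seq, seq[1:]))]

print(len(versions))  # 4 * 3 * 3 = 36 admissible sequences
```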

From the pilot study, we want to estimate WTP for the salmon program and the forest program, and to identify whether geography or choice question format yields different WTP estimates. We conducted a power analysis assuming a simple conditional logit model with no covariates other than the levels of salmon restoration and forest restoration and the cost. The following table summarizes the expected power to detect differences in WTP values between the two choice question formats over a range of sample sizes, at an alpha of 5%. Because we are interested in generating robust WTP estimates at the state level, the results presented in this table should be compared to the sample allocation within each state.

Table 1. Predicted power of detecting a difference in WTP estimates between choice question formats, by sample size.

Number of responses for each choice format | Power to estimate WTP for forest restoration | Power to estimate WTP for salmon restoration
20 | 45% | 69%
40 | 55% | 74%
60 | 61% | 77%
80 | 65% | 79%
100 | 68% | 81%
120 | 70% | 82%
140 | 72% | 83%
160 | 74% | 84%
180 | 75% | 85%
200 | 77% | 86%
220 | 78% | 86%
240 | 79% | 87%
260 | 80% | 88%
280 | 81% | 88%
300 | 81% | 89%
320 | 82% | 89%
340 | 82% | 90%
360 | 83% | 90%
380 | 84% | 90%
400 | 84% | 91%

We expect to achieve 80% power at approximately 260 responses per choice format for forest restoration and approximately 90 responses per choice format for salmon restoration. This indicates that we will likely be able to compare salmon WTP estimates for each choice format independently within each state, but will have to pool responses across states to compare forest WTP estimates.

Given these power analysis results, plus our interest in evaluating the effectiveness of our bid design within geographic strata and choice question format, we anticipate that the 1,313 responses for the pilot will be sufficient. A smaller sample size would reduce our ability to robustly compare WTP estimates across the different strata.
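To make the power analysis concrete, the following sketch simulates a simplified version of the calculation described above: two-alternative choice sets (in which case the conditional logit reduces to a binary logit on attribute differences), a single non-cost attribute, and placeholder coefficients, attribute levels, and WTP gap. It is a sketch under stated assumptions, not the study's actual power code.

```python
# A minimal simulation sketch of the power analysis described above.
# All coefficient values and attribute levels are illustrative placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

def wtp_estimate(n_respondents, beta_attr, beta_cost, n_choices=3):
    """Simulate choices, fit a logit on attribute differences, return (WTP, SE)."""
    n = n_respondents * n_choices
    d_attr = rng.choice([-1.0, 0.0, 1.0], size=n)              # attribute-level difference
    d_cost = rng.choice([-100.0, -50.0, 50.0, 100.0], size=n)  # cost difference ($/year)
    # Alternative A is chosen when its utility advantage exceeds logistic noise.
    y = (beta_attr * d_attr + beta_cost * d_cost + rng.logistic(size=n) > 0).astype(int)
    fit = sm.Logit(y, np.column_stack([d_attr, d_cost])).fit(disp=False)
    b_a, b_c = fit.params
    wtp = -b_a / b_c
    grad = np.array([-1.0 / b_c, b_a / b_c**2])                # delta-method gradient
    return wtp, float(np.sqrt(grad @ fit.cov_params() @ grad))

def power(n_per_format, reps=200):
    """Share of replications where the formats' WTP estimates differ at alpha = 0.05."""
    hits = 0
    for _ in range(reps):
        w1, se1 = wtp_estimate(n_per_format, beta_attr=0.6, beta_cost=-0.01)
        w2, se2 = wtp_estimate(n_per_format, beta_attr=0.8, beta_cost=-0.01)  # true gap: $20
        hits += abs(w1 - w2) / np.hypot(se1, se2) > 1.96
    return hits / reps

print(power(260))
```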

For the second wave of the pretest, approximately 1,042 survey instruments will be mailed to households in Washington and Oregon to obtain 312 completed surveys.

This number is sufficient for refining, if necessary, the experimental design for the final survey.

A table summarizing the sampling universe for both waves of the pretest is shown below.

Table 2. Pretest sample size and response rate

Mode of data collection | Sample size | Completed surveys | Overall response rate
Internet (wave 1) | 2,188 | 1,313 | 20%a
Mail (wave 2) | 1,042 | 312 | 30%

a. The completion rate for the Internet wave is 60%, but the overall response rate is approximately 20%. The lower overall response rate reflects the steps involved in the initial recruitment of participants into the panel (e.g., phone calls, administration of a screener), during which potential panelists are lost before becoming part of the sampled group.



2. Describe the procedures for the collection, including: the statistical methodology for stratification and sample selection; the estimation procedure; the degree of accuracy needed for the purpose described in the justification; any unusual problems requiring specialized sampling procedures; and any use of periodic (less frequent than annual) data collection cycles to reduce burden.

Sample frame and sample selection

The Web sample frame is Knowledge Networks' (KN's) established Web panel, KnowledgePanel®. The panel comprises households that did and did not have Internet access prior to panel participation; KN supplies the non-Internet households with an Internet appliance and an Internet connection. It also includes households with listed and unlisted telephone numbers, cell-phone-only households, and no-phone households. The panel does not accept self-selected volunteers.

Originally, the panel's probability-based recruitment was based exclusively on a national random digit dialing (RDD) frame. In April 2009, KN added an address-based sample (ABS) frame to supplement the RDD frame in response to the growing number of cell-phone-only households (CPOHHs), which fall outside the RDD frame. In 2010, KnowledgePanel® transitioned completely to ABS-sourced panel recruitment, ending RDD and telephone recruitment, with the exception of some targeted Spanish-language telephone recruitment to support KnowledgePanel Latino.

ABS involves probability-based sampling of addresses from the U.S. Postal Service’s (USPS’s) Computerized Delivery Sequence File (CDSF). Randomly sampled addresses are invited to join KnowledgePanel® through an initial mailing (with a $2 non-contingent incentive included), followed a week later by a postcard reminder and three weeks later by a final letter mailed to all non-responders. Telephone follow-up calls are made to those non-responders for whom a telephone number can be matched to their address.

The key advantage of the ABS sample frame is that it allows sampling of virtually all U.S. households. In sampling nomenclature, an estimated 97% of households are “covered” in this frame. Because the frame is address based, household telephone status is not a limiting factor, as residents can be contacted through the mail. KN’s ABS experience has also revealed some advantages beyond the expected improvement in recruiting young adults and CPOHHs. There is also a marked improvement in sample representativeness of minority racial and ethnic groups, as well as improved recruitment of households with less education and low incomes.

Currently, about 55% of KnowledgePanel® members are sourced from the more recent ABS recruitment samples; the balance comprises the more tenured, RDD-sourced members. The methodologies used to recruit this panel have been shown to meet the same or similar quality standards as those established by prominent surveys conducted for Federal Government agencies that also use ABS, RDD, or area probability methods.

Response rates for the different stages of KnowledgePanel® participation are summarized in Table 3. Any measure of recruitment rate, profile rate, survey completion rate, or survey breakoff rate is specific to the study being evaluated. Thus, the values listed below are representative of a typical study implemented by KN; the final response rates for this study will vary.

Table 3. Response rates for a typical KnowledgePanel® study

Stage | Rate
Mean recruitment rate | 0.144321
Weighted profile rate | 0.657956
Survey completion rate | 0.618103
Survey breakoff rate (out of assigned) | 0.052155
Cumulative response rate | 0.058693
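As a quick consistency check, the cumulative rate in Table 3 is approximately the product of the first three stage-level rates, which suggests (our assumption about how KN composes these figures) that the breakoff rate is already netted out of the completion rate:

```python
# Consistency check on Table 3, assuming the cumulative response rate is
# the product of the recruitment, profile, and completion rates.
recruitment = 0.144321
profile = 0.657956
completion = 0.618103
print(recruitment * profile * completion)  # ~0.0587, matching the reported 0.058693
```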

For the mail portion of the pretest, a mail survey firm will draw a probability sample of adult residents of Washington and Oregon. Given the nature of the survey (i.e., choice questions that respondents must review visually), the survey will be self-administered by mail. The survey administration firm will use ABS based on the USPS Delivery Sequence File.

This population will be accessed through a sample of residential addresses, with one questionnaire completed per address. The questionnaire will elicit an enumeration of the adult members of the household for later adult-population weighting and estimation. The frame from which the sample will be drawn is the CDSF created and maintained by the USPS.

The ABS for this study will be stratified by state, with 75% of the sample randomly selected from Washington and 25% randomly selected from Oregon. The 75% is a mild oversample of Washington, which has approximately 63% of the total number of households of the two states combined.
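Because Washington is mildly oversampled relative to its household share, stratum base weights will differ by state. The sketch below illustrates how base weights would follow from the allocation, using placeholder household counts chosen to be roughly consistent with the 63% Washington share noted above; these are not Census figures.

```python
# A minimal sketch of stratum base weights implied by the 75/25 allocation.
# Household counts are rough placeholders, not Census figures.
def base_weights(alloc, households):
    """Base weight per stratum = stratum households / sampled addresses."""
    return {s: households[s] / alloc[s] for s in alloc}

total = 1042  # wave 2 mail sample (Table 2)
alloc = {"WA": round(total * 0.75), "OR": total - round(total * 0.75)}
hh = {"WA": 2_600_000, "OR": 1_500_000}  # placeholders; WA share ~63%
print(base_weights(alloc, hh))  # Oregon addresses carry larger base weights
```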

The mailing protocol for the mail portion of the study follows the researched and published Tailored Design Method (Dillman, 2009). A pre-notification letter will be mailed to all households in the sample about one week before the survey packet is mailed. The first mailing of the survey packet will include a cover letter, survey booklet, $2 non-contingent incentive, and postage-paid return envelope. One week after this mailing, a reminder postcard will be mailed to all households to encourage a speedy reply while the original questionnaire is still fresh in residents' minds. Approximately three weeks after the first mailing of the survey packet, a second questionnaire will be mailed to all households that have not responded (without an additional $2 incentive). Finally, about two weeks after this second questionnaire mailing, telephone reminder calls will be made to non-responding households in the sample for which a landline telephone number can be matched to the sample address. About 48–55% of the addresses in the sample may be successfully matched with a telephone number, although this may vary by state and degree of urbanicity. We propose that all responding households receive a $10 contingent post-incentive payment.

Sample letters and reminder postcards are included in the attachments.

The survey will not be conducted on an annual basis. This ICR requests that the pretest survey be administered only once in each of the two waves.

3. Describe the methods used to maximize response rates and to deal with non-response. The accuracy and reliability of the information collected must be shown to be adequate for the intended uses. For collections based on sampling, a special justification must be provided if they will not yield “reliable” data that can be generalized to the universe studied.

Maximizing response rates

The first step in achieving a high response rate is to develop an appealing questionnaire that is easy for respondents to complete. We spent significant effort on developing an effective survey instrument during the qualitative research phase. We hired experts on economic survey design and stated preference techniques to assist in the design and testing of this survey. The survey instrument benefited from input on earlier versions from several focus groups and cognitive interviews, and from peer review by experts in survey design and non-market valuation, as well as scientists who study the Elwha River. In the qualitative research phase, the information presented was tested to ensure that key concepts and terms were understood, figures and graphics were developed by professional graphic artists and tested for proper comprehension and appearance, and key economic and design issues were evaluated.

For both waves of the pretest, we will employ practices that have been used successfully on other projects requiring OMB approval:

  1. Use of the federal agency name in the email invitation.

  2. Use of both survey-specific and non-survey-specific incentives (as described in response to Part A, Question 9) to improve response rates.

The results of this pilot using the KnowledgePanel® will be used solely to design and inform the development and structure of the final study. The pilot results will not be used to infer the general population's preferences.


Non-respondents

For the purposes of the pretest, we do not plan any non-response follow-up study. However, for the final administration of the survey, we will plan a separate non-response follow-up study.

4. Describe any tests of procedures or methods to be undertaken. Tests are encouraged as effective means to refine collections, but if ten or more test respondents are involved, OMB must give prior approval.

In the first wave of the pretest, the Team plans to test two choice question formats. The tests include: (1) a comparison of the point estimates and standard errors of the cost coefficient from the WTP model; (2) a tabulation and statistical comparison of questions regarding respondents' perceived consequentiality of the choice questions; (3) a tabulation and statistical comparison of the time respondents took to complete the choice questions; and (4) a tabulation and statistical comparison of non-responses to the choice questions.
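For the first of these comparisons, one standard approach (a sketch on our part, not a method prescribed in the study plan) is a Wald-style z-test on the cost coefficients estimated separately under each format; the estimates and standard errors below are placeholders.

```python
# A sketch of a Wald-style z-test comparing the cost coefficients
# estimated separately under the two choice question formats.
# The coefficient estimates and standard errors are placeholders.
import math

def coef_diff_z(b1, se1, b2, se2):
    """z statistic for H0: equal cost coefficients across formats."""
    return (b1 - b2) / math.hypot(se1, se2)

z = coef_diff_z(-0.012, 0.003, -0.009, 0.0025)
print(abs(z) > 1.96)  # reject equality at alpha = 0.05?
```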

Findings from the summary statistics, WTP model, and choice question format comparisons will then be incorporated in the supporting statement for the full survey, which will be submitted to OMB for final approval. This supporting statement will include details on the methods of analysis, as well as plans for tabulation and publication of project results.

5. Provide the names and telephone numbers of individuals consulted on the statistical aspects of the design, and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.

Stratus Consulting Inc. of Boulder, Colorado, was selected by NOAA to conduct the study through a competitive contract procedure. Dr. Megan Lawson of Stratus Consulting serves as the Project Manager, and Mr. David Chapman of Stratus Consulting serves as Project Technical Advisor. Both Dr. Lawson and Mr. Chapman have extensive experience in applied environmental and natural resource economics involving the use of statistical methods. Contact information follows:

Dr. Megan Lawson: 406-219-3633

Mr. David Chapman: 303-381-8289

Stratus Consulting hired Professor Emeritus Richard Bishop of the University of Wisconsin, Department of Agricultural and Applied Economics, to serve as Principal Investigator. Professor Bishop is a well-known environmental and natural resource economist and has conducted many applied projects involving the use of statistical methods. Contact information follows:

Professor Richard Bishop: 608-238-7473

Stratus Consulting hired Dr. Barbara Kanninen to advise on experimental design issues. Dr. Kanninen is an expert in statistical methods for stated preference studies. Contact information follows:

Dr. Barbara Kanninen: 703-536-6949

The rest of the research team includes Dr. Anthony Dvarskas and Dr. Peter Edwards of NOAA.

Peer review team:

Dr. Richard Carson, University of California at San Diego

Dr. Adam Domanski, IM Systems Group

In addition, the team has relied extensively on federal researchers to develop foundational information for the survey and to check specific facts about the restoration actions:

Dr. George Pess
Supervisory Research Fisheries Biologist
NOAA Fisheries

Dr. Kurt Jenkins
Research Wildlife Biologist
U.S. Geological Survey Forest and Rangeland Ecosystem Science Center

Bibliography

Adamowicz, W., D. Dupont, and A. Krupnick. 2004. The value of good quality drinking water to Canadians and the role of risk perceptions: A preliminary analysis. Journal of Toxicology and Environmental Health 67:1825–1844.

Adamowicz, W., J. Louviere, and M. Williams. 1994. Combining revealed and stated preference methods for valuing environmental amenities. Journal of Environmental Economics and Management 26:271–292.

Adamowicz, W., P. Boxall, M. Williams, and J. Louviere. 1998a. Stated preference approaches for measuring passive use values: Choice experiments and contingent valuation. American Journal of Agricultural Economics 80:64–75.

Adamowicz, W.L., P. Boxall, J. Louviere, J. Swait, and M. Williams. 1998b. Stated preference methods for valuing environmental amenities. In Valuing Environmental Preferences: Theory and Practice of the Contingent Valuation Method in the US, EC and Developing Countries, I. Bateman and K. Willis (eds.). Oxford University Press, London, UK, pp. 460–479.

Baker, L., T.H. Wagner, S. Singer, and M.K. Bundorf. 2003a. Use of the Internet and email for health care information: results from a national survey. Journal of the American Medical Association 289:2400–2406.

Baker, L.C., M.K. Bundorf, S. Singer, and T.H. Wagner. 2003b. Validity of the survey of health and the Internet, and Knowledge Network’s panel and sampling. Stanford, CA, Stanford University. Available: http://www.knowledgenetworks.com/ganp/reviewer-info.html. Accessed March 17, 2003.

Batsell, R.R. and J.J. Louviere. 1991. Experimental analysis of choice. Marketing Letters 2:199–214.

Bausell, R.B. and Y. Li. 2002. Power Analysis for Experimental Research. Cambridge University Press, Cambridge, UK.

Beggs, S.D., N.S. Cardell, and J. Hausman. 1981. Assessing the potential demand for electric cars. Journal of Econometrics 4:87–129.

Bishop, R.C., K.J. Boyle, M.P. Welsh, R.M. Baumgartner, and P.R. Rathbun. 1987. Glen Canyon Dam Releases and Downstream Recreation: An Analysis of User Preferences and Economic Values. Report prepared for Glen Canyon Environmental Studies, U.S. Bureau of Reclamation, by HBRS, Madison, WI.

Breffle, W.S. and R.D. Rowe. 2002. Comparing choice question formats for evaluating natural resource tradeoffs. Land Economics 78(2).

Breffle, W.S., E.R. Morey, R.D. Rowe, and D.M. Waldman. 2005. Combining stated-choice questions with observed behavior to value NRDA compensable damages: A case study of recreational fishing in Green Bay and the Lower Fox River. In The Handbook of Contingent Valuation, D. Bjornstad, J. Kahn, and A. Alberini (eds.). Edward Elgar Publishing, Northampton, MA.

Cameron, T. and J.R. DeShazo. 2005. Sample Selection in a Major Consumer Panel: Assessment and Correction Using Year 2000 Census Tract Characteristics and County-level Presidential Voting Patterns (Draft).

Cameron, T., W.D. Shaw, and S. Ragland. 1999. Nonresponse bias in mail survey data: Salience vs. endogenous survey complexity. In Valuing the Environment Using Recreation Demand Models, J.A. Herriges and C.L. Kling (eds.). Edward Elgar Publishing, Northampton, MA, pp. 217–251.

Cattin, P. and D.R. Wittink. 1982. Commercial use of conjoint analysis: A survey. Journal of Marketing 46:44–53.

Elrod, T., J.J. Louviere, and K.S. Davey. 1992. An empirical comparison of ratings-based and choice-based conjoint models. Journal of Marketing Research 30:368–377.

Gan, C. and E.J. Luzar. 1993. A conjoint analysis of waterfowl hunting in Louisiana. Journal of Agricultural and Applied Economics 25(2):36–45.

Green, P.E. and V. Srinivasan. 1990. Conjoint analysis in marketing: New developments with implications for research and practice. Journal of Marketing 54(4):3–19.

Heckman, J. 1979. Sample selection bias as a specification error. Econometrica 47(1):153–161.

Hensher, D.A. 1994. Stated preference analysis of travel choices: The state of practice. Transportation 21:107–133.

Holmes, T.P. and W.L. Adamowicz. 2003. Attribute-based methods. In A Primer on Nonmarket Valuation, P.A. Champ, K.J. Boyle, and T.C. Brown (eds.). Kluwer Academic Publishers, Dordrecht, pp. 171–220.

Huber, J., W.K. Viscusi, and J. Bell. 2004. The Value of Regional Water Improvements: Further Evidence. Presented at the Valuation of Ecological Benefits Conference, U.S. Environmental Protection Agency. October.

Johnson, F.R. and W.H. Desvousges. 1997. Estimating stated preferences with rated-pair data: Environmental, health, and employment effects of energy programs. Journal of Environmental Economics and Management 34:79–99.

Johnson, F.R., W.H. Desvousges, E.E. Fries, and L.L. Wood. 1995. Conjoint Analysis of Individual and Aggregate Environmental Preferences. Triangle Economic Research Technical Working Paper No. T‑9502, Cary, NC.

Kanninen, B. (ed.). 2007. Valuing Environmental Amenities Using Stated Choice Studies. 1st Edition. Springer, Dordrecht, The Netherlands.

Kline, J. and D. Wichelns. 1996. Measuring public preferences for the environmental amenities provided by farmland. European Review of Agricultural Economics 23:421–436.

Krupnick, A. and M.L. Cropper. 1992. The effects of information on health risk valuations. Journal of Risk and Uncertainty 5:29–48.

Lareau, T.J. and D.A. Rae. 1989. Valuing WTP for diesel odor reductions: An application of contingent ranking technique. Southern Economic Journal 55(3):728–742.

Layton, D. and G. Brown. 1998. Heterogeneous Preferences Regarding Global Climate Change. Presented at NOAA Applications of Stated Preference Methods to Resource Compensation Workshop, Washington, DC.

Loomis, J.B. 1996a. How large is the extent of the market for public goods: Evidence from a nationwide contingent valuation survey. Applied Economics 28:779–782.

Loomis, J.B. 1996b. Measuring the economic benefits of removing dams and restoring the Elwha River: Results of a contingent valuation survey. Water Resources Research 32(2):441–447.

Louviere, J.J. 1988. Conjoint analysis modeling of stated preferences. Journal of Transport Economics and Policy 10:93–119.

Louviere, J.J. 1992. Experimental choice analysis: Introduction and overview. Journal of Business Research 24:89–95.

Louviere, J.J. 1994. Conjoint Analysis. In Advances in Marketing Research, R. Bagozzi (ed.). Blackwell Publishers, Cambridge, MA.

Louviere, J.J. and G. Woodward. 1983. Design and analysis of simulated consumer choice or allocation experiments: An approach based on aggregated data. Journal of Marketing Research 20:350–367.

Louviere, J.J., D.A. Hensher, and J. Swait. 2000. Stated Choice Methods: Analysis and Application. Cambridge University Press, Cambridge, UK.

Mackenzie, J. 1993. A comparison of contingent preference models. American Journal of Agricultural Economics 75:593–603.

Magat, W.A., W.K. Viscusi, and J. Huber. 1988. Paired comparison and contingent valuation approaches to morbidity risk valuation. Journal of Environmental Economics and Management 15:395–411.

Martinez, E. and B. Babbitt. 1996. Record of Decision, Operation of Glen Canyon Dam. Final Environmental Impact Statement, Appendix G, October. Available: http://www.usbr.gov/uc/rm/amp/pdfs/sp_appndxG_ROD.pdf. Accessed October 31, 2009.

Mathews, K.E., W.H. Desvousges, F.R. Johnson, and M.C. Ruby. 1997. Using Economic Models to Inform Restoration Decisions: The Lavaca Bay, Texas Experience. TER technical report prepared for presentation at the Conference on Restoration of Lost Human Uses of the Environment, Washington, DC. May 7–8.

Morey, E.R., T. Buchanan, and D.M. Waldman. 1999a. Happy (hypothetical) Trails to You: The Impact of Trail Characteristics and Access Fees on a Mountain Biker’s Trail Selection and Consumer’s Surplus. Working paper, University of Colorado, Boulder.

Morey, E.R., K.G. Rossmann, L. Chestnut, and S. Ragland. 1999b. Estimating E[WTP] for reducing acid deposition injuries to cultural resources: Using choice experiments in a group setting to estimate passive-use values. Chapter 10 in Valuing Cultural Heritage: Applying Environmental Valuation Techniques to Historic Buildings, Monuments and Artifacts, S. Narvud and R.C. Ready (eds.). Edward Elgar Publishing, Cheltenham, UK and Northampton, MA.

Morikawa T., M. Ben-Akiva, and D. McFadden. 1990. Incorporating Psychometric Data in Econometric Travel Demand Models. Prepared for the Banff Invitational Symposium on Consumer Decision Making and Choice Behavior.

National Oceanic and Atmospheric Administration (NOAA). 2008. Elwha River Fish Restoration Plan Developed Pursuant to the Elwha River Ecosystem and Fisheries Restoration Act, Public Law 102-495. NOAA Technical Memorandum NMFS-NWFSC-90. Available: http://www.nwfsc.noaa.gov/assets/25/6760_06202008_151914_ElwhaPlanTM90Final.pdf

Opaluch, J.J., S.K. Swallow, T. Weaver, C.W. Wessells, and D. Wichelns. 1993. Evaluating impacts from noxious facilities: Including public preferences in current siting mechanisms. Journal of Environmental Economics and Management 24:41–59.

Orme, B. 1998. Sample Size Issues for Conjoint Analysis Studies. Sawtooth Software Research Paper Series, Sawtooth Software, Inc.

Rae, D.A. 1983. The value to visitors of improving visibility at Mesa Verde and Great Smoky National Parks. In Managing Air Quality and Scenic Resources at National Parks and Wilderness Areas, R.D. Rowe and L.G. Chestnut (eds.). Westview Press, Boulder, CO, pp. 217–234.

Roe, B., K.J. Boyle, and M.F. Teisl. 1996. Using conjoint analysis to derive estimates of compensating variation. Journal of Environmental Economics and Management 31:145–150.

Ruby, M.C., F.R. Johnson, and K.E. Mathews. 1998. Just Say No: Assessing Opt-Out Options in a Discrete-Choice Stated-Preference Survey of Anglers. TER Technical Working Paper No. T‑9801. Triangle Economic Research, Durham, NC.

Singer, E. 2002. The use of incentives to reduce nonresponse in household surveys. In Survey Nonresponse, R.M. Groves, D.A. Dillman, J.L. Eltinge, and R.J.A. Little (eds.). Wiley, New York, pp. 163–178.

Swait, J., W. Adamowicz, and J. Louviere. 1998. Attribute-Based Stated Choice Methods for Resource Compensation: An Application to Oil Spill Damage Assessment. Prepared for Presentation at the Natural Resources Trustee Workshop on Applications of Stated Preference Methods to Resource Compensation, Washington, DC. June 1–2.

Viscusi, W.K., W.A. Magat, and J. Huber. 1991. Pricing environmental health risks: Survey assessments of risk-risk and risk-dollar trade-offs for chronic bronchitis. Journal of Environmental Economics and Management 21:32–51.

Welsh, M.P., R.C. Bishop, M.L. Phillips, and R.M. Baumgartner. 1997. Glen Canyon Dam, Colorado River Storage Project, Arizona: Nonuse Values Study Final Report. PB98-105406. National Technical Information Service, Springfield, VA.

Wittink, D.R. and P. Cattin. 1989. Commercial use of conjoint analysis: An update. Journal of Marketing 53:91–96.
