The Supporting Statement for OMB 0596-0201
Role of Communities in Stewardship Contracting Projects.
Terms of Clearance
BLM may participate in this survey provided that it does not undertake the NAU (Northern Arizona University) survey, which is duplicative of this effort.
B. Collections of Information Employing Statistical Methods
Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.
Since the number of stewardship contracting projects changes each year, the potential respondent universe will vary slightly; however, the manner of selecting respondents to be surveyed will not. The Bureau of Land Management and Forest Service estimate that over the next three years there will be no more than 550 stewardship contracting projects in a given year.
Strata used in stratified random sampling process

| Total active projects | Northeast/Lake States | Southeast | Northern Rockies | Rocky Mountain/Southwest | Pacific Northwest |
| FS | random sample¹ | random sample | random sample | random sample | random sample |
| BLM | no projects expected | no projects expected | random sample | random sample | random sample |

¹ Each year the number of projects in each stratum will be determined by the Forest Service and BLM. The Pinchot Institute and subcontractors will then interview a random sample of enough projects in each stratum to provide an effective and efficient statistical sample.
The number of the footnote in the table does not match the footnote at the bottom of the page. Also, if you need the footnote, it should either go with all the cells in the table that currently say “random sample” or it should go with the title.
This project will utilize stratified random sampling based on the following criteria. Each year, the Forest Service and BLM will provide a list of stewardship contracting projects, the State in which each project is located, and the number (what number is this? do you mean name?), email, and phone number of the federal project manager. From this list of projects, the Pinchot Institute and its subcontractors will utilize a process of stratified random sampling. Projects will be stratified by the managing agency (FS or BLM) and by geographic region. Five regions will be used: Northeast/Lake States (CT, DE, IA, IL, IN, MA, ME, MD, MI, MN, MO, NJ, NH, NY, OH, PA, RI, VT, WI, WV), Southeast (AL, FL, GA, KS, KY, LA, MS, NC, SC, TN, VA), Northern Rockies (ID, MT, ND, SD, WY), Southwest (AZ, CO, KS, NE, NM, NV, OK, TX, UT), and Pacific Northwest (AK, CA, HI, OR, WA). (KS is listed twice, in the SE and SW regions, and AR is never listed.) Then a statistically relevant number of projects will be randomly selected from each stratum. (This is very vague. Do you need to define that more precisely for OMB?)

Maureen McDonough from Michigan State University, who designed the sampling process, estimates that no more than 25 percent of the projects in each stratum will be selected for the survey each year. However, if there are fewer than 12 projects in a given stratum, the sampling rate may exceed 25 percent; in any case, the total number of interviews will not exceed 350 per year. (The rest of this paragraph is talking about how many projects will be selected, but this sentence is about the number of interviews. Those are different ideas, so I would not mix them together.)
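A minimal sketch of the selection procedure described above, written in Python, is shown below. The 25 percent sampling rate and the 12-project threshold come from the text; the data structures, the `min_per_stratum` floor, and the function names are illustrative assumptions, not part of the approved methodology.

```python
import math
import random

# Region definitions as given in the text (note the reviewer's comment that
# KS appears in two regions and AR in none).
REGIONS = {
    "Northeast/Lake States": {"CT", "DE", "IA", "IL", "IN", "MA", "ME", "MD", "MI", "MN",
                              "MO", "NJ", "NH", "NY", "OH", "PA", "RI", "VT", "WI", "WV"},
    "Southeast": {"AL", "FL", "GA", "KS", "KY", "LA", "MS", "NC", "SC", "TN", "VA"},
    "Northern Rockies": {"ID", "MT", "ND", "SD", "WY"},
    "Southwest": {"AZ", "CO", "KS", "NE", "NM", "NV", "OK", "TX", "UT"},
    "Pacific Northwest": {"AK", "CA", "HI", "OR", "WA"},
}

def region_of(state):
    """Map a project's State to its geographic region (second stratification dimension)."""
    for region, states in REGIONS.items():
        if state in states:
            return region
    raise ValueError(f"State {state} is not assigned to a region")

def stratify(projects):
    """Group projects into strata keyed by (agency, region); each project is a
    dict with 'agency' and 'state' keys (an assumed, illustrative structure)."""
    strata = {}
    for p in projects:
        key = (p["agency"], region_of(p["state"]))
        strata.setdefault(key, []).append(p)
    return strata

def select_sample(strata, rate=0.25, min_per_stratum=3):
    """Randomly select roughly `rate` of each stratum. Small strata (fewer than
    12 projects in the text) may be sampled at a higher rate; `min_per_stratum`
    is an illustrative floor only."""
    sample = []
    for projects in strata.values():
        n = min(len(projects), max(min_per_stratum, math.ceil(rate * len(projects))))
        sample.extend(random.sample(projects, n))
    return sample
```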
For each project entered into the sampling pool, three people shall be interviewed: the agency project manager and two participants external to the agencies (ideally one community participant and one contractor involved in implementation of the project). When conducting the phone survey with the agency project manager for a randomly selected project, the Pinchot Institute and its subcontractors will ask for a list of community members and contractors involved in the project. The Pinchot Institute and its subcontractors will then randomly select two external participants to survey from the federal project manager’s list. (Will this be a simple random sample or stratified based on type, community participant vs. contractor?) The analysis will take place on a project basis, not a respondent basis. (How does that work? You should provide more detail.) Because the sampling pool consists entirely of individuals who have voluntarily participated in a stewardship contracting project, non-response of selected participants is expected to be low; if a selected individual does not wish to participate in the survey, the interviewer will randomly select another participant from the project’s list. Therefore, a 100 percent response rate is expected on a project level.
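As a hedged complement to the stratum-level sketch above, the following Python sketch shows how respondents for one sampled project might be drawn, including replacement of a person who declines. The helper `agrees_to_interview` is a hypothetical placeholder for the actual phone contact and is not part of the survey instrument.

```python
import random

def agrees_to_interview(person):
    """Hypothetical placeholder: contact the person by phone and return
    True if they agree to be surveyed, False otherwise."""
    raise NotImplementedError

def select_respondents(project_manager, external_participants, n_external=2):
    """Interview the agency project manager plus two external participants
    chosen at random from the manager's list; if a selected person declines,
    draw a replacement from those remaining on the list."""
    remaining = list(external_participants)
    random.shuffle(remaining)
    selected = [project_manager]
    while remaining and len(selected) < 1 + n_external:
        candidate = remaining.pop()
        if agrees_to_interview(candidate):
            selected.append(candidate)
    return selected
```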
Since the docket specifically requests population and sample counts, I would encourage you to provide that in tabular format. I realize you cannot predict exact sample sizes for all 3 years, but it seems like you could make a good estimate for next year at least based on currently authorized projects or projects in the development stages. Providing this breakdown by strata would help address some of their key questions.
It would be relevant to include the average number of external participants you expect most projects to have. I realize this is a highly variable number, but it’s helpful to know how big that sample pool is.
Do you have any mechanisms in place to assure you get a complete list of external participants? I don’t know much about these projects, but would there be a tendency for the project manager to omit the names of external participants who were less happy or cooperative, thus biasing your results?
Where are the response rates from the previous years’ data collections? I guess you could infer that they were 100%, but you should spell it out.
Describe the procedures for the collection of information including:
Statistical methodology for stratification and sample selection,
Estimation procedure,
Degree of accuracy needed for the purpose described in the justification,
Unusual problems requiring specialized sampling procedures, and
Any use of periodic (less frequent than annual) data collection cycles to reduce burden.
The stratification is twofold. Projects will be stratified first by agency, then by geographic region. Five regions will be used: Northeast/Lake States (CT, DE, IA, IL, IN, MA, ME, MD, MI, MN, MO, NJ, NH, NY, OH, PA, RI, VT, WI, WV), Southeast (AL, FL, GA, KS, KY, LA, MS, NC, SC, TN, VA), Northern Rockies (ID, MT, ND, SD, WY), Southwest (AZ, CO, KS, NE, NM, NV, OK, TX, UT), and Pacific Northwest (AK, CA, HI, OR, WA). (KS is listed twice, in the SE and SW regions, and AR is never listed.) The Pinchot Institute and subcontractors will then interview a random sample of enough projects in each stratum to provide an effective and efficient statistical sample. (This is very vague. Do you need to define that more precisely for OMB?) As information is collected during the interview process, it will be entered into a uniform report format and sent to Michigan State University for analysis. Following receipt of the data, MSU researchers will code questions and responses for entry into software programs used for qualitative and quantitative analyses. Since the information is needed to write an annual report to Congress, data collection cycles cannot be less frequent than annual.
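The document does not specify the fields of the uniform report format. Purely as an illustration of the kind of record implied by the paragraph above, one interview might be captured as follows; every field name here is an assumption rather than part of the actual instrument.

```python
from dataclasses import dataclass, field

@dataclass
class InterviewRecord:
    """Illustrative shape for one interview entered in a uniform report format;
    the actual fields used by the Pinchot Institute and MSU are not specified
    in this document."""
    project_id: str
    agency: str            # "FS" or "BLM"
    region: str            # one of the five geographic strata
    respondent_role: str   # "project manager", "community participant", or "contractor"
    responses: dict = field(default_factory=dict)  # question code -> coded response
```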
One of the questions specifically asks about the estimation procedure, but you really do not cover that. Earlier you said the analysis will take place at a project level, not an interview level, but how do you get to that aggregation level with your data? Is your final report to Congress at the project level? If not, how do you aggregate it further? What type of statistics are you producing: ratios, expanded totals, expanded counts, etc.? Are you using sampling weights?
You do not address degree of accuracy needed or unusual problems requiring special sampling.
Would you collapse strata if there are insufficient projects?
Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.
The phone survey method will be used in order to secure a high response rate. The phone interview can be conducted at a time convenient for the participant. In addition, the potential respondent universe includes only individuals who have chosen to be involved in some manner in a stewardship contracting project; therefore they will be familiar with the information in the survey. (I think you ought to mention your procedure to contact a new, randomly selected person if your initial person can’t or won’t do the survey.)
Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.
No tests or procedures will be undertaken. An OMB survey number was awarded, and the survey has been conducted for three years. Minor format and wording changes are being submitted for approval (as noted under Section A(2)(g)).
What are the repercussions of changing so many questions from open-ended to closed-ended responses when you already have 3 years of previous data? That could represent a break in the data series. It may be worthwhile, but has it been fully considered?
Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.
Maureen McDonough, Michigan State University, (517) 432-2293, [email protected], designed the statistical aspects of the survey. The Pinchot Institute for Conservation and its subcontractors will be collecting and analyzing the data. The project manager at the Pinchot Institute is Brian Kittler, [email protected], 202-797-6585.