
SUPPORTING STATEMENT

NOAA BAY WATERSHED EDUCATION AND TRAINING (B-WET) PROGRAM NATIONAL EVALUATION SYSTEM

OMB CONTROL NO. 0648-0658



B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS


1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g. establishments, State and local governmental units, households, or persons) in the universe and the corresponding sample are to be provided in tabular form. The tabulation must also include expected response rates for the collection as a whole. If the collection has been conducted before, provide the actual response rate achieved.


Censuses will be conducted in light of the relatively small populations involved (Table 2) and the sophisticated analyses planned by an external evaluator. More specifically, models will be tested with the Mplus statistical modeling program using multilevel structural equation modeling (SEM), with the multilevel structure accounting for teachers nested in professional development programs and for repeated measures from the same individuals, to explore the direct and indirect relationships between teachers’ practices and perceived student outcomes based on their MWEE professional development experiences and backgrounds. SEM allows for exploring direct and indirect causal relationships between variables while taking measurement error into account (Bollen 1989), and it permits the combination of factor and path analysis in a single model. SEM models require large sample sizes because they estimate (1) regression coefficients, (2) variances and covariances of unobserved variables, and (3) variances and covariances of errors. Because of the number of direct and indirect paths that the models will estimate, they will have few degrees of freedom (df). Based on the expected df = 4 and an approximate sample size of 1,280,¹ a power of 80% will be achieved for testing model fit (see Table 4 in MacCallum et al. 1996).
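For reference, a power figure of this kind can be checked with a short computation of the sort underlying Table 4 of MacCallum et al. (1996), which evaluates the noncentral chi-square distribution of the model test statistic. The sketch below assumes the conventional test of close fit (null RMSEA ε₀ = .05 vs. alternative εₐ = .08 at α = .05); those ε values are standard assumptions for that table rather than figures stated in this document.

```python
# Sketch of an RMSEA-based power analysis for a test of model fit
# (MacCallum et al. 1996). The epsilon values below are the conventional
# "test of close fit" assumptions, not figures taken from this document.
from scipy.stats import ncx2

df, N, alpha = 4, 1280, 0.05
e0, ea = 0.05, 0.08  # RMSEA under H0 (close fit) and under the alternative

# Noncentrality parameters: lambda = (N - 1) * df * epsilon^2
lam0 = (N - 1) * df * e0**2
lam_a = (N - 1) * df * ea**2

crit = ncx2.ppf(1 - alpha, df, lam0)  # critical value of the statistic under H0
power = ncx2.sf(crit, df, lam_a)      # P(reject H0 | alternative RMSEA holds)
print(f"power = {power:.2f}")
```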



A similar OMB-approved data collection has been conducted for the Chesapeake Bay area only (“Evaluation of National Oceanic and Atmospheric Administration Chesapeake Bay Watershed Education and Training Program,” http://chesapeakebay.noaa.gov/images/stories/pdf/BWETevalsummary.pdf). The expected response rates below are informed by the response rates achieved by this previous data collection (see additional detail in Question 3).







Table 2: Potential respondent universe, sampling strategy, and expected response rates

Population               Sample    Nᵃ       Expected response rate (%)
B-WET Grantees           Census    125      90
B-WET PD teachers        Census    4,000    80
B-WET MWEE teachersᵇ     Census    3,200    40

ᵃ Estimated populations based on FY2010 B-WET awards [124 active awards (grantees) and 4,489 teachers reached (proposed)].

ᵇ Some of the PD teachers may respond as MWEE teachers in the same year, but because that number is unknown, the highest possible number of respondents is used to calculate burden hours.



2. Describe the procedures for the collection, including: the statistical methodology for stratification and sample selection; the estimation procedure; the degree of accuracy needed for the purpose described in the justification; any unusual problems requiring specialized sampling procedures; and any use of periodic (less frequent than annual) data collection cycles to reduce burden.


As there will be censuses of the respective populations, there will be no sampling.


3. Describe the methods used to maximize response rates and to deal with nonresponse. The accuracy and reliability of the information collected must be shown to be adequate for the intended uses. For collections based on sampling, a special justification must be provided if they will not yield "reliable" data that can be generalized to the universe studied.


Grantee Questionnaire Response Rate

Grantee response rates will be maximized (1) by ensuring that the B-WET federal funding opportunity (FFO) and contract include information about the questionnaires and (2) through an email request, initiated by B-WET coordinators, to complete the questionnaire at the end of the grant period (with up to two automatically generated follow-up requests). Because B-WET grantees receive funds from NOAA to conduct their MWEE projects, they are highly invested in the B-WET program, and it is very unlikely that a grantee would not respond to a request from B-WET to complete an end-of-year questionnaire. In addition, grantees will receive a personalized request from their respective NOAA B-WET regional coordinator, with whom they are all familiar, as well as a pre-notification and two follow-up requests (one and two weeks later), all practices that increase response rates (Dillman et al., 2009). A 90% response rate is therefore expected from grantees.


Teacher Post-PD Questionnaire Response Rate

Teacher response rates will be maximized by encouraging B-WET grantees to provide information about the two questionnaires up front, asking teachers to complete the professional development questionnaire at the close of the professional development experience, and prompting teachers to complete the MWEE questionnaire before the end of the following school year. Up to two automatically generated follow-up requests will be made for each data collection request. Follow-up reminders, along with advance notice of an impending survey request, improve response rates (Yu and Cooper, 1983). We know from the “Evaluation of National Oceanic and Atmospheric Administration Chesapeake Bay Watershed Education and Training Program” (http://chesapeakebay.noaa.gov/images/stories/pdf/BWETevalsummary.pdf) that when teachers who participated in that program were contacted by email two months after the professional development and asked to complete an online questionnaire, a response rate of 70% was attained.


We anticipate a higher response rate of 80% to the post-professional development questionnaire because teachers will be asked to complete it at the close of the professional development (when response rates typically approach 100%) and because these additional best practices (Dillman et al., 2009) will be used:

  1. Grantees typically offer teacher stipends or credits for completing their professional development responsibilities and will be encouraged to ask their teachers to complete the survey as part of these responsibilities.

  2. Time will be allowed for completing the questionnaire before teachers leave the professional development and/or they will be asked to complete the questionnaire immediately thereafter.

  3. Teachers will be asked to complete the questionnaire by grantees with whom they have developed a relationship through their professional development experience.

  4. Teachers will receive a pre-notification and up to two additional reminders to complete the questionnaire.


Teacher Post-MWEE Questionnaire Response Rate

We also anticipate a sufficient response rate for the post-MWEE questionnaire because the request to complete this instrument will come from the provider of the professional development rather than from unfamiliar researchers and because, again, teachers will know in advance that they will be asked to complete this questionnaire as part of their professional development responsibilities. For this survey, we expect a 40% response rate, lower than at the close of the professional development given the time that will have passed since the professional development (possibly 6-9 months). Similar evaluations by environmental education providers (i.e., Internet-based questionnaires administered by the professional development providers within comparable time frames after the professional development) have yielded response rates of 35-80% (Zint, 2008, 2009, 2010).


In addition, and as alluded to above, multiple contacts will be incorporated into the evaluation system’s design because they have been shown to be more effective than any other technique for increasing response to questionnaires distributed by email (Dillman et al., 2009). These contacts will be personalized, and the questionnaires have been designed to be respondent-friendly (e.g., almost all questions are closed-ended and worded in a clear, easy-to-understand manner, and skip logic has been incorporated). These features have also been found to increase response rates (Dillman et al., 2009; Dillman, Sinclair, and Clark, 1993).


Finally, questionnaires that ask for little personal or sensitive information achieve higher response rates (Dillman et al., 2009). Because the evaluation’s questionnaires ask for little such information, this should further encourage respondents to complete them.


Nonresponse Surveys

Should response rates for the teacher PD and MWEE surveys fall below 80%, B-WET will engage an external contractor to conduct nonresponse surveys and analyze the results.


As part of these surveys, all non-respondents will receive an email invitation with a Web link to an abbreviated version of the questionnaires (automatic reminders will again be sent twice). Results from these questionnaires will be compared with those from earlier respondents to determine if there are significant differences.
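The comparison itself can take several forms; below is a minimal sketch, assuming a continuous questionnaire item and using Welch’s two-sample t-test (all variable names and values are illustrative, not part of the evaluation system):

```python
# Sketch of a respondent vs. nonrespondent comparison on one scale item.
# Data and variable names are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
early_respondents = rng.normal(4.0, 1.0, size=200)  # e.g., a 5-point item
nonresponse_survey = rng.normal(3.8, 1.1, size=60)  # same item, abbreviated survey

t, p = stats.ttest_ind(early_respondents, nonresponse_survey, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")  # small p would flag a significant difference
```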


If the earlier respondent and nonrespondent populations are determined not to differ significantly, no further analysis will occur. If the nonrespondent population is determined to differ significantly from the earlier respondent population, analyses with weighted adjustments for nonresponse, using methods such as those described in Part IV of Survey Nonresponse (Groves et al. 2002), will be conducted.
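As one concrete possibility, the sketch below shows a weighting-class adjustment of the general kind discussed in Groves et al. (2002), in which respondents’ weights are inflated by the inverse of the response rate within their class; the grouping variable and data are hypothetical:

```python
# Sketch of a weighting-class nonresponse adjustment
# (cf. Part IV of Groves et al. 2002). Columns and values are hypothetical.
import pandas as pd

frame = pd.DataFrame({
    "region": ["CB", "CB", "GL", "GL", "GL", "CA"],  # weighting class
    "responded": [1, 0, 1, 1, 0, 1],
})

# Within each class, respondents receive weight = 1 / (class response rate).
rates = frame.groupby("region")["responded"].transform("mean")
frame["weight"] = (1.0 / rates).where(frame["responded"] == 1)
print(frame)
```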


4. Describe any tests of procedures or methods to be undertaken. Tests are encouraged as effective means to refine collections, but if ten or more test respondents are involved OMB must give prior approval.


The majority of measures and procedures that will be used as part of the proposed B-WET evaluation system have been tested and successfully implemented in previous studies (e.g., the “Evaluation of National Oceanic and Atmospheric Administration Chesapeake Bay Watershed Education and Training Program”). Moreover, an exploratory study of the benefits of MWEEs found that the scales that will also be used as part of the proposed B-WET evaluation system (examined using exploratory factor analysis in SPSS and Mplus) are reliable and valid. Reliabilities, for example, ranged from good to excellent (Cronbach’s alpha: .70 to .90), and the amount of variance explained by the factors was substantial (range: 40% to 90%) (Zint, In Process). Lastly, the measures that will be used as part of the evaluation system have been examined for face and content validity by stakeholders consisting of the nine members of NOAA’s internal B-WET Advisory group, three evaluation experts with knowledge of B-WET, three B-WET grantees, and two watershed scientists. The proposed instruments have thus been sufficiently tested, and no additional testing is planned.
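For reference, the reliability statistic reported above can be computed as follows; this is a minimal sketch of Cronbach’s alpha using a simulated, illustrative item matrix (no B-WET data are used):

```python
# Sketch of the Cronbach's alpha computation behind the reported reliabilities.
# The simulated item-response matrix is illustrative only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix for a single scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))                     # shared trait
items = latent + rng.normal(scale=0.8, size=(100, 4))  # four correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```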


5. Provide the name and telephone number of individuals consulted on the statistical aspects of the design, and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Individuals Consulted on Statistical Design:

Dr. Michaela Zint, Associate Professor, School of Natural Resources & Environment, School of Education, and College of Literature, Science & the Arts at the University of Michigan developed the statistical design for the proposed evaluation system. She, in turn, consulted with:

  • Dr. Heeringa & Statistical Design Group members, Institute for Social Research, University of Michigan

  • Dr. Lee & Dr. Rowan, School of Education, University of Michigan

  • Dr. Rutherford & Dr. West, Center for Statistical Consultation and Research, University of Michigan

If you have any questions about the statistical design of the study, please contact Dr. Michaela Zint: [email protected], 734.763.6961.




Individual Who Will Conduct Data Collection and Analysis:

The evaluation system is designed to collect data through an online portal and the database housing this portal will automatically generate descriptive statistics. Data may also be downloaded from the database for more sophisticated analysis by an external contractor.

Bronwen Rice, B-WET National Coordinator, NOAA Office of Education ([email protected], 202.482.6797) will be responsible for initiating the automated data collection process and for ensuring the functioning and maintenance of the evaluation system.


LITERATURE CITED

Bollen, K.A. 1989. Structural equations with latent variables. New York: Wiley.

Burton, L.J. and Mazerolle, S.M. 2011. Survey Instrument Validity Part I: Principles of Survey Instrument Development and Validation in Athletic Training Education Research. Athletic Training Education Journal. Vol. 6, No. 1, 27-35.

Carmines, E. G. and Zeller, R. A. 1979. Reliability and Validity Assessment. Sage: Beverly Hills, CA.

Dillman, D.A., Sinclair, M.D., and Clark, J.R. 1993. Effects of questionnaire length, respondent-friendly design, and a difficult question on response rates for occupant-addressed census mail surveys. Public Opinion Quarterly. Vol. 57, 289-304.

Dillman, D. A., Smyth, J. D., and Christian, L. M. 2009. Internet, Mail and Mixed-Mode Surveys: The Tailored Design Method, 3rd edition. John Wiley: Hoboken, NJ.

Groves, R. M., Dillman, D. A., Eltinge, J. L., and Little, R. J. A. 2002. Survey Nonresponse. John Wiley & Sons, Inc.: New York.

Litwin, Mark S. 1995. How to Measure Survey Reliability and Validity. Sage: Thousand Oaks, CA.

MacCallum, R. C., Browne, M. W, and Sugawara, H. M. 1996. Power analysis and determination of sample size for covariance structure modeling. Psychological Methods 1(2):130-149.

Nunnally, J. C. and Bernstein, I. H. 1994. Psychometric Theory. McGraw-Hill: New York.

Patton, M.Q. 2008. Utilization-focused evaluation. 4th ed. Sage: Los Angeles.

U. S. Department of Labor, Bureau of Labor Statistics. May 2011. National Compensation Survey: Occupational Earnings in the United States, 2010. Table 5: Full-time State and local government workers: Mean and median hourly, weekly, and annual earnings and mean weekly and annual hours: http://www.bls.gov/ncs/ocs/sp/nctb1479.pdf.

Yu, J. and Cooper, H. 1983. A Quantitative Review of Research Design Effects on Response Rates to Questionnaires. Journal of Marketing Research. Vol. XX, 36-44.

Zint, M. 2008, 2009 & 2010. Summary of annual Environmental Education and Training Partnership achievements. Annual reports to the U.S. Environmental Protection Agency’s Office of Environmental Education, Washington, DC.

Zint, M. T. (In Process). An exploratory assessment of the benefits of MWEEs. To be submitted to Applied Environmental Education & Communication.


1 This sample size is based on a 32% compound response rate (an 80% response rate to the first survey × a 40% response rate to the second survey) applied to the estimated population of 4,000 PD teachers (0.32 × 4,000 = 1,280).
