
SUPPORTING STATEMENT

NOAA BAY WATERSHED EDUCATION AND TRAINING (B-WET) PROGRAM NATIONAL EVALUATION SYSTEM

OMB CONTROL NO. 0648-0658



B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS


1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local governmental units, households, or persons) in the universe and the corresponding sample are to be provided in tabular form. The tabulation must also include expected response rates for the collection as a whole. If the collection has been conducted before, provide the actual response rate achieved.


Censuses will be conducted in light of the relatively small respondent universes (Table 6) and the sophisticated analyses planned by an external evaluator. Statistical analyses could include confirmatory factor analysis, multilevel analysis (i.e., to account for teachers nested in professional development programs and for repeated measures from the same individuals), and structural equation modeling (SEM) to explore the direct and indirect relationships between teachers' practices and perceived student outcomes based on their MWEE professional development experiences and backgrounds. SEM allows for exploring direct and indirect causal relationships between variables while also accounting for measurement error (Bollen, 1989), and it combines factor and path analysis into a single model. SEM requires large sample sizes because it estimates: 1) regression coefficients, 2) variances and covariances of unobserved variables, and 3) variances and covariances of errors. Because of the number of direct and indirect paths the models could estimate, they will have few degrees of freedom (df). These more sophisticated analyses could be conducted once the sample size reaches approximately 1,280. Based on an expected df = 4 and the proposed sample size, a power of 80% could be achieved for testing model fit (see Table 4 in MacCallum et al., 1996).
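The power figure above follows MacCallum et al.'s (1996) noncentral chi-square procedure for RMSEA-based tests of model fit. Below is a minimal sketch of that procedure in Python, assuming the paper's conventional test of close fit (RMSEA of 0.05 under the null hypothesis versus 0.08 under the alternative) and alpha = 0.05; the exact inputs behind the 80% figure are not stated in this document, so the RMSEA pair used here is an assumption.

```python
# Sketch of the MacCallum, Browne, and Sugawara (1996) power analysis for
# covariance structure models. The fit statistic is distributed noncentral
# chi-square with noncentrality (N - 1) * df * RMSEA^2, so power follows
# from comparing two noncentral chi-square distributions.
from scipy.stats import ncx2

def rmsea_power(n, df, eps0=0.05, eps_a=0.08, alpha=0.05):
    """Power of the test of close fit (H0: RMSEA <= eps0 vs. Ha: RMSEA = eps_a)."""
    nc0 = (n - 1) * df * eps0 ** 2       # noncentrality under the null
    nc_a = (n - 1) * df * eps_a ** 2     # noncentrality under the alternative
    crit = ncx2.ppf(1 - alpha, df, nc0)  # critical value of the fit statistic
    return 1 - ncx2.cdf(crit, df, nc_a)  # probability of rejecting under Ha

# Power at the proposed sample size; the result depends on the assumed
# RMSEA pair, which this document does not specify.
print(rmsea_power(n=1280, df=4))
```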


The expected response rates reported in Table 8 are informed by the response rates obtained between April 2016 and April 2018, which represent all of the response data available for these versions of the survey instruments at the time of this analysis. Response rates are analyzed in aggregate for this period in order to include as much response data as possible.


Table 8: Past and Expected Response Rates

Questionnaire | Time Period | N (number of emails sent successfully)a | n (number who responded) | R (response rate) | Future expected Rb
--- | --- | --- | --- | --- | ---
Grantee | June 2016 - April 2018 | 201 | 142 | 71% | 75%
Grantee Nonresponse | New | NA | NA | NA | 50%
Teacher Post-PD | April 2016 - March 2018 | 1,390 | 545 | 39% | 40%
Teacher Post-PD Nonresponse | May 2016 - April 2018 | 846 | 182 | 22% | 25%
Teacher Post-MWEE | May 2016 - January 2018 | 1,322 | 335 | 25% | 30%
Teacher Post-MWEE Nonresponse | June 2016 - February 2018 | 987 | 148 | 15% | 20%

aBounced emails have been subtracted from the number sent.

bFuture expected response rates are estimated to be higher than past response rates because of improvements to the survey system (e.g., improved communication about the evaluation system) and to ensure that Table 6 reports the maximum possible burden on the public.
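The rates in Table 8 follow directly from the reported counts. A minimal sketch of the arithmetic (R = n / N, with bounced emails already removed from N per footnote a; the Grantee Nonresponse row is omitted because that questionnaire is new and has no counts yet):

```python
# Reproduce the Table 8 response rates: R = n / N, where N counts
# successfully delivered email invitations (bounces already subtracted).
table8 = {
    "Grantee": (201, 142),
    "Teacher Post-PD": (1390, 545),
    "Teacher Post-PD Nonresponse": (846, 182),
    "Teacher Post-MWEE": (1322, 335),
    "Teacher Post-MWEE Nonresponse": (987, 148),
}

for questionnaire, (sent, responded) in table8.items():
    print(f"{questionnaire}: {responded / sent:.0%}")  # e.g., Grantee: 71%
```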

2. Describe the procedures for the collection, including: the statistical methodology for stratification and sample selection; the estimation procedure; the degree of accuracy needed for the purpose described in the justification; any unusual problems requiring specialized sampling procedures; and any use of periodic (less frequent than annual) data collection cycles to reduce burden.


Censuses of the respective populations will be conducted to attain the sample sizes needed for sophisticated statistical analyses, which will allow for more in-depth answers to the evaluation system’s questions.


3. Describe the methods used to maximize response rates and to deal with nonresponse. The accuracy and reliability of the information collected must be shown to be adequate for the intended uses. For collections based on sampling, a special justification must be provided if they will not yield "reliable" data that can be generalized to the universe studied.


Methods to Maximize Response Rates

The following are considered to be best practices for maximizing response rates, compiled from several sources (CDC, 2010; Millar & Dillman, 2011; Scantron Corp, 2014; Umbach, 2016):

  • Keep the format and content easy to understand and navigate

  • Keep the questionnaire as lean as possible so it takes the least amount of time possible to complete

  • Ensure that the questions are relevant to the respondents; allow for selecting NA as appropriate

  • Make participation voluntary, anonymous, and confidential

  • Provide advance notice that the survey is coming

  • Contact the respondent four times: a prenotification, an invitation, and two reminders

  • Include a copy of the questionnaire with the invitation, along with an estimate of how much time it will take to complete

  • Make the invitation as personal as possible while maintaining confidentiality

  • Include a deadline for completing the questionnaire

  • Publish results online for participants

While NOAA employs the above best practices to the extent practicable, it is not able to use the following best practices for this evaluation system:

  • Provide an incentive, especially a monetary one (however, the grantee is able to provide an incentive)

  • Use mixed modes, if possible, such as email followed by postal mail (only email addresses are available for contacting teachers)

  • Allow smartphone or tablet formatting, if possible (the questions are not appropriate for these formats)

Grantee Questionnaire

The grantee response rate was 71% between April 2016 and April 2018, triggering the need for a nonresponse questionnaire for the next three-year data collection period. These specific efforts used in prior years to maximize response rates will continue:

  1. Include information about the national evaluation in the B-WET federal funding opportunity (FFO),

  2. Provide preview copies of the evaluation system questionnaire on the B-WET website and in the invitation email,

  3. Send a pre-notification to all grantees at the beginning of their grant year, and

  4. Send two reminder invitations to complete the end-of-grant-year questionnaire, two and four weeks following the initial invitation.

Because B-WET grantees receive funds from NOAA to conduct their MWEE projects, they are highly invested in the B-WET program, and a higher response rate would therefore be expected. Additional efforts will be made to increase the response rate, including:

  1. A general deadline will be added to the invitation. Given the nature of the survey distribution, a specific date cannot be provided, but the statement “Please complete this questionnaire in the next 10 days” will be added.

  2. A personal reminder will be sent to those who have not responded after 5 weeks.

  3. Questions have been modified or omitted to streamline the questionnaire.

Teacher Post-PD Questionnaire

The teacher post-PD response rate was 39% between April 2016 and April 2018. These specific efforts used in prior years to maximize response rates will continue:

  1. Providing advance notice, sending two reminders after the invitation, using closed-ended and easy-to-understand questions, and following the other best practices listed above.

  2. The national coordinator will continue working to increase grantees' familiarity with the data collection process so that they will advocate for teacher participation in the evaluation. The national coordinator will continue to offer evaluation system webinars for grantees, raise awareness of the evaluation resources available through the B-WET evaluation website, send monthly reminders to grantees to add teacher emails to the evaluation system, and participate in meetings with grantees to discuss ways to increase teacher participation in the national evaluation.

  3. The regional coordinators will continue to promote grantee and teacher evaluation participation, for example by refining content about the national evaluation in their FFOs (e.g., asking grantees to describe how they will participate in the national evaluation system) and by playing a prominent role in urging grantees to encourage their teachers' participation in the evaluation (e.g., meetings about the national evaluation with grantees that may include the national coordinator).

Additional efforts to increase the response rate include:

  1. A general deadline will be added to the invitation. Given the nature of the survey distribution, a specific date cannot be provided, but the statement “Please complete this questionnaire in the next 10 days” will be added.

  2. Questions have been modified or omitted to streamline the questionnaire.

Teacher Post-MWEE Questionnaire

The teacher post-MWEE response rate was 25% between April 2016 and April 2018. These specific efforts used in prior years to maximize response rates will continue:

  1. In addition to a pre-notification from NOAA before the teacher’s PD, grantees are asked to inform their teachers that they will be asked to complete this questionnaire as part of their professional development responsibilities.

  2. Multiple personalized completion requests are sent that include the grantee's and project's names, which are familiar to the teachers.

  3. The questionnaire is streamlined as much as possible by using closed-ended questions worded in a clear, easy-to-understand manner; skip logic ensures that respondents see only the questions relevant to them.

Additional efforts to increase the response rate include:

  1. A general deadline will be added to the invitation. Given the nature of the survey distribution, a specific date cannot be provided, but the statement “Please complete this questionnaire in the next 10 days” will be added.

  2. Questions have been modified or omitted to streamline the questionnaire.

Nonresponse Questionnaires

Currently, no reminders are sent after the invitation to complete the nonresponse questionnaire. Given that recipients have already received a pre-notice, an invitation, and two reminders for the initial questionnaire, only one reminder, sent two weeks after the nonresponse surveys are distributed, will be added. A deadline will be included in the invitations and reminders.


Nonresponse Analysis

The evaluation system was designed to include post-PD and post-MWEE nonresponse questionnaires to ensure that comparisons can be made between initial respondents and nonrespondent teachers when response rates are below 80% (a grantee nonresponse questionnaire is being added). B-WET has engaged an external contractor to conduct analyses of these results.

All nonrespondents received a one-time email invitation with a web link to an abbreviated version of the initial questionnaire. Results from these questionnaires have been compared with those from respondents to the initial questionnaire to determine whether there are differences that are both statistically significant and substantively meaningful.
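As one illustration of how such comparisons can be made, the sketch below pairs Welch's two-sample t-test with a Cohen's d effect-size check (Cohen, 1988). The alpha = 0.05 and |d| >= 0.2 cutoffs are illustrative assumptions; the document does not state the criteria used to judge substantive significance.

```python
# A minimal sketch of a respondent vs. nonrespondent comparison that pairs
# a significance test with an effect-size check. Thresholds and variable
# names are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

def compare_groups(respondents, nonrespondents, alpha=0.05, min_d=0.2):
    """Return (p, Cohen's d, flag) for one questionnaire item."""
    # Welch's t-test: does not assume equal variances across groups.
    _, p = ttest_ind(respondents, nonrespondents, equal_var=False)
    # Cohen's d with a pooled standard deviation.
    n1, n2 = len(respondents), len(nonrespondents)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(respondents, ddof=1)
                         + (n2 - 1) * np.var(nonrespondents, ddof=1))
                        / (n1 + n2 - 2))
    d = (np.mean(respondents) - np.mean(nonrespondents)) / pooled_sd
    # A difference warrants attention only if it is both statistically
    # significant and large enough to be substantively meaningful.
    return p, d, bool(p < alpha and abs(d) >= min_d)
```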

If the respondent and nonrespondent populations are determined not to be significantly and substantively different, no further analysis will occur. If the nonrespondent population is determined to be significantly and substantively different from the respondent population, analyses with weighted adjustments for nonresponse, using methods such as those described in Part IV of Survey Nonresponse (Groves et al., 2002), will be conducted for purposes of formally reporting and publishing results. In other instances, reports will acknowledge how results from the respondent sample may differ from those of nonrespondents.
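A minimal sketch of one such method, a weighting-class adjustment: respondents are weighted by the inverse of their class's response rate so that the weighted respondent pool resembles the full invited sample. The adjustment class ("region") and the column names are illustrative assumptions, not fields defined by this document.

```python
# Weighting-class nonresponse adjustment, one of the family of methods
# covered in Groves et al. (2002). Class and column names are assumptions.
import pandas as pd

def weighting_class_adjustment(frame: pd.DataFrame) -> pd.DataFrame:
    """Attach nonresponse weights and keep respondents only.

    `frame` has one row per invited teacher, a boolean `responded` column,
    and a `region` column used as the adjustment class.
    """
    # Response rate within each adjustment class.
    rates = frame.groupby("region")["responded"].mean()
    out = frame.copy()
    # Respondents in low-response classes are weighted up so the weighted
    # respondent pool resembles the full invited sample.
    out["nr_weight"] = out["region"].map(1.0 / rates)
    return out[out["responded"]]
```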

Teacher Post-PD Nonresponse Questionnaire

Based on the recent 39% response rate for the post-PD questionnaire, an analysis was conducted comparing results from the post-PD questionnaire with those from the much shorter nonresponse questionnaire. Although 7 of 23 statistical tests indicated statistically significant differences between respondent and nonrespondent teachers, only one was ultimately determined to warrant attention (Attachment 3). This difference concerned the hours of PD teachers experienced: the evaluation system's post-PD sample is more likely to reflect the responses of teachers who participated in more PD hours. The remaining statistically significant differences consisted of four that were too small to be meaningful (e.g., the largest difference in means was 0.40 on a scale from 1 to 7), one that would be expected (i.e., nonrespondent teachers' PD occurred longer ago than respondents'), and minor differences in the regions represented. The last is not of concern because grantees are encouraged to implement the same MWEE elements across all regions.

Teacher Post-MWEE Nonresponse Questionnaire

Based on the recent 25% response rate for the post-MWEE questionnaire, an analysis was conducted comparing results from the post-MWEE questionnaire with those from the much shorter nonresponse questionnaire. Six of 21 statistical tests indicated statistically significant differences between respondent and nonrespondent teachers (Attachment 4). Three notable differences were found: nonrespondent teachers were somewhat less likely to complete a MWEE and more likely to complete shorter MWEEs (including spending less time outside with their students) than respondents. The remaining statistically significant differences either would be expected (i.e., nonrespondent teachers' PD occurred longer ago than respondents') or were minor differences in the proportions of private versus public school teachers and in the regions represented. Again, these are not of concern because all teachers are encouraged to implement the same MWEE elements. Importantly, none of the statistically significant differences occurred in perceived student MWEE outcomes (14 tests in total).


4. Describe any tests of procedures or methods to be undertaken. Tests are encouraged as effective means to refine collections, but if ten or more test respondents are involved OMB must give prior approval.


The majority of measures and procedures used as part of the B-WET evaluation system have been tested and successfully implemented in previous studies (e.g., "Evaluation of National Oceanic and Atmospheric Administration Chesapeake Bay Watershed Education and Training Program," Kraemer et al., 2007; Zint et al., 2014). In addition, an exploratory study of the benefits of MWEEs found that the scales used as part of the proposed B-WET evaluation system (examined using exploratory factor analysis in SPSS and Mplus) are reliable and valid (Zint, 2012). Reliabilities, for example, ranged from good to excellent (i.e., Cronbach's alpha range: .70 to .90), and the amount of variance explained by the factors was substantial (i.e., range: 40% to 90%). The measures used as part of the evaluation system have also been examined for face and content validity by stakeholders consisting of the nine members of NOAA's internal B-WET Advisory group, three evaluation experts with knowledge of B-WET, three B-WET grantees, and two watershed scientists.
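For reference, a minimal sketch of the Cronbach's alpha computation that underlies the reliability range reported above. The input layout (one row per respondent, one column per scale item) is an assumption for illustration.

```python
# Cronbach's alpha: alpha = k/(k - 1) * (1 - sum(item variances) / var(total)),
# where k is the number of items in the scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """`items` is an (n_respondents, n_items) array for a single scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars / total_var)
```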


As part of this application, some revisions to the three questionnaires are being requested, based on a review of descriptive statistics of initial data as well as respondents’ feedback.


No additional testing is planned.


5. Provide the name and telephone number of individuals consulted on the statistical aspects of the design, and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Individuals Consulted on Statistical Design:

Dr. Michaela Zint, Professor, School of Natural Resources & Environment, School of Education, and College of Literature, Science & the Arts at the University of Michigan, developed the statistical design for the proposed evaluation system. She, in turn, consulted with:

  • Dr. Heeringa & Statistical Design Group members, Institute for Social Research, University of Michigan

  • Dr. Lee & Dr. Rowan, School of Education, University of Michigan

  • Dr. Rutherford & Dr. West, Center for Statistical Consultation and Research, University of Michigan

If you have any questions about the statistical design of the study, please contact Dr. Michaela Zint: [email protected], 734.763.6961.


Individual Who Will Conduct Data Collection and Analysis:

The evaluation system is designed to collect data through Qualtrics, an online survey platform that automatically generates descriptive statistics. Data may also be downloaded from Qualtrics for more sophisticated analysis by an external contractor.

Bronwen Rice, B-WET National Coordinator, NOAA Office of Education ([email protected], 202.482.6797) will be responsible for managing the data collection process and for ensuring the functioning and maintenance of the evaluation system.


LITERATURE CITED

Bollen, K.A. 1989. Structural equations with latent variables. New York: Wiley.

Burton, L.J. and Mazerolle, S.M. 2011. Survey Instrument Validity Part I: Principles of Survey Instrument Development and Validation in Athletic Training Education Research. Athletic Training Education Journal. Vol. 6, No. 1, 27-35.

Carmines, E. G. and Zeller, R. A. 1979. Reliability and Validity Assessment. Sage: Beverly Hills, CA.

CDC Department of Health and Human Services. July 2010. Evaluation briefs, No. 21. https://www.cdc.gov/healthyyouth/evaluation/pdf/brief21.pdf

Cohen, J. 1988. Statistical power analysis for the behavioral sciences (2nd ed). Lawrence Erlbaum Associates: Hillsdale, N.J., p. 567.

Dillman, D. A., Smyth, J.D. and Christian, L.M. 2009. Internet, Mail and Mixed-Mode Surveys: The Tailored Design Method, 3rd edition. John Wiley: Hoboken, NJ.

Groves, R. M., Dillman, D. A., Eltinge, J. L., and Little, R. J. A. 2002. Survey Nonresponse. John Wiley & Sons, Inc.: New York.

Kraemer, A., Zint, M., and Kirwan, J. 2007. An Evaluation of National Oceanic and Atmospheric Administration Chesapeake Bay Watershed Education and Training Program Meaningful Watershed Educational Experiences. Unpublished. http://chesapeakebay.noaa.gov/images/stories/pdf/Full_Report_NOAA_Chesapeake_B-WET_Evaluation.pdf

Litwin, Mark S. 1995. How to Measure Survey Reliability and Validity. Sage: Thousand Oaks, CA.

MacCallum, R. C., Browne, M. W., and Sugawara, H. M. 1996. Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, Vol. 1, No. 2, 130-149.

Millar, M. M. and Dillman, D. A. 2011. Improving Response to Web and Mixed-Mode Surveys. Public Opinion Quarterly, Vol. 75, No. 2, 249-269.

Nunnally, J. C. and Bernstein, I. H. 1994. Psychometric Theory. McGraw-Hill: New York.

Patton, M.Q. 2008. Utilization-focused evaluation. 4th ed. Sage: Los Angeles.

Qualtrics. 2013. Qualtrics Security White Paper: Why should I trust Qualtrics with my sensitive data? https://www.utexas.edu/its/downloads/survey/2185/White%20Paper_Qualtrics%20Security_1%2018%2013%20(2).pdf

Scantron Corporation. 2014. Web page. http://scantron.com/articles/improve-response-rate

Umbach, Paul D. 2016 July 26. Increasing Web Survey Response Rates: What Works? Percontor, LLC. Webinar.

U. S. Department of Labor, Bureau of Labor Statistics. May 2011. National Compensation Survey: Occupational Earnings in the United States, 2010. Table 5: Full-time State and local government workers: Mean and median hourly, weekly, and annual earnings and mean weekly and annual hours: http://www.bls.gov/ncs/ocs/sp/nctb1479.pdf.

Zint, M. 2011. A literature review of watershed education-related research to inform NOAA B-WET’s evaluation system. University of Michigan: Ann Arbor, MI.

Zint, M. 2012. An exploratory assessment of the benefits of MWEEs: A report prepared for NOAA. University of Michigan: Ann Arbor, MI.

Zint, M., Kraemer, A. & Kolenic, G. E. 2014. Evaluating Meaningful Watershed Educational Experiences: An exploration into the effects on participating students’ environmental stewardship characteristics and the relationships between these predictors of environmentally responsible behavior. Studies in Educational Evaluation: Special Issue on Research in Environmental Education Evaluation, Vol. 41, 4-17.





ATTACHMENTS

1a-f. Revised Questionnaires: Grantee, Grantee Nonresponse, Teacher Post-PD, Teacher Post-PD Nonresponse, Teacher Post-MWEE, Teacher Post-MWEE Nonresponse

2. Email correspondence with grantees, PD teachers, and MWEE teachers

3. Comparison of Teacher Post-PD Initial vs. Nonresponse Questionnaire Results

4. Comparison of Teacher Post-MWEE Initial vs. Nonresponse Questionnaire Results



