
SUPPORTING STATEMENT

NOAA BAY WATERSHED EDUCATION AND TRAINING (B-WET) PROGRAM NATIONAL EVALUATION SYSTEM

OMB CONTROL NO. 0648-0658


A. JUSTIFICATION


1. Explain the circumstances that make the collection of information necessary.


This request is for revision and extension of a currently approved information collection. The survey instruments have been revised in several ways to reflect respondent suggestions (see A8 and A15).


The NOAA Office of Education’s Bay Watershed Education and Training (B-WET) program seeks to contribute to NOAA’s mission by immersing participants in Meaningful Watershed Education Experiences (MWEEs) to create an environmentally literate citizenry with the knowledge, attitudes, and skills needed to protect watersheds and related ocean, coastal, and Great Lakes ecosystems (http://www.noaa.gov/office-education/bwet). B-WET currently funds projects in seven regions: California, Chesapeake Bay, Great Lakes, Gulf of Mexico, Hawaii, New England, and the Pacific Northwest.


In keeping with Executive Order 12862, Setting Customer Service Standards, B-WET created a cross-region, internal evaluation system to monitor program implementation and outcomes on an ongoing basis. Based on a review of annual evaluation system results, B-WET has made adjustments to its Federal Notices of Funding Opportunity and proposal review activities, such as requesting a plan for participation in the national evaluation. Ongoing data collection enables assessment of the benefits of continuous improvements and thus supports adaptive management of the program. This effort is consistent with the goals and plans outlined in the NOAA Education Strategic Plan 2015-2035.¹ See in particular Objective 5.4 on page 31, part of “Organizational Excellence.”


To meet evaluation needs, B-WET’s evaluation system was designed to answer the following questions:

  1. To what extent do regional B-WET programs support grantees in implementing Meaningful Watershed Educational Experiences (MWEEs)?

  2. How are MWEEs implemented by grantees and teachers?

  3. To what extent do B-WET-funded projects increase teachers’ knowledge of watershed science concepts, their confidence in their ability to integrate MWEEs into their teaching practices, and the likelihood that they will implement high quality MWEEs?

  4. To what extent do B-WET-funded projects increase students’ knowledge of watershed concepts, attitudes toward watersheds, inquiry and stewardship skills, and aspirations towards protecting watersheds?


B-WET grantees and teacher-participants in the grantees’ professional development are asked to voluntarily complete online questionnaires to provide evaluation data. One individual from each grantee organization is asked to complete a questionnaire once per year of the award, and the teacher-participants are asked to complete one questionnaire at the close of their professional development (PD) and one after implementing MWEEs with their students (before the end of the following school year). An online survey platform is used to collect and store these data, as well as to automatically generate results in the form of aggregate descriptive statistics.


The proposed evaluation system is maintained by B-WET staff with occasional assistance from an external professional evaluation contractor.


2. Explain how, by whom, how frequently, and for what purpose the information will be used. If the information collected will be disseminated to the public or used to support information that will be disseminated to the public, then explain how the collection complies with all applicable Information Quality Guidelines.


Program Improvement

The evaluation system, influenced by the principles underlying utilization-focused evaluation (Patton, 2008), was specifically designed by a team of researchers from the University of Michigan (UM) and the Institute of Learning Innovation (ILI) to meet users’ information and decision needs. The primary users of the evaluation system are the B-WET staff members who administer the B-WET grant program and its national coordinator. These individuals review the evaluation system’s results annually to determine what changes may be necessary to the grant program to maximize benefits for K-12 teachers and students. The system automatically generates results in the form of aggregate descriptive statistics (at the national and regional levels) to inform decisions about the program at both of these tiers.


B-WET staff members will share findings with secondary users, including staff members in the NOAA Office of Education and other parts of the agency who may choose to use information to improve other NOAA education programs. Evaluation findings will also be used at the national level to report on agency performance measures and respond to other Administration data collection activities, as appropriate. Tertiary users are grant recipients who are provided with access to a synthesis of findings so that they may identify ways to improve their respective environmental science and education programs.


Public Dissemination

Aggregated results from the teacher surveys are continuously available to grantees via the evaluation system’s online platform. Preliminary results of an initial analysis of data were discussed in an interactive session at the North American Association for Environmental Education (NAAEE) national conference in Madison, Wisconsin, in October 2016, and additional information was shared as part of the NAAEE virtual conference in 2017. Program managers regularly include review of evaluation results in regional grantee workshops, and evaluation findings inform the development of regional funding opportunities. Evaluation system results were also presented to the NOAA Education Council in 2018. In the future, results associated with each of the evaluation system’s questions will continue to be shared online and through professional conferences, reports, and peer-reviewed journal articles.


The data collection design ensures that the Information Quality Guidelines of utility, objectivity, and integrity are met.


Utility:

The evaluation system is designed to answer the questions described earlier in Question 1, primarily to meet B-WET’s decision needs. To answer these evaluation questions, the ILI-UM team of researchers first identified relevant constructs (based on B-WET’s logic model and MWEE characteristics). Next, they adapted or adopted items to measure these constructs from existing valid and reliable indices and scales, or developed new ones when existing measures were not available. As a result, only data that serve a necessary purpose in answering the system’s evaluation questions, and thus in meeting B-WET’s information needs, are collected.


Objectivity:

Presentation: The descriptive statistics (e.g., frequencies) that are automatically generated based on the online data collected from respondents are accurate, clear, complete, and unbiased. In addition, only aggregate statistics at the national, regional, and organizational level are reported. Thus, individual sources of data are not disclosed and study participants remain anonymous.
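For illustration, the following minimal sketch shows how aggregate-only frequencies of this kind can be produced from an anonymized response extract; the column names and values are hypothetical, not the evaluation system’s actual variables.

```python
import pandas as pd

# Hypothetical anonymized extract: no names, emails, or other identifiers,
# only the region and each respondent's answer to one example item.
df = pd.DataFrame({
    "region": ["Chesapeake Bay", "California", "Chesapeake Bay", "Great Lakes"],
    "outdoor_field_experience": ["Yes", "Yes", "No", "Yes"],
})

# National-level frequencies
print(df["outdoor_field_experience"].value_counts(normalize=True))

# Regional-level frequencies: still aggregate, so no individual source is disclosed
print(df.groupby("region")["outdoor_field_experience"].value_counts())
```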


Substance: The items included in the questionnaires, as well as the questionnaires themselves, were developed by the ILI-UM team based on best social science research practices. The majority of items, for example, were adopted or adapted (with the respective researchers’ permission) from existing studies, including an evaluation of NOAA’s Chesapeake Bay Watershed Education and Training Program (Kraemer et al., 2007; Zint et al., 2014) (data gathered under OMB Control Number 0648-0530), an exploratory study of the benefits of Meaningful Watershed Education Experiences (Zint, 2012), and a range of other relevant science and environmental education studies published in peer-reviewed journals (Zint, 2011). New items were developed only when existing measures for a construct were not available. The face and content validity of all items in the proposed questionnaires were established through reviews by nine internal NOAA B-WET Advisory Group (BWAG) members, three B-WET grantees, three evaluation specialists, and two watershed science researchers. Face validity is established by showing a questionnaire to a group of experts (e.g., researchers, practitioners) and asking for feedback on whether the measures appear to capture the intended constructs; reviews by B-WET staff, evaluators, grantees, and teachers served this purpose. Content validity was established through consultation with these experts and an extensive literature review (Zint, 2011).


Exploratory factor analyses conducted with SPSS and M+ on data collected through a pilot study revealed that the evaluation system’s scales (Zint, 2012) had good to excellent reliability (i.e., Cronbach’s alpha range: .70 to .90) (Nunnally & Bernstein, 1994; Carmines & Zeller, 1979). The respective factors also explained a substantial amount of variance (i.e., range: 40% to 90%) (Zint, 2012), thus providing additional support for the validity of the evaluation system’s measures.
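For reference, the following is a minimal sketch of the reliability statistic cited above (Cronbach’s alpha), computed here on hypothetical pilot responses; it is illustrative only and is not the evaluation system’s actual analysis code.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot data: 5 respondents x 3 Likert items for one scale
pilot = pd.DataFrame({
    "item1": [4, 5, 3, 4, 2],
    "item2": [4, 5, 2, 4, 3],
    "item3": [3, 5, 4, 4, 2],
})
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.2f}")
# ~0.86, within the .70-.90 range reported for the pilot scales
```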

Integrity:

The Qualtrics online platform is designed to meet Federal Information Security Management Act (FISMA) security guidelines to ensure all data provided by respondents are secure.²


Once data are downloaded from Qualtrics, NOAA’s Office of Education retains control over the information and safeguards it from improper access, modification, and destruction, consistent with NOAA standards for confidentiality, privacy, and electronic information. See response to Question 10 of this Supporting Statement for more information on confidentiality and privacy. The information collection is designed to yield data that meet all applicable information quality guidelines. Prior to dissemination, the information will be subjected to quality control measures and a pre-dissemination review pursuant to Section 515 of Public Law 106-554.


3. Describe whether, and to what extent, the collection of information involves the use of automated, electronic, mechanical, or other technological techniques or other forms of information technology.


The evaluation system data collection is electronic. Study participants (i.e., B-WET grantees and teachers who participate in their professional development) receive email prompts to complete the online instruments accessed through Qualtrics, an online survey platform. The Qualtrics surveys have built-in “logic” prompts so respondents complete only items relevant to their experience. Data are stored on Qualtrics’ server, which automatically generates descriptive statistics. This data collection process minimizes costs while remaining sensitive to issues of respondent burden, accuracy, and efficiency. It is assumed that most respondents (i.e., grantees, K-12 teachers) have access to the Internet at work, at home, on a smartphone, or at a public institution such as a local library.
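As a hedged illustration of this electronic workflow, the sketch below retrieves survey responses through Qualtrics’ public v3 response-export API; the base URL, API token, and survey ID are placeholders, and this is a generic export sketch, not the evaluation system’s actual code.

```python
import io
import time
import zipfile

import requests

# Placeholders -- not values from this document
BASE_URL = "https://yourdatacenter.qualtrics.com/API/v3"
HEADERS = {"X-API-TOKEN": "your-api-token"}
SURVEY_ID = "SV_xxxxxxxxxxxxxxx"

# 1. Start a CSV export job for the survey's responses
start = requests.post(f"{BASE_URL}/surveys/{SURVEY_ID}/export-responses",
                      headers=HEADERS, json={"format": "csv"})
progress_id = start.json()["result"]["progressId"]

# 2. Poll until the export job completes
while True:
    result = requests.get(
        f"{BASE_URL}/surveys/{SURVEY_ID}/export-responses/{progress_id}",
        headers=HEADERS).json()["result"]
    if result["status"] == "complete":
        file_id = result["fileId"]
        break
    time.sleep(2)  # a production script would also handle a "failed" status

# 3. Download and unzip the response file
data = requests.get(
    f"{BASE_URL}/surveys/{SURVEY_ID}/export-responses/{file_id}/file",
    headers=HEADERS)
with zipfile.ZipFile(io.BytesIO(data.content)) as z:
    z.extractall("responses/")
```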


4. Describe efforts to identify duplication.


In some cases, B-WET-funded projects that have additional funding or partnerships with other parts of NOAA may also be asked to report to other NOAA data collections; however, the B-WET system is the only NOAA data collection focused on Meaningful Watershed Educational Experiences and the specific characteristics of B-WET awards. NOAA education programs and evaluation efforts are coordinated through the NOAA Office of Education and the NOAA Education Council, and data collection is coordinated to ensure that individual survey items are not duplicative.


5. If the collection of information involves small businesses or other small entities, describe the methods used to minimize burden.


The evaluation system asks individuals working for non-profit organizations and some businesses, state and local government employees, and teachers in K-12 schools to participate by completing online questionnaires. The study minimizes burden on respondents because completion of the questionnaires is voluntary. In addition, an iterative item review process was used to eliminate any non-essential questions, keeping the questionnaires as streamlined as possible while ensuring that sufficient data are collected to answer the evaluation questions. Should they choose to complete the questionnaires, grantees are able to do so within 30-60 minutes (depending on the nature of their program) and teachers within 30 minutes. These estimates are based on completion times by respondents since April 2016 (Table 1). Average observed completion times for the Teacher Post-PD survey were longer than anticipated, so this survey is being revised to reduce the completion time in the future.


Table 1: Questionnaire Actual Completion Time

| Respondent | Data Collection Period | Nᵃ | Median (minutes) | Mean (minutes) | Std dev (minutes) |
|---|---|---|---|---|---|
| Grantee | June 2016 - April 2018 | 99ᵇ | 42 | 61 | 70.4 |
| Teacher Post-PD | April 2016 - May 2018 | 376ᶜ | 21 | 46 | 105.3 |
| Teacher Post-PD Nonresponse | May 2016 - April 2018 | 137ᵈ | 3 | 5 | 4.8 |
| Teacher Post-MWEE | May 2016 - May 2018 | 294ᵉ | 17 | 27 | 36.0 |
| Teacher Post-MWEE Nonresponse | June 2016 - February 2018 | 80ᶠ | 3 | 5 | 8.6 |

ᵃ Number of respondents who completed the full questionnaire, minus those who left the questionnaire open for an excessive amount of time before submitting data, assuming they inadvertently neglected to close the questionnaire. Frequency distributions were examined to determine reasonable cut-off points (a sketch of this screening follows these notes).

ᵇ 40 grantees had the questionnaire open for over 19 hours before submitting their responses and are excluded from this analysis.

ᶜ 80 post-PD teachers had the questionnaire open for more than 16 hours before submitting their responses and are excluded from this analysis.

ᵈ 5 post-PD teachers had the nonresponse questionnaire open for over 2 hours before submitting responses and are excluded from this analysis.

ᵉ 41 post-MWEE teachers had the questionnaire open for more than 16 hours before submitting their responses and are excluded from this analysis.

ᶠ 1 post-MWEE teacher had the nonresponse questionnaire open for over 2 hours before submitting responses and is excluded from this analysis.
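The sketch below illustrates that screening step with hypothetical durations, applying the 16-hour cut-off described in note c; it is not the actual analysis code.

```python
import pandas as pd

# Hypothetical completion durations in minutes, one per submitted questionnaire
durations = pd.Series([18, 22, 25, 31, 44, 52, 1240, 2600], name="minutes")

# Frequency distributions are examined to choose a reasonable cut-off; the
# 16-hour value here mirrors the teacher screening described in note c.
CUTOFF_MINUTES = 16 * 60
kept = durations[durations <= CUTOFF_MINUTES]

print(f"N = {kept.size} (excluded: {durations.size - kept.size})")
print(f"median = {kept.median():.0f} min, mean = {kept.mean():.0f} min, "
      f"std dev = {kept.std():.1f}")
```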


6. Describe the consequences to the Federal program or policy activities if the collection is not conducted or is conducted less frequently.


The evaluation system contributes to ensuring that federal funding is used in an effective and efficient manner to educate teachers and students about watershed science and environmental issues. The evaluation system provides B-WET with scientific data to assess the effectiveness of its grant funded programs (i.e., B-WET-funded teacher professional development and student MWEEs). The results of the evaluation system also provide insights into how to improve watershed education programs.


If the evaluation system were not conducted, B-WET would not have the needed data to scientifically assess the effectiveness of its program/MWEEs and/or to scientifically determine how best to improve its program/MWEEs. The continuous data collection of the evaluation system allows on-going monitoring of outcome results and, thus, on-going program/MWEE improvements.


7. Explain any special circumstances that require the collection to be conducted in a manner inconsistent with OMB guidelines.


The collection is being conducted in a manner consistent with OMB guidelines.


8. Provide information on the PRA Federal Register Notice that solicited public comments on the information collection prior to this submission. Summarize the public comments received in response to that notice and describe the actions taken by the agency in response to those comments. Describe the efforts to consult with persons outside the agency to obtain their views on the availability of data, frequency of collection, the clarity of instructions and recordkeeping, disclosure, or reporting format (if any), and on the data elements to be recorded, disclosed, or reported.


A Federal Register Notice published on July 25, 2018 (83 FR 35240) solicited public comments. No comments were received.


During the development of the B-WET evaluation system, the ILI-UM team solicited input from a range of individuals including B-WET grantees, evaluation experts, watershed scientists, and statisticians on all aspects of the proposed evaluation system. Their suggestions informed the design of the proposed study (e.g., type of data collection, frequency and timing of data collection, reporting formats, etc.). Their feedback was also used to improve the questionnaire items and led to confirmation of their face and content validity.


In addition, the grantee and teacher questionnaires include several measures at the end of the respective instruments to allow respondents to comment on the data collection process and content. This on-going feedback will continue to be used to improve both the data collection process and instruments over time.


During the two years between April 2016 and April 2018, 66 grantees, 160 post-PD teachers, and 66 post-MWEE teachers provided suggestions for improving the questionnaires. They were asked three closed-ended questions about questionnaire quality and length (Table 2) and one open-ended question, “How can this questionnaire be improved?” (Tables 3, 4, and 5). In general, grantees were satisfied with the quality of the questionnaire but had recommendations for improving the wording, and many teachers felt the post-PD questionnaire was long.


Based in part on this feedback, NOAA B-WET staff members reviewed each item in the three questionnaires, made wording changes where needed, and deleted measures considered to be less important or redundant.


Table 2: Closed-ended Feedback on Questionnaires

| This questionnaire was ... | Grantees (n=138) | Post-PD Teachers (n=451) | Post-MWEE Teachers (n=118) |
|---|---|---|---|
| Difficult to complete=1, Easy to complete=7 | mean=5.23, SD=1.46 | mean=5.35, SD=1.54 | mean=5.79, SD=1.46 |
| Not informative=1, Informative=7 | mean=5.33, SD=1.50 | NA | NA |
| Long=1, Short=7 | mean=3.13, SD=1.42 | mean=2.74, SD=1.48 | mean=3.33, SD=1.50 |


Table 3: Open-ended Feedback on Improving Grantee Questionnaire (N=66)

| Type of Comment | n¹ | Example | NOAA B-WET Response |
|---|---|---|---|
| Reduce length | 19 | Shorten it up, but the tool was effective. | Reviewed to eliminate unnecessary questions. |
| Change some question formats | 9 | Perhaps have more open-ended questions. | Although open-ended questions are informative, they are also time-consuming to complete; open-ended questions are therefore minimized to keep the survey shorter. |
| Reduce redundancy | 8 | It is too redundant. Many questions used for Teacher PD are the same ones used for the student experience. Why can't these be collapsed so it’s not so long? | Similar questions are asked about different audiences within a single survey. Reviewed to reduce redundancy, but unable to ask questions in parallel because not all respondents complete all sections. |
| Provide questionnaire preview | 7 | A document listing the information that will be asked for in the questionnaire would be helpful, so that I can have it all ready to go before I begin. | A link to a full copy of the questionnaire is emailed to grantees along with the survey link so that they can research their responses before completing the survey online. |
| Make more relevant | 7 | Our program was so unique that some of the questions could not reflect the successes and learning that our organization experienced. | The national evaluation is designed to be “one size fits all”; project-specific evaluation is a responsibility of the grantee. |
| Clarify terms | 6 | I was a little confused about the section of questions pertaining to students with regards to our staff working with them and teachers working with them. | Each item was reviewed and edited to improve clarity. |
| Allow for review of survey responses | 6 | I think the questionnaire needs to be more user friendly. I needed to go back to the beginning and check my answers and was not able to do so. I jumped ahead and skipped some sections and I am not able to return beyond the evaluation section. | Revisions to previously completed sections are not always possible due to survey logic. Introductory text on the survey has been updated to clarify that respondents will not be able to revise completed sections and to direct them to a Word version of the survey so they can consider their responses offline. |
| Add new questions | 5 | This questionnaire could gather data on technology used in B-WET projects. | To keep the surveys a reasonable length, survey items are focused on those that directly inform the evaluation questions. |
| Allow multiple respondents | 3 | I rated this questionnaire as somewhat difficult to complete because it required input from two individuals: the PI who is more knowledgeable of the financial aspects, and the education coordinator who worked closely with the teachers and students. | A link to a full copy of the questionnaire is emailed to grantees along with the survey link so that they can research their responses and collect needed information before completing the survey online. |
| Clarify purpose of evaluation | 2 | It would be nice for those of us funded by BWET to have an information session at the beginning of the grant (WebEx or similar) where you could talk more about the larger goals and evaluation and what you hope that we will accomplish with our teachers and students. | An informational evaluation orientation webinar is provided at the beginning of each grant cycle (annually) and is archived online for future reference. |
| Use alternate data source | 1 | If there were a way for the national evaluation to be pulled from local grant reports, it would not only be easier for grantees but would likely provide better feedback. | Regional grant reports are monitored and do provide project information and outputs; however, these reports are not anonymous, are not easily aggregated, and do not provide feedback directly from teacher participants. |

¹ Some respondents provided more than one comment.

Table 4: Open-ended Feedback on Improving Teacher Post-PD Questionnaire (N=160)

| Type of Comment | n¹ | Example | NOAA B-WET Response |
|---|---|---|---|
| Reduce length | 71 | It needs to be shorter. Letting me know up front it would take 30 minutes to complete was nice but I did postpone taking the survey for way too long and almost backed out several times without finishing it due to time. | Questions deemed not essential have been deleted; however, the questionnaire remains designed to take 30 minutes to complete so that the evaluation system’s questions can be answered. |
| Improve timing | 24 | It's been a while since taking the workshop so my answers might be different than had I filled this out within a week or two of attending. | The survey is sent within a month of the PD end date whenever possible. Reminders are sent to grantees monthly to ensure accurate information about contacts and PD end dates is in the system. Grantees may also contact the national coordinator to schedule the survey to be sent on a specific day and time. |
| Clarify terms | 21 | More definitions of acronyms. | Commonly used acronyms are defined at the beginning of the survey and use of other acronyms is minimized. Each item was reviewed and edited to improve clarity. |
| Change question formats | 17 | The pre and post question were a little difficult, I would change it to a scale of 1-10. | Questions were created and tested for validity and reliability by an evaluation consultant. |
| Make more relevant | 17 | Tailor it to specific PD I was involved in. | The national evaluation is designed to be “one size fits all”; project-specific evaluation is a responsibility of the grantee. |
| Reduce redundancy | 13 | Many of the questions seemed too similar. | The questionnaire was reviewed to reduce redundancy; however, some redundancy is intentionally built into the questionnaire to increase the validity of responses. |
| Make mobile friendly | 4 | Make it easier to use on a mobile device. | Although Qualtrics offers smartphone-friendly questionnaires, this survey is not suited for a small screen. |
| Add new questions | 3 | Ask us what we gained from the experience, and how we plan to change our lives because of it. Also, you didn't ask anything about the art component. | The national evaluation focuses on only the common elements of all projects, so the questions will not necessarily cover all aspects of each project. To keep the surveys a reasonable length, survey items are focused on those that directly inform the evaluation questions. |
| Provide incentive | 1 | Cash Money always helps out. The Research group Mathematica pays about $20 bucks for us to accurately answer a questionnaire this length. | Grantees are encouraged to include incentives for participation in the evaluation data collection as part of their grant project. |

¹ Some respondents provided more than one comment.



Table 5: Open-ended Feedback on Improving Teacher Post-MWEE Questionnaire (N=66)

| Type of Comment | n¹ | Example | NOAA B-WET Response |
|---|---|---|---|
| Reduce length | 36 | Make it shorter. I am a busy teacher. | Questions deemed not essential have been deleted. |
| Change question format | 8 | No fill in answers | Questions were created and tested for validity and reliability by an evaluation consultant. |
| Make more relevant | 7 | Many of the questions didn't apply to my student-led project based learning which is different depending on the student. | The national evaluation is designed to be “one size fits all”; project-specific evaluation is a responsibility of the grantee. |
| Reduce redundancy | 6 | It seems you ask the same questions multiple times | The questionnaire was reviewed to reduce redundancy; however, some redundancy is intentionally built into the questionnaire to increase the validity of responses. |
| Clarify terms | 4 | I am new to teaching science so was not too familiar with some of the terminology and agencies listed. | Commonly used acronyms are defined at the beginning of the survey and use of other acronyms is minimized. Each item was reviewed and edited to improve clarity. |
| Improve timing | 3 | Making it closer to the time that I did the B-WET initiative. | The survey is a follow-up survey distributed at the end of the school year to capture all teachers’ work. |
| Change data collection | 1 | 10 questions given to my students | Student data collection is not approved as part of this information collection. The B-WET program provides resources for project evaluation with students on the national program website. |
| Make mobile friendly | 1 | Have a version especially for mobile devices. I had to scroll back and forth a little bit to read the full questions on my phone. This isn't a major problem though. | Although Qualtrics offers smartphone-friendly questionnaires, this survey is not suited for a small screen. |
| Provide incentive | 1 | Send Cash Money (incentive) | Grantees are encouraged to include incentives for participation in the evaluation data collection as part of their grant project. |

¹ Some respondents provided more than one comment.




9. Explain any decisions to provide payments or gifts to respondents, other than remuneration of contractors or grantees.


NOAA B-WET encourages grantees to ask teachers to complete the surveys as part of their professional development responsibilities. For example, if the grantees provide stipends to their professional development teachers, they could include a requirement that teachers complete the questionnaire to receive the payment.


10. Describe any assurance of confidentiality provided to respondents and the basis for assurance in statute, regulation, or agency policy.


An assurance of confidentiality is not provided to respondents. B-WET grantees and teachers who respond to the questionnaires, however, remain anonymous to B-WET and NOAA.


Anonymity is guaranteed in the following ways:

  • Neither B-WET grantees nor teacher respondents are asked to provide information that can identify them as individuals as part of the questionnaire.

  • Information needed to link data, that is, (1) award numbers that link data provided by grantees with the teachers participating in their professional development and (2) teacher-generated codes that link responses to the initial and subsequent questionnaires, is not associated with any of the other data respondents provide (see the sketch after this list).

  • Email addresses, used to (1) invite prospective participants to participate in the study with a link to the questionnaire and (2) track response rates and prompt non-respondents, are not associated with any of the data provided by respondents.

  • Results are only presented in aggregate form (across all grantees or teacher respondents), not by individual.
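As an illustration of how teacher-generated codes link questionnaire waves without identifying information, consider the following minimal sketch; the codes and column names are hypothetical.

```python
import pandas as pd

# Hypothetical wave files: the self-generated code is the only key shared
# across waves; email addresses live in a separate invitation list and are
# never joined to these tables.
post_pd = pd.DataFrame({"code": ["KT04", "RW11"], "pd_confidence": [5, 6]})
post_mwee = pd.DataFrame({"code": ["KT04", "RW11"], "implemented_mwee": [True, False]})

# Link initial and follow-up responses for longitudinal analysis
linked = post_pd.merge(post_mwee, on="code", how="inner")
print(linked)
```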


11. Provide additional justification for any questions of a sensitive nature, such as sexual behavior and attitudes, religious beliefs, and other matters that are commonly considered private.


No questions of a sensitive nature are asked.


12. Provide an estimate in hours of the burden of the collection of information.

Table 6 provides estimates of the time and cost burden for the proposed information collection. Burden has been adjusted as a result of both a reduced estimate of the number of possible respondents annually and response rates observed in recent years of data collection. For this submission, the numbers of possible respondents indicated in the table are estimated from the annualized number (3-year average) of grantee and teacher participants in fiscal years 2015, 2016, and 2017, as reported by grantees. Previously, this estimate was based on the highest number of participants in B-WET’s 2012-2014 fiscal years (Table 7 in section A.15). Future numbers of respondents will vary based on annual program funding and the resources grantees are able to leverage. Participants who do not respond to the initial Grantee, Post-PD, and Post-MWEE questionnaires (“nonrespondents”) are asked to complete significantly abbreviated questionnaires. Response rates used in the burden calculations are projected from actual response rates obtained April 2016 to April 2018 (Table 8 in section B.1); a worked sketch of the burden arithmetic follows the notes to Table 6.

Table 6: Estimate of Annual Burden Hours for Information Collection

| Informant | Number of possible respondents annuallyᵃ | Response frequency | Expected number of responses | Average time per response (hours) | Total respondent time (hours) | Estimated hourly wage (dollars) | Estimated labor cost burden to respondents (dollars) |
|---|---|---|---|---|---|---|---|
| Grantees | 115 | 1 | 86ᵇ | 1.0ᵇ | 86 | 45.80ⁱ | $3,893 |
| Grantees nonresponse | | 1 | 15ᶜ | 0.17ᶜ | 3 | 45.80ⁱ | $137 |
| Post-PD teachers | 2,507 | 1 | 1,003ᵈ | 0.5ᵈ | 502 | 30.78ʲ | $15,452 |
| Post-PD teachers nonresponse | | 1 | 376ᵉ | 0.1ᵉ | 38 | 30.78ʲ | $1,170 |
| Post-MWEE teachersᶠ | 2,507 | 1 | 752ᵍ | 0.5ᵍ | 376 | 30.78ʲ | $11,573 |
| Post-MWEE teachers nonresponse | | 1 | 351ʰ | 0.1ʰ | 35 | 30.78ʲ | $1,077 |
| TOTALS | 5,129 | | 2,583 | | 1,040 | | $33,302 |

ᵃ Based on the average of actual participation during three fiscal years: FY15 = 107 grants, 1,420 teachers; FY16 = 120 grants, 3,600 teachers; FY17 = 117 grants, 2,500 teachers.

ᵇ Assumes a maximum 75% response rate and 1-hour completion time (actual response rate = 71%; actual average completion time = 61 minutes).

ᶜ Predicts a 50% response rate and a 10-minute completion time (new questionnaire).

ᵈ Assumes a maximum 40% response rate and half-hour completion time (actual response rate = 39%; actual average completion time = 46 minutes).

ᵉ Assumes a maximum 25% response rate and 5-minute completion time (actual response rate = 22%; actual average completion time = 5 minutes).

ᶠ The same teachers are surveyed after their PD (Post-PD Teachers) and again at the end of the following school year (Post-MWEE Teachers).

ᵍ Assumes a 30% response rate and half-hour completion time (actual response rate = 25%; actual average completion time = 27 minutes).

ʰ Assumes a 20% response rate and 5-minute completion time (actual response rate = 15%; actual average completion time = 5 minutes).

ⁱ U.S. Department of Labor, Bureau of Labor Statistics. May 2017. National Occupational Employment and Wage Estimates, United States: Education administrators (mean hourly wage $45.80). https://www.bls.gov/oes/current/oes_nat.htm#25-0000

ʲ Calculated from U.S. Department of Labor, Bureau of Labor Statistics. May 2014. National Occupational Employment and Wage Estimates, United States: Secondary School Teachers (mean hourly wage not available; mean annual salary $62,730), https://www.bls.gov/oes/current/oes_nat.htm#25-0000, and Krantz-Kent, Rachel. 2008. Teachers’ work patterns: when, where, and how much do U.S. teachers work? U.S. Department of Labor, Bureau of Labor Statistics, http://www.bls.gov/opub/mlr/2008/03/art4full.pdf. (“On average for all days of the week, full-time teachers worked 5.6 hours per day”: 5.6 hours × 7 days = 39.2 hours per week × 52 weeks = 2,038 hours per year; $62,730 ÷ 2,038 hours = $30.78 per hour.)
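For reference, the sketch below re-derives the Table 6 arithmetic from the response rates and completion times in notes b-h and the wages in notes i-j. Rounding is approximate; the grantee cost row in the table appears to reflect slightly different rounding than reproduced here.

```python
# Re-computation of Table 6: expected responses, burden hours, and labor cost.
def round_half_up(x: float) -> int:
    return int(x + 0.5)

rows = [
    # (informant, eligible pool, response rate, hours per response, hourly wage)
    ("Grantees",                       115,         0.75, 1.00, 45.80),
    ("Grantees nonresponse",           115 - 86,    0.50, 0.17, 45.80),
    ("Post-PD teachers",               2507,        0.40, 0.50, 30.78),
    ("Post-PD teachers nonresponse",   2507 - 1003, 0.25, 0.10, 30.78),
    ("Post-MWEE teachers",             2507,        0.30, 0.50, 30.78),
    ("Post-MWEE teachers nonresponse", 2507 - 752,  0.20, 0.10, 30.78),
]

# Note j wage derivation: 62,730 / (5.6 h/day * 7 days * 52 weeks) ~= $30.78/hour
total_responses = total_hours = 0
total_cost = 0.0
for name, pool, rate, hours_each, wage in rows:
    responses = round_half_up(pool * rate)        # e.g., 115 * 0.75 -> 86
    hours = round_half_up(responses * hours_each)
    cost = hours * wage
    total_responses += responses
    total_hours += hours
    total_cost += cost
    print(f"{name}: {responses} responses, {hours} h, ${cost:,.0f}")

# Matches the Table 6 totals of 2,583 responses and 1,040 hours
print(f"TOTALS: {total_responses} responses, {total_hours} h, ${total_cost:,.0f}")
```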


13. Provide an estimate of the total annual cost burden to the respondents or record-keepers resulting from the collection (excluding the value of the burden hours in Question 12 above).


There are no direct costs to participants. The only costs are the opportunity costs of respondents’ time required to provide information as explained in Question 12 above. No capital equipment, start-up, or record maintenance requirements are placed on respondents.


14. Provide estimates of annualized cost to the Federal government.


The estimated cost to the federal government of implementing the NOAA B-WET National Evaluation System is based on the government's cost for yearly maintenance of the data collection, periodic study and analysis activities, and personnel cost of government employees involved in oversight and/or analysis. For the data collection activities for which OMB approval is currently being requested, the overall cost to the government is $270,000 over a three year period. This includes:

  • $45,000 total (annualized to $15,000) for contracted activities including preparing and conducting up to two analyses of data with results reports

  • $15,000 annually ($45,000 over three years) for online survey management platform license and support

  • $60,000 annually ($180,000 over three years) for government personnel costs in overseeing the evaluation activity

Total annualized cost: $90,000.
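A quick arithmetic check of the annual and three-year figures above:

```python
# Annual cost components from the bullets above (dollars)
annual = {"contracted analyses": 15_000, "survey platform": 15_000, "personnel": 60_000}
assert sum(annual.values()) == 90_000       # total annualized cost
assert 3 * sum(annual.values()) == 270_000  # overall three-year cost
```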

It is anticipated that basic maintenance and operation of the system will cost $75,000 annually (survey management license and government personnel oversight, as described above). These costs are expected to be sustained over the duration of the evaluation system’s use, with periodic contracted work to analyze data and produce evaluation reports. These estimates are based on the evaluation contractor’s previous experience managing other research and data collection activities of this type and on costs observed during the 2014-2018 period of data collection on this project.


15. Explain the reasons for any program changes or adjustments.


Program Changes

Based on an initial analysis of results collected from the three questionnaires, feedback from respondents via open- and closed-ended items, and a detailed review of the questionnaire items by B-WET staff members, a number of changes are proposed to the measures included in the questionnaires. These changes are identified using “track changes” in the questionnaires included in Attachments 1a-f. Changes were made to shorten the instruments where possible and to obtain more focused feedback on high-priority program questions. These changes do not affect the estimated survey response times.


In addition, due to lower-than-anticipated response rates for grantees over the last three-year period, a nonresponse survey for this audience has been added.


Adjustments

The increased cost to the Federal government is due to the increased cost of the survey platform license, as well as a greater anticipated cost for future analyses.


Based on grantee and teacher response rates in the last 3 years, we have lowered our estimated response rates (Table 8 in section B.1.). In addition, the expected number of participants has decreased (from 8,086 to 5,129), so the overall burden has decreased (Table 6 in section A.12.).


Table 7: Change in Expected Responses

| Informant | Previous Cycle: Possible respondents annuallyᵃ | Previous Cycle: Estimated response rate | Previous Cycle: Expected responses | Current Submission: Possible respondents annuallyᵇ | Current Submission: Estimated response rateᶜ | Current Submission: Expected responses |
|---|---|---|---|---|---|---|
| Grantees | 86 | 90% | 77 | 115 | 75% | 86 |
| Grantees nonresponse | N/A | N/A | N/A | | 50% | 15 |
| Post-PD teachers | 4,000 | 40% | 1,600 | 2,507 | 40% | 1,003 |
| Post-PD teachers nonresponse | | 20% | 480 | | 25% | 376 |
| Post-MWEE teachers | 4,000 | 40% | 1,600 | 2,507 | 30% | 752 |
| Post-MWEE teachers nonresponse | | 20% | 480 | | 20% | 351 |
| TOTALS | 8,086 | | 4,237 | 5,129 | | 2,583 |

ᵃ Estimated based on the highest number of participants in B-WET’s 2012-2014 fiscal years: FY12 = 79 grants, 4,000 teachers; FY13 = 81 grants, 1,900 teachers; FY14 = 86 grants, 2,600 teachers.

ᵇ Estimated based on the average actual number of participants during B-WET’s 2015-2017 fiscal years: FY15 = 107 grants, 1,420 teachers; FY16 = 120 grants, 3,600 teachers; FY17 = 117 grants, 2,500 teachers.

ᶜ See Table 8 in section B.1.


16. For collections whose results will be published, outline the plans for tabulation and publication.


For the primary stakeholders and users of the evaluation system, i.e., the internal NOAA B-WET staff members who administer the B-WET grant program, the data collection system automatically shares results as aggregate descriptive statistics (at the national and regional levels). For each question, the system indicates how many individuals responded and the frequency with which each response option was selected. In the future, as long as funding is available, a contractor may be hired to complete more sophisticated analyses of the data (i.e., inferential statistics) and to produce a traditional research report and/or an article for publication in a peer-reviewed journal, consisting of introduction, methods, results, and discussion/recommendation sections.


Depending on the availability of the necessary funding, regular syntheses of the main findings related to the evaluation system questions (see Question 1 above) will be prepared to meet the needs of different stakeholder groups. These stakeholders include the NOAA Office of Education, which seeks information to improve its education grant programs, and external stakeholders such as B-WET grantees and teacher participants seeking ways to improve their MWEE practices, as well as members of the public. B-WET (potentially with the help of a contractor) will prepare these syntheses, ensuring that they meet the respective stakeholders’ needs in terms of both content and presentation. These syntheses will be made available online.


17. If seeking approval to not display the expiration date for OMB approval of the information collection, explain the reasons why display would be inappropriate.


NA.


18. Explain each exception to the certification statement.


NA.


¹ http://www.noaa.gov/office-education/noaa-education-council/strategic-planning-evaluation


² “The Federal Information Processing Standards (FIPS) Publication Series of the National Institute of Standards and Technology (NIST) is the official series of publications relating to standards and guidelines adopted and promulgated under the provisions of the Federal Information Security Management Act (FISMA) of 2002. Publication 202, ‘Minimum Security Requirements for Federal Information and Information Systems,’ states the basis for sound security practices in any organization. Qualtrics meets all requirements listed in section 3, such as awareness and training, incident response, media protection, and risk assessment.” (Qualtrics, 2013, page 11)

