
SUPPORTING STATEMENT

NOAA BAY WATERSHED EDUCATION AND TRAINING (B-WET) PROGRAM NATIONAL EVALUATION SYSTEM

OMB CONTROL NO. 0648-0658


A. JUSTIFICATION


1. Explain the circumstances that make the collection of information necessary.


This request is for revision and extension of a currently approved information collection. The survey instruments have been revised in several ways to reflect respondent suggestions (see A8 and A15).


The NOAA Office of Education’s Bay Watershed Education and Training (B-WET) program seeks to contribute to NOAA’s mission by immersing participants in Meaningful Watershed Educational Experiences (MWEEs) to create an environmentally literate citizenry with the knowledge, attitudes, and skills needed to protect watersheds and related ocean, coastal, and Great Lakes ecosystems (http://www.oesd.noaa.gov/grants/bwet.html). B-WET currently funds projects in seven regions: California, Chesapeake Bay, Great Lakes, Gulf of Mexico, Hawaii, New England, and the Pacific Northwest.


In keeping with Executive Order 12862, Setting Customer Service Standards, B-WET created a cross-region, internal evaluation system to monitor program implementation and outcomes on an ongoing basis. Based on a review of annual evaluation system results, B-WET has made adjustments to its Federal Funding Opportunities (FFOs) and proposal review activities, such as requesting a plan for participation in the national evaluation. Ongoing data collection enables assessment of the benefits of continuous improvements and thus supports adaptive management of the program. This effort is consistent with the goals and plans outlined in the NOAA Education Strategic Plan 2015-2035; see in particular Objective 5.4 on page 31, under “Organizational Excellence.”


To meet evaluation needs, B-WET’s evaluation system was designed to answer the following questions:

  1. To what extent do regional B-WET programs support grantees in implementing Meaningful Watershed Educational Experiences (MWEEs)?

  2. How are MWEEs implemented by grantees and teachers?

  3. To what extent do B-WET-funded projects increase teachers’ knowledge of watershed science concepts, their confidence in their ability to integrate MWEEs into their teaching practices, and the likelihood that they will implement high quality MWEEs?

  4. To what extent do B-WET-funded projects increase students’ knowledge of watershed concepts, attitudes toward watersheds, inquiry and stewardship skills, and aspirations towards protecting watersheds?


B-WET grantees and teacher-participants in the grantees’ professional development are asked to voluntarily complete online questionnaires to provide evaluation data. One individual from each grantee organization is asked to complete a questionnaire once per year of the award, and the teacher-participants are asked to complete one questionnaire at the close of their professional development (PD) and one after implementing MWEEs with their students (before the end of the following school year). An online survey platform is used to collect and store these data, as well as to automatically generate results in the form of aggregate descriptive statistics.


The proposed evaluation system is maintained by B-WET staff with occasional assistance from an external professional evaluation contractor.


2. Explain how, by whom, how frequently, and for what purpose the information will be used. If the information collected will be disseminated to the public or used to support information that will be disseminated to the public, then explain how the collection complies with all applicable Information Quality Guidelines.


Program Improvement

The evaluation system, influenced by the principles underlying utilization-focused evaluation (Patton, 2008), was specifically designed by a team of researchers from the University of Michigan (UM) and the Institute for Learning Innovation (ILI) to meet users’ information and decision needs. The primary users of the evaluation system are the B-WET staff members who administer the B-WET grant program and its national coordinator. These individuals review the evaluation system’s results annually to determine what changes may be necessary to the grant program to maximize benefits for K-12 teachers and students. The system automatically generates results in the form of aggregate descriptive statistics (at the national and regional levels) to inform decisions about the program at both of these tiers.


B-WET staff members will share findings with secondary users, including staff members in the NOAA Office of Education and other parts of the agency who may choose to use information to improve other NOAA education programs. Evaluation findings will also be used at the national level to report on agency performance measures and respond to other Administration data collection activities, as appropriate. Tertiary users are grant recipients who are provided with access to a synthesis of findings so that they may identify ways to improve their respective environmental science and education programs.


Public Dissemination

Aggregated results from the teacher surveys are continuously available to grantees via the evaluation system’s online platform. In the future, once sufficient national-level data are available, results associated with each of the evaluation system’s questions will be shared through professional conferences, reports, and peer-reviewed journal articles.


The data collection’s design ensures that the Information Quality Guidelines of utility, objectivity, and integrity are met.


Utility:

The evaluation system is designed to answer the questions described in Question 1 above, primarily to meet B-WET’s decision needs. To answer these evaluation questions, the ILI-UM team of researchers first identified relevant constructs (based on B-WET’s logic model and MWEE characteristics). Next, they adopted or adapted items to measure these constructs from existing valid and reliable indices and scales, or developed new ones when existing measures were not available. As a result, only data that serve a necessary purpose in answering the system’s evaluation questions, and thus in meeting B-WET’s information needs, are collected. Please refer to the updated evaluation system metrics matrix illustrating the connections between evaluation questions, constructs, and items included in the instruments (Attachment 1).


Objectivity:

Presentation: The descriptive statistics (e.g., frequencies) that are automatically generated from the online data collected from respondents are accurate, clear, complete, and unbiased. In addition, only aggregate statistics at the national, regional, and organizational levels are reported. Thus, individual sources of data are not disclosed and study participants remain anonymous.


Substance: The items included in the questionnaires, as well as the questionnaires themselves, were developed by the ILI-UM team based on best social science research practices. The majority of items, for example, were adopted or adapted (with the respective researchers’ permission) from existing studies, including an evaluation of NOAA’s Chesapeake Bay Watershed Education and Training Program (Kraemer et al., 2007; Zint et al., 2014) (data gathered under OMB Control Number 0648-0530), an exploratory study of the benefits of Meaningful Watershed Educational Experiences (Zint, 2012), and a range of other relevant science and environmental education studies published in peer-reviewed journals (Zint, 2011). New items were developed only when existing measures for a construct were not available. The face and content validity of all items in the proposed questionnaires were established through reviews by nine internal NOAA B-WET Advisory Group (BWAG) members, three B-WET grantees, three evaluation specialists, and two watershed science researchers. Face validity is established by showing a questionnaire to a group of experts (e.g., researchers, practitioners) and asking for feedback on whether the measures appear to capture the intended constructs; we established face validity through review by B-WET staff, evaluators, grantees, and teachers. For content validity, we consulted these experts and also conducted an extensive literature review (Zint, 2011).


Exploratory factor analyses of pilot-study data, conducted with SPSS and Mplus, revealed that the evaluation system’s scales (Zint, 2012) had good to excellent reliability (Cronbach’s alpha range: .70 to .90) (Nunnally & Bernstein, 1994; Carmines & Zeller, 1979). The respective factors also explained a substantial amount of variance (range: 40% to 90%) (Zint, 2012), providing additional support for the validity of the evaluation system’s measures.
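
For reference, the Cronbach’s alpha statistic underlying these reliability figures is, for a scale of k items:

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right) \]

where \(\sigma^{2}_{Y_i}\) is the variance of item i and \(\sigma^{2}_{X}\) is the variance of the total scale score; by the conventions cited above, values of .70 or higher indicate acceptable reliability.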

Integrity:

The Qualtrics online platform is designed to meet Federal Information Security Management Act (FISMA) security guidelines to ensure that all data provided by respondents are secure.[2]


Once data are downloaded from Qualtrics, NOAA’s Office of Education retains control over the information and safeguards it from improper access, modification, and destruction, consistent with NOAA standards for confidentiality, privacy, and electronic information. See response to Question 10 of this Supporting Statement for more information on confidentiality and privacy. The information collection is designed to yield data that meet all applicable information quality guidelines. Prior to dissemination, the information will be subjected to quality control measures and a pre-dissemination review pursuant to Section 515 of Public Law 106-554.


3. Describe whether, and to what extent, the collection of information involves the use of automated, electronic, mechanical, or other technological techniques or other forms of information technology.


The evaluation system data collection is electronic. Study participants (i.e., B-WET grantees and teachers who participate in their professional development) receive email prompts to complete the online instruments accessed through Qualtrics, an online survey platform. The Qualtrics surveys have built-in “logic” prompts so respondents complete only the items relevant to their experience. Data are stored on Qualtrics’ server, which automatically generates descriptive statistics. The data collection process minimizes costs while remaining sensitive to issues of respondent burden, accuracy, and efficiency. It is assumed that most respondents (i.e., grantees, K-12 teachers) have access to the Internet at work, at home, on a smartphone, or at a public institution such as a local library.
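
As a rough illustration of what such skip logic does (the item names below are hypothetical stand-ins, not items from the actual instruments), a respondent’s earlier answers determine which later items are shown:

```python
# Minimal sketch of survey skip logic; question IDs and branching rules are
# hypothetical, not drawn from the actual B-WET questionnaires.

def visible_items(answers):
    """Return the question IDs a respondent should see, given prior answers."""
    items = ["led_teacher_pd"]                    # asked of everyone
    if answers.get("led_teacher_pd") == "Yes":
        items.append("pd_contact_hours")          # shown only to PD providers
    items.append("worked_with_students")          # asked of everyone
    if answers.get("worked_with_students") == "Yes":
        items.append("number_of_students")
    return items

# Example: a grantee who ran PD but did not work with students directly
print(visible_items({"led_teacher_pd": "Yes", "worked_with_students": "No"}))
# -> ['led_teacher_pd', 'pd_contact_hours', 'worked_with_students']
```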


4. Describe efforts to identify duplication.


In some cases, B-WET-funded projects that have additional funding or partnerships with other parts of NOAA may also be asked to report to other NOAA data collections; however, the B-WET system is the only NOAA data collection focused on Meaningful Watershed Educational Experiences and the specific characteristics of B-WET awards. NOAA education programs and evaluation efforts are coordinated through the NOAA Office of Education and the NOAA Education Council, and data collection is coordinated to ensure that individual survey items are not duplicative.


5. If the collection of information involves small businesses or other small entities, describe the methods used to minimize burden.


The evaluation system asks individuals working for non-profit organizations and some businesses, state and local government employees, and teachers in K-12 schools to participate by completing online questionnaires. The study minimizes burden on respondents because completion of the proposed questionnaires is voluntary. In addition, an iterative item review process was used to eliminate any non-essential questions, thus keeping the questionnaires as streamlined as possible while ensuring that sufficient data are collected to answer the evaluation questions. Should they choose to complete the proposed questionnaires, grantees will be able to complete their questionnaire within 30-60 minutes (depending on the nature of their program) and teachers within 30 minutes. These estimates are based on completion times by respondents since January 2014 (Table 1).



Table 1: Minutes for Questionnaire Completion

Respondent                      Time period                 N (a)     Mean   Std dev
Grantee                         January 2014 - June 2015    59 (b)    62     46.5
Teacher Post-PD                 April 2014 - July 2015      110 (c)   27     22.9
Teacher Post-PD Nonresponse     July 2014 - July 2015       27 (d)    3      2.1
Teacher Post-MWEE               March 2014 - June 2015      108 (e)   18     17.4
Teacher Post-MWEE Nonresponse   June - July 2015            28 (f)    5      7.4

(a) Number of respondents who completed the full questionnaire, minus those who left the questionnaire open for an excessive amount of time before submitting data.

(b) 25 grantees who had the questionnaire open for 18-722 hours before submitting their responses are excluded from this analysis; it is assumed that they accidentally neglected to close the questionnaire.

(c) 15 Post-PD teachers who had the questionnaire open for more than 7 hours before submitting their responses are excluded from this analysis; again, it is assumed that they accidentally neglected to close the questionnaire.

(d) 17 Post-PD nonrespondent teachers indicated that they had not completed the PD and are excluded from the analysis. Six additional teachers did not fully complete the survey and are excluded from this calculation.

(e) 13 Post-MWEE teachers who had the questionnaire open for 4 or more hours before submitting their responses are excluded from this analysis; again, it is assumed that they accidentally neglected to close the questionnaire.

(f) 21 Post-MWEE nonrespondent teachers had not completed a MWEE and were excluded from the analysis. One Post-MWEE teacher had the nonresponse questionnaire open for almost 5 hours before submitting responses and is excluded from this analysis.


6. Describe the consequences to the Federal program or policy activities if the collection is not conducted or is conducted less frequently.


The evaluation system contributes to ensuring that federal funding is used in an effective and efficient manner to educate teachers and students about watershed science and environmental issues. The evaluation system provides B-WET with scientific data to assess the effectiveness of its grant funded programs (i.e., B-WET-funded teacher professional development and student MWEEs). The results of the evaluation system also provide insights into how to improve watershed education programs.


If the evaluation system were not conducted, B-WET would not have the needed data to scientifically assess the effectiveness of its program/MWEEs and/or to scientifically determine how best to improve its program/MWEEs. The continuous data collection of the evaluation system allows on-going monitoring of outcome results and, thus, on-going program/MWEE improvements.


7. Explain any special circumstances that require the collection to be conducted in a manner inconsistent with OMB guidelines.


The collection is being conducted in a manner consistent with OMB guidelines.


8. Provide information on the PRA Federal Register Notice that solicited public comments on the information collection prior to this submission. Summarize the public comments received in response to that notice and describe the actions taken by the agency in response to those comments. Describe the efforts to consult with persons outside the agency to obtain their views on the availability of data, frequency of collection, the clarity of instructions and recordkeeping, disclosure, or reporting format (if any), and on the data elements to be recorded, disclosed, or reported.


A Federal Register Notice published on August 19, 2015 (80 FR 50268) solicited public comments. No comments were received.


During the development of the B-WET evaluation system, the ILI-UM team solicited input from a range of individuals including B-WET grantees, evaluation experts, watershed scientists, and statisticians on all aspects of the proposed evaluation system. Their suggestions informed the design of the proposed study (e.g., type of data collection, frequency and timing of data collection, reporting formats, etc.). Their feedback was also used to improve the questionnaire items and led to confirmation of their face and content validity.


In addition, the grantee and teacher questionnaires include several measures at the end of the respective instruments to allow respondents to comment on the data collection process and content. This on-going feedback will continue to be used to improve both the data collection process and instruments over time.


Over the 18 months between March 2014 and August 2015, 95 grantees, 201 post-PD teachers, and 118 post-MWEE teachers provided feedback on the questionnaires. They were asked three closed-ended questions about questionnaire quality and length (Table 2) and one open-ended question, “How can this questionnaire be improved?” (Tables 3, 4, and 5). In general, grantees were satisfied with the quality of the questionnaire but had recommendations for improving the wording, and many teachers felt the post-PD questionnaire was long.


Based in part on this feedback, NOAA B-WET staff members reviewed each item in the three questionnaires, made wording changes where needed, and deleted measures considered to be less important or redundant.


Table 2: Closed-ended Feedback on Questionnaires

This questionnaire was ...          Grantees (n=95)     Post-PD Teachers (n=201)   Post-MWEE Teachers (n=118)
Difficult=1, Easy=7                 mean=5.6, SD=1.5    mean=5.4, SD=1.9           mean=6.0, SD=1.3
Not informative=1, Informative=7    mean=5.3, SD=1.5    mean=5.0, SD=1.8           mean=5.4, SD=1.4
Long=1, Short=7                     mean=3.3, SD=1.3    mean=2.9, SD=1.7           mean=3.9, SD=1.6



Table 3: Open-ended Feedback on Grantee Questionnaire (N=43)

  • Satisfied with questionnaire as is (n=20)
    Example: “I actually enjoyed completing it. I don't have any suggestions for improvement. It was neither too long nor too short, and the questions were thought-provoking.”
    NOAA B-WET Response: None

  • Questionnaire was too long (n=7)
    Example: “Shorter, took a long time to complete”
    NOAA B-WET Response: Reviewed to eliminate unnecessary questions

  • Reduce redundancy (n=5)
    Example: “Where questions are repeated, combine them. For example, the same questions were asked as to the MWEEs for the PD participants as were asked about our organization. It would have been easier to answer these questions in parallel.”
    NOAA B-WET Response: Reviewed to reduce redundancy, but unable to ask questions in parallel because not all respondents complete all sections

  • Clarify definition of terms (n=3)
    Example: “Our interpretation of watersheds extends to the estuary and coastal wetlands. Estuaries were not included in questions, although this may be implied”
    NOAA B-WET Response: Clarified terms and definitions, including the definition of “watershed”

  • Questionnaire was informative (n=2)
    Example: “I thought the questionnaire was informative and gave me ideas of ways to improve our education program for the coming year.”
    NOAA B-WET Response: None

  • Allow printing copy of responses (n=2)
    Example: “A printable copy for our records would be nice.”
    NOAA B-WET Response: When the questionnaire closes, the respondent has the opportunity to download a PDF of their responses

  • Allow for more open-ended responses (n=2)
    Example: “Provide some open-ended response questions for responders to discuss the three things they did well and three things they wish they had known at the start”
    NOAA B-WET Response: Although open-ended questions are informative, they are also time-consuming to complete. In an effort not to lengthen the questionnaire, we added only one open-ended question

  • Allow multiple respondents (n=1)
    Example: “Allow multiple computers to access the questionnaire from a single organization”
    NOAA B-WET Response: Closing and reopening the same questionnaire requires a cookie saved on a single computer, so completing the questionnaire from multiple computers is not possible

  • Provide preview (n=1)
    Example: “Let me know what statistics are needed ahead of time so I don't need to stop and pull documents to fill in % or numbers, etc.”
    NOAA B-WET Response: A link to a Word copy of the questionnaire is emailed to grantees along with the survey link so that they can research their responses before completing the survey online



Table 4: Open-ended Feedback on Teacher Post-PD Questionnaire (N=90)

  • Too long (n=35)
    Example: “Fewer categories so it only takes 15 minutes to complete. It was fine during the summer when I am off, but during the school year this would have put me over the edge.”
    NOAA B-WET Response: Questions deemed not essential have been deleted; however, the questionnaire remains designed to take 30 minutes to complete so that the evaluation system’s questions can be answered

  • Problems with formatting on computer screen (n=14)
    Example: “I could not access the future questions on the right hand side when answering before, after, future...”
    NOAA B-WET Response: The matrix was split into two questions to eliminate the need to scroll back and forth

  • Timing of survey not appropriate (n=9)
    Example: “Provide it sooner after course completion so the impact of the course is fresher in my memory.”
    NOAA B-WET Response: The national coordinator sends monthly reminders to grantees so they have an opportunity to adjust the PD end date if it has changed

  • Satisfied with questionnaire as is (n=8)
    Example: “It did a reasonable job gathering info. for quantitative and qualitative assessment. It also provided opportunities for our thoughts.”
    NOAA B-WET Response: None

  • Too general (n=7)
    Example: “[Make it] more specific to the project being evaluated.”
    NOAA B-WET Response: The national evaluation is designed to be “one size fits all”; project-specific evaluation is a responsibility of the grantee

  • Clarify terms (n=4)
    Example: “Delineate better between the actual professional development and the student field experience, as I did specific professional development and my students had a day of curricular field experiences in which I accompanied them.”
    NOAA B-WET Response: B-WET staff reviewed each question and clarified terms, such as the definition of professional development

  • Provide smartphone version (n=3)
    Example: “Make it smart phone friendly.”
    NOAA B-WET Response: Although Qualtrics offers smartphone-friendly questionnaires, this survey is not suited for a small screen

  • Reduce redundancy (n=3)
    Example: “So repetitive with questions that differed little from others.”
    NOAA B-WET Response: B-WET staff reviewed each question and eliminated any deemed unnecessary

  • Allow for more open-ended responses (n=2)
    Example: “Give room for comments on each page in case there is a specific point that needs to be addressed.”
    NOAA B-WET Response: Comment boxes were not added, to keep the questionnaire as streamlined as possible

  • Address baseline on outcomes (n=2)
    Example: “When I was asked if I am better able to define a watershed, I would be tempted to say strongly disagree because I was already confident in what a watershed was. However, this may give the false impression the workshop did not address the definition of a watershed, so I put strongly agree instead. A 'before' and 'after' format would more accurately assess how the workshop improved my knowledge of the questions being asked.”
    NOAA B-WET Response: The watershed literacy questions have been changed to a retrospective-pre/post format

  • Change response choices (n=2)
    Example: “Add drop down menus for the NOAA resources.”
    NOAA B-WET Response: Examples of NOAA resources have been added

  • Add background music (n=1)
    Example: “Play watershed themed songs in the background while it is running and then offer a download of the music at the end for use in the classroom.”
    NOAA B-WET Response: Not currently a feature offered by Qualtrics


Table 5: Open-ended Feedback on Teacher Post-MWEE Questionnaire (N=34)

  • Satisfied with questionnaire as is (n=10)
    Example: “It was okay the way it is.”
    NOAA B-WET Response: None

  • Too long (n=9)
    Example: “Shorter is always better.”
    NOAA B-WET Response: Questions deemed not essential have been deleted; however, the questionnaire remains designed to take 30 minutes to complete so that the evaluation system’s questions can be answered

  • Reformat or clarify questions (n=7)
    Example: “Have examples such as pictures and other forms of examples such as forestry, plants, animals.”
    NOAA B-WET Response: Each item was reviewed and edited to improve clarity

  • Need to know more about the purpose (n=3)
    Example: “It depends what your overall and side purposes are.”
    NOAA B-WET Response: Introductory text explains the general purpose of the evaluation

  • Reduce redundancy (n=3)
    Example: “It seems like some of the questions were repeated.”
    NOAA B-WET Response: Questions were examined and those deemed nonessential were eliminated

  • Share results with grantee (n=1)
    Example: “The one suggestion that I would make would be incorporating the [project staff] into the questions that you had asked throughout the questionnaire. After all, they are the ones in the end overseeing the entire program and holding the participants accountable”
    NOAA B-WET Response: Aggregate results are shared with the grantees

  • Timing issue (n=1)
    Example: “Much of this work was completed by December. I'm taking this survey at the end of March. I think it would have been more helpful to have this available closer to when the curriculum was instructed.”
    NOAA B-WET Response: The survey is distributed at the end of the school year to capture all teachers’ work



9. Explain any decisions to provide payments or gifts to respondents, other than remuneration of contractors or grantees.


Incentives, in the form of financial compensation or material gifts, are known to increase response rates (Dillman et al., 2009; James & Bolstein, 1990). Because NOAA is a federal agency, however, it cannot offer such an incentive to grantees. Therefore, NOAA B-WET encourages grantees to ask teachers to complete the surveys as part of their professional development responsibilities. For example, if the grantees provide stipends to their professional development teachers, they could include a requirement that teachers complete the questionnaire to receive the payment.


10. Describe any assurance of confidentiality provided to respondents and the basis for assurance in statute, regulation, or agency policy.


An assurance of confidentiality is not provided to respondents. B-WET grantees and teachers who respond to the questionnaires, however, remain anonymous to B-WET and NOAA.


Anonymity is guaranteed in the following ways:

  • Neither B-WET grantees nor teacher respondents are asked to provide information that can identify them as individuals as part of the questionnaire.

  • Information needed to link data, that is, (1) award numbers that link data provided by grantees with the teachers participating in their professional development and (2) teacher-generated codes that link teachers’ responses across their initial and subsequent questionnaires, is not associated with any of the other data respondents provide (see the illustrative sketch following this list).

  • Email addresses, used to (1) invite prospective participants to participate in the study with a link to the questionnaire and (2) track response rates and prompt non-respondents, are not associated with any of the data provided by respondents.

  • Results are only presented in aggregate form (across all grantees or teacher respondents), not by individual.
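
To make the linking point concrete, the sketch below (all field names, codes, and values are hypothetical, not the evaluation system’s actual schema) shows how a teacher’s two questionnaires can be joined on a self-generated code while email addresses remain in a separate list that is never joined to the answers:

```python
# Minimal sketch of the separation described above; every field name and
# value here is hypothetical.

# Substantive answers carry only an award number and a teacher-generated code.
post_pd = [
    {"code": "BL04", "award": "NA15XXX", "confidence": 6},
    {"code": "RK11", "award": "NA15XXX", "confidence": 4},
]
post_mwee = [
    {"code": "BL04", "award": "NA15XXX", "stewardship": 7},
]

# Email addresses live only in a separate invitation/reminder list.
invitations = ["teacher_a@example.org", "teacher_b@example.org"]

def link_pre_post(pre, post):
    """Join initial and follow-up responses on the teacher-generated code alone."""
    post_by_code = {row["code"]: row for row in post}
    return [{**row, **post_by_code[row["code"]]}
            for row in pre if row["code"] in post_by_code]

print(link_pre_post(post_pd, post_mwee))
# -> [{'code': 'BL04', 'award': 'NA15XXX', 'confidence': 6, 'stewardship': 7}]
```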


11. Provide additional justification for any questions of a sensitive nature, such as sexual behavior and attitudes, religious beliefs, and other matters that are commonly considered private.


No questions of a sensitive nature are asked.


12. Provide an estimate in hours of the burden of the collection of information.


Table 6 provides estimates of the time and cost burden for the proposed information collection. The numbers of possible respondents indicated in the table are estimated from the highest numbers of participants in B-WET’s 2012-2014 fiscal years. Future numbers of respondents will vary based on annual program funding and the resources grantees are able to leverage. Teachers who do not respond to the initial Post-PD and Post-MWEE questionnaires (“nonrespondents”) are asked to complete significantly abbreviated questionnaires. Response rates used in the burden calculations are projected from the actual response rates obtained from January 2014 to July 2015 (Table 7).


Table 6: Estimate of Annual Burden Hours for Information Collection

Informant                          Respondents (a)   Freq.   Responses    Hrs/resp.   Total hrs   Wage ($/hr)   Cost ($)
Grantees                           86                1       77 (b)       1.0 (b)     77          43.23 (h)     3,329
Post-PD teachers                   4,000             1       1,600 (c)    0.5 (c)     800         29.04 (i)     23,232
Post-PD teachers, nonresponse      --                1       480 (d)      0.1 (d)     48          29.04 (i)     1,394
Post-MWEE teachers (e)             4,000             1       1,600 (f)    0.5 (f)     800         29.04 (i)     23,232
Post-MWEE teachers, nonresponse    --                1       480 (g)      0.1 (g)     48          29.04 (i)     1,394
TOTALS                             8,086                     4,237                    1,773                     52,581

(Respondents = number of possible respondents annually; Freq. = response frequency; Responses = expected number of responses; Hrs/resp. = average time per response in hours; Total hrs = total respondent time in hours; Cost = estimated labor cost burden to respondents in dollars.)

(a) FY12 = 79 grants, 4,000 teachers; FY13 = 81 grants, 1,900 teachers; FY14 = 86 grants, 2,600 teachers.

(b) Assumes a maximum 90% response rate and a 1-hour completion time (actual pilot response rate = 88%; actual pilot average completion time = 62 minutes).

(c) Assumes a maximum 40% response rate and a half-hour completion time (actual pilot response rate = 32%; actual pilot average completion time = 27 minutes).

(d) Assumes a maximum 20% response rate for Post-PD nonrespondents to calculate maximum possible burden hours (actual pilot response rate = 16%; actual pilot average completion time = 3 minutes).

(e) The same teachers are surveyed after their PD (Post-PD Teachers) and again at the end of the following school year (Post-MWEE Teachers).

(f) Assumes a 40% response rate and a half-hour completion time (actual pilot response rate = 33%; actual pilot average completion time = 18 minutes).

(g) Assumes a 20% response rate for Post-MWEE nonrespondents to calculate maximum possible burden hours (actual pilot response rate = 15%; actual pilot average completion time = 5 minutes).

(h) U.S. Department of Labor, Bureau of Labor Statistics. May 2014. National Occupational Employment and Wage Estimates, United States: Education Administrators (mean hourly wage $43.23). http://www.bls.gov/oes/current/oes_nat.htm#25-0000

(i) Calculated from U.S. Department of Labor, Bureau of Labor Statistics. May 2014. National Occupational Employment and Wage Estimates, United States: Secondary School Teachers (mean hourly wage not available; mean annual salary $59,180), http://www.bls.gov/oes/current/oes_nat.htm#25-0000, and Krantz-Kent, Rachel. 2008. Teachers’ work patterns: when, where, and how much do U.S. teachers work? U.S. Department of Labor, Bureau of Labor Statistics, http://www.bls.gov/opub/mlr/2008/03/art4full.pdf. (“On average for all days of the week, full-time teachers worked 5.6 hours per day,” i.e., 39.2 hours per week, or 2,038 hours per year at 52 weeks per year; $59,180 / 2,038 hours = $29.04 per hour.)
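
The figures in Table 6 follow mechanically from these footnoted assumptions. The short Python sketch below is illustrative only; it simply re-runs the stated arithmetic and reproduces the table’s totals:

```python
# Illustrative only: re-runs the Table 6 arithmetic from footnotes (b)-(g).
# Expected responses = possible respondents x assumed response rate; the
# abbreviated nonresponse survey goes to the remaining teachers at an
# assumed 20% rate; cost = responses x hours per response x hourly wage.

GRANTEES, TEACHERS = 86, 4_000
GRANTEE_WAGE, TEACHER_WAGE = 43.23, 29.04

rows = [("Grantees", round(GRANTEES * 0.90), 1.0, GRANTEE_WAGE)]
for survey in ("Post-PD", "Post-MWEE"):
    full = round(TEACHERS * 0.40)              # 1,600 full questionnaires
    brief = round((TEACHERS - full) * 0.20)    # 480 abbreviated follow-ups
    rows.append((f"{survey} teachers", full, 0.5, TEACHER_WAGE))
    rows.append((f"{survey} nonresponse", brief, 0.1, TEACHER_WAGE))

total_responses = sum(n for _, n, _, _ in rows)
total_hours = sum(n * h for _, n, h, _ in rows)
total_cost = sum(n * h * w for _, n, h, w in rows)
print(f"{total_responses:,} responses, {total_hours:,.0f} hours, ${total_cost:,.0f}")
# -> 4,237 responses, 1,773 hours, $52,581 (matching Table 6)
```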


13. Provide an estimate of the total annual cost burden to the respondents or record-keepers resulting from the collection (excluding the value of the burden hours in Question 12 above).


There are no direct costs to participants. The only costs are the opportunity costs of respondents’ time required to provide information, as explained in Question 12 above. No capital equipment, start-up, or record maintenance requirements are placed on respondents.


14. Provide estimates of annualized cost to the Federal government.


The estimated cost to the federal government of implementing the NOAA B-WET National Evaluation System is based on the government's contracted cost for yearly maintenance of the data collection, periodic study and analysis activities, and personnel cost of government employees involved in oversight and/or analysis. For the data collection activities for which OMB approval is currently being requested, the overall cost to the government is $250,000 over a three year period. This includes:

  • $40,000 total (annualized to $13,333) for contracted activities including preparing and conducting up to two analyses of data with results reports

  • $10,000 annually ($30,000 over three years) for online survey management platform license and support

  • $60,000 annually ($180,000 over three years) for government personnel costs in overseeing the evaluation activity

Total annualized cost: $83,333.

It is anticipated that basic maintenance and operation of the system will be $70,000 annually (survey management license and government personnel oversight, as described above). It is expected that these costs would need to be sustained over the duration of the use of the evaluation system, with periodic contracted work to analyze data and produce evaluation reports. These estimates are based on the evaluation contractor's previous experience managing other research and data collection activities of this type and costs observed during the initial 2014-2015 period of data collection on this project.


15. Explain the reasons for any program changes or adjustments.


Program Changes


Based on a review of initial results collected from the three questionnaires, feedback from respondents via open- and closed-ended items, and a multi-month, detailed review of all of the questionnaire items by B-WET staff members, a number of changes are proposed to the measures included in the questionnaires. These changes are identified using “track changes” in the questionnaires included in Attachments 2a-e and include:

  • Shortened questions to improve ease of response. For example, the closed-ended grantee question, “If offered, how likely is it that you will make use of each of the following to help you implement your B-WET-funded projects? (Extremely unlikely=1 to Extremely likely=7),” with 16 items, was changed to a simpler, open-ended question: “How could regional B-WET programs better support your implementation of MWEEs?”

  • Adjusted the format of measures to ensure accurate responses. For example, on the teacher post-PD questionnaire, teachers were instructed, “For each statement, select one response for BEFORE, one response for AFTER, and one response for FUTURE (scroll to the right),” but some teachers did not see the FUTURE questions (they were off their screens) and others complained about having to scroll back and forth. The FUTURE responses were therefore moved to a separate question just below the before/after questions.

  • Clarified terms to ensure consistency in how respondents interpret the questions. For example, the term “watershed” now includes bubble text explaining that it covers ocean, coastal, riverine, estuarine, and Great Lakes watersheds.

  • Eliminated nonessential or redundant questions to reduce the length of the questionnaires. For example, “The health of our local watershed(s) has improved as a result of my organization's B-WET-funded MWEEs (agreement scale)” was eliminated from the grantee questionnaire because grantees are likely to provide this information as part of their project progress reports.
These changes did not cause us to revise our estimated response times.


Adjustments

Although the number of possible respondents has increased by 761, the estimated numbers of responses and burden hours have decreased (by 2,682 responses and 1,746 hours, respectively). Based on teacher response rates over the last three years, we have lowered our estimated response rates.


16. For collections whose results will be published, outline the plans for tabulation and publication.


For the primary stakeholders and users of the proposed evaluation system, i.e., the internal NOAA B-WET staff members who administer the B-WET grant program, the data collection system automatically shares results as aggregate descriptive statistics (at the national and regional levels). For each question, the system indicates how many individuals responded and the frequency with which each response option was selected. In the future, as long as funding is available, a contractor will be hired to complete more sophisticated analyses of the data (i.e., inferential statistics) and to produce a traditional research report and/or an article for publication in a peer-reviewed journal, consisting of introduction, methods, results, and discussion/recommendation sections.
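
As an illustration of the kind of tabulation involved (the export file, column names, and question IDs below are hypothetical), per-question counts and frequencies of this sort can be produced directly from a survey data export:

```python
# Illustrative sketch: tabulate per-question response counts and frequencies,
# nationally and by region. File, column, and question names are hypothetical.
import pandas as pd

df = pd.read_csv("post_pd_teacher_export.csv")   # hypothetical export file

for question in ["mwee_confidence", "watershed_knowledge"]:  # hypothetical IDs
    counts = df[question].value_counts(dropna=True)
    print(f"{question}: {counts.sum()} respondents")
    print((100 * counts / counts.sum()).round(1))  # percent per response option

# The same breakdown at the regional tier:
regional = df.groupby("region")["mwee_confidence"].value_counts(normalize=True)
```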


Depending on the availability of the necessary funding, regular syntheses of the main findings related to the questions the evaluation system was designed to answer (see Question 1 above) will be prepared to meet the needs of different stakeholder groups. These stakeholders include the NOAA Office of Education, which seeks information to improve its education grant programs; external stakeholders such as B-WET grantees and teacher participants seeking ways to improve their MWEE practices; and, at the tertiary level, members of the public. B-WET (potentially with the help of a contractor) will prepare these syntheses, ensuring that they meet the respective stakeholders’ needs in terms of both content and presentation. The syntheses will be made available online.


17. If seeking approval to not display the expiration date for OMB approval of the information collection, explain the reasons why display would be inappropriate.


NA.


18. Explain each exception to the certification statement.


NA.


[2] “The Federal Information Processing Standards (FIPS) Publication Series of the National Institute of Standards and Technology (NIST) is the official series of publications relating to standards and guidelines adopted and promulgated under the provisions of the Federal Information Security Management Act (FISMA) of 2002. Publication 200, ‘Minimum Security Requirements for Federal Information and Information Systems,’ states the basis for sound security practices in any organization. Qualtrics meets all requirements listed in section 3, such as awareness and training, incident response, media protection, and risk assessment.” (Qualtrics, 2013, page 11)


