2024-02-14 Supplementary Statement A


Generic Clearance for the National Science Foundation (NSF) Regional Innovation Engines (RIE) Evaluation and Monitoring Plan



SUPPORTING STATEMENT FOR PAPERWORK REDUCTION SUBMISSION

Generic Clearance for the National Science Foundation (NSF) Regional Innovation Engines (RIE) Evaluation and Monitoring Plan


SUPPLEMENTARY INFORMATION


Title of Collection: Generic Clearance for the National Science Foundation (NSF) Regional Innovation Engines (RIE) Program Evaluation and Monitoring Plan


Type of Request: New information collection



Section A. Justification

This request is for approval of data collection, through web-based surveys and focus group interviews, from NSF RIE awardees, participants, and partner organizations, intended to monitor the progress, outputs, and short-term, intermediate, and long-term outcomes of National Science Foundation (NSF) investments in the Regional Innovation Engines (hereafter referred to as NSF Engines) Program.

The CHIPS and Science Act of 2022 codified the National Science Foundation’s cross-cutting Directorate for Technology, Innovation and Partnerships (TIP), NSF’s first new directorate in more than 30 years, and charged it with the critical mission of advancing U.S. competitiveness through investments that accelerate the development of key technologies and address pressing national, societal, and geostrategic challenges.

The NSF Engines program was authorized in the CHIPS and Science Act of 2022 (Section 10388) to (1) advance multidisciplinary, collaborative, use-inspired, and translational research and technology development in key technology focus areas; (2) address regional, national, societal, or geostrategic challenges; (3) leverage the expertise of multidisciplinary and multi-sector partners, including partners from private industry, nonprofit organizations, and civil society organizations; and (4) support the development of scientific, innovation, entrepreneurial, and STEM educational capacity within the region of the Regional Innovation Engine to grow and sustain regional innovation. The NSF Engines program serves as a flagship funding program of the TIP directorate, with the goal of expanding and accelerating scientific and technological innovation within the United States by catalyzing regional innovation ecosystems throughout every region of our nation.

In January 2024, NSF established 10 inaugural NSF Engine awards across 18 states, uniquely placing science and technology leadership as the central driver for regional economic competitiveness. Detailed information about the 10 inaugural Engine teams, including each Engine’s lead institution, region of service, competitive advantage, and key technology areas, can be found at https://new.nsf.gov/funding/initiatives/regional-innovation-engines/portfolio.

Each Engine is focused on addressing specific aspects of a major national, societal, and/or geostrategic challenge that are of significant interest within the NSF Engine’s defined ‘‘region of service.’’ The NSF Engines program envisions a future in which all sectors of the American population can equitably participate in, and benefit from, advancements in scientific research and development to advance U.S. global competitiveness and leadership. The program’s mission is to establish sustainable regional innovation ecosystems that address pressing regional, national, societal, or geostrategic challenges by advancing use-inspired and translational research and development in key technology focus areas. The programmatic-level goals of NSF Engines are to:

  • Goal 1: Establish self-sustaining innovation ecosystems;

  • Goal 2: Establish nationally-recognized regional ecosystems for key industries;

  • Goal 3: Broaden participation in inclusive innovation ecosystems;

  • Goal 4: Advance technologies relevant to national competitiveness;

  • Goal 5: Catalyze regions with nascent innovation ecosystems;

  • Goal 6: Increase economic growth;

  • Goal 7: Increase job creation.



To achieve these goals, each Engine will carry out an integrated and comprehensive set of activities spanning use-inspired research, translation-to-practice, entrepreneurship, and workforce development to nurture and accelerate regional industries. In addition, each Engine is expected to embody a culture of innovation and have a demonstrated, intense, and meaningful focus on improving diversity throughout its regional science and technology ecosystem.

This data collection, which entails collecting information from NSF Engines awardees, participants, and partner organizations, is in accordance with NSF’s commitment to improving service delivery as well as the agency’s strategic goal to ‘‘advance the capability of the Nation to meet current and future challenges.’’



A.1 Circumstances Requiring the Collection of Data


The total potential funding for each Engine is up to $160 million over 10 years (one of the largest R&D awards made by the agency), and the funding vehicle is a cooperative agreement. Accordingly, effective oversight, continuous performance monitoring, and comprehensive annual evaluation of each Engine through the life of the award are critical to informing programmatic funding decisions, increasing understanding of how regional innovation ecosystems are catalyzed and grown, and assessing the overall success of the program.

Systematic data and information collection will be qualitative, quantitative, and descriptive in nature and will provide a means for managing Program Directors to monitor the progress of an award and ensure that the award is in good standing. These data will also allow assessment of the NSF Engines in terms of the intellectual, technological, societal, commercial, and economic impacts that are core to the NSF merit review criteria. Finally, in compliance with the Evidence Act of 2019, the information collected will be used for internal and external program evaluation and assessment, satisfying Congressional requests, and supporting the Agency’s policymaking and reporting needs.

A.2 Purpose and Use of the Data


The data collection consists of six survey instruments (three intake surveys and three follow-on surveys; see Figure 1) and a set of focus group interview questions.

The six survey instruments are:

  1. Intake: Survey on programmatic activities

  2. Intake: Survey on additional funds received

  3. Intake: Survey on program participants and/or individuals involved in Engines’ activities

  4. Survey on programmatic activities

  5. Survey on program participants and/or individuals involved in Engines’ activities

  6. Survey on partner organizations


The surveys will enable the program to collect baseline measures from awardees, program participants, and partners at the start of the program, as well as information on the progress made by each Engine, to ascertain that inclusive, sustainable regional innovation ecosystems are being developed. The information collected will allow managing Program Directors to ensure that the Engines awarded are in good standing and complying with program requirements. Data collected will also be used to recommend changes to improve and strengthen the Program.

In addition, data pertaining to individual Engines will also be made available to external evaluator(s) for each respective Engine to be used for their own internal analyses, assessments, and validation. 

A description of each survey instrument is provided below, and a summary of the number of survey items by type for each survey instrument can be found in Table 1. Lastly, Table 2 identifies the target population that will respond to each survey instrument. The survey questions presented in this package collect administrative data for fact-finding purposes.

Survey Instrument (1) | Intake: Survey on programmatic activities

This survey allows an Engine to identify the programmatic activities taking place in the Engine, along with each activity’s lead and the lead’s email address. Designated Engine personnel will provide basic information on each NSF Engine programmatic activity (e.g., title of activity, activity lead name and email address, short description of the activity). Each programmatic activity only needs to be entered into this survey once; once entered, it does not need to be re-entered in future years. Each survey response represents one programmatic activity. If an NSF Engine does not start any new programmatic activities within a reporting year, then the NSF Engine does not have to submit any responses to this intake survey.

Survey Instrument (2) | Intake: Survey on additional funds received

This survey allows designated Engine personnel to report additional funds received by each Engine during the reporting year from entities such as state or local government, philanthropic organizations, and other Federal sources of funding. Data collected from this survey include the amount of funds received; type of entity that provided the funds (e.g., venture capital; local or state government; philanthropic organization); and a summary of the terms and conditions under which certain funds (i.e., investments) are made. Each survey response indicates one instance of funding or in-kind contribution that was received by an Engine. If an Engine does not receive any additional funding or in-kind contributions within a reporting year, then the NSF Engine does not have to submit any responses to this intake survey.

Survey Instrument (3) | Intake: Survey on program participants and/or individuals involved in Engines’ activities

This survey allows an Engine to identify individuals who are involved in or participate in any Engine programmatic activities, including hired Engine personnel and those who serve on an Engine leadership team, governance board, or advisory committee. Designated Engine personnel will provide basic information on each participant (e.g., name of individual, email address of individual, which Engine activity the individual is involved in). Upon each successful questionnaire entry, the individual identified in this questionnaire receives an automatic email with a unique link and invitation to complete the survey. Individuals’ information only needs to be entered once for the lifetime of the data collection.

Survey Instrument (4) | Follow on: Survey on programmatic activities

The programmatic activity lead identified in Intake: Survey on programmatic activities will receive an automatic email asking them to complete the programmatic activities survey for the activity in which they serve as lead.

This survey collects information on each Engine activity such as the status of the activity; the milestones associated with the activity; the status of each of the milestones; the technology and adoption readiness level (TRL and ARL) of the activity; the intellectual property that has resulted from the activity; and the partner organizations that are involved in the activity. Within this survey, the point of contact for each partner organization involved in the programmatic activity is identified.

There are three variations of the survey based on the category in which the programmatic activity falls (i.e., R&D and translation, workforce development, or ecosystem building), and information specific to each programmatic category will be collected. For instance, programmatic activities categorized as R&D and translation will have survey questions related to intellectual property (e.g., invention disclosures, patents, licensing agreements, royalties earned) and associated TRLs and ARLs. Correspondingly, leads of activities categorized as workforce development will be asked questions such as who the targeted populations or groups are, the number of individuals who participated in and completed the activity, and whether participants receive any credentials or certifications upon successful completion of the activity. While there are 113 survey items on the survey, respondents will only be asked a subset of these questions based on the programmatic category their activity falls under.

Individual Engines may use the data for internal assessments and to help inform decision making. Data collected from this effort will also be used to monitor and assess the progress made in use-inspired and translational research, workforce development, DEIA, and ecosystem building within and across all Engines.

Activity leads will be asked to review and update their survey responses for any programmatic activities that had an active or on-hold status in the previous reporting year.

Survey Instrument (5) | Follow on: Survey on program participants and/or individuals involved in Engines’ activities

This survey collects demographic information (e.g., gender, race, ethnicity, country of origin, income, education level) for all individuals identified in the Intake: Survey on program participants and/or individuals involved in Engines’ activities. Data collected from individuals will be used to monitor and assess whether the Engine's participants reflect the demographic diversity of the region of service defined by the NSF Engine. In addition, these data can be used by individual Engines to assess whether they are meeting their diversity, equity, inclusion, and accessibility (DEIA) objectives and targets.

There are four variations of the survey based on the roles and responsibilities an individual has within an Engine. Specifically, an individual will be classified as an R&D or translation participant; a workforce development participant; or someone who serves as a programmatic lead, is part of the leadership team, or is a member of the governance board. Someone who is not in one of these three categories will be classified in the catch-all fourth category of all other Engine participants and personnel. Individuals will be asked additional survey questions based on the category in which they fall. For instance, workforce development participants will be asked questions to determine their economic self-sufficiency, employment hope, financial strain, and their perceived barriers to employment. Alternatively, individuals who are part of the Engine leadership team, governance board, or are programmatic leads will be asked questions regarding collaborations and trust within their Engine. While there are 52 survey items in the survey, survey respondents will only be asked a subset of these questions based on the roles and responsibilities they have within an Engine.

Each survey response is from a unique individual who is involved or participates in an Engine activity, initiative, or effort. Individuals who are involved or participate in Engine activities, initiatives, or efforts across reporting years will not have to re-answer demographic survey questions (i.e., items 2 through 25) but will be asked to review and update their answers to role- and responsibility-specific survey questions (i.e., items 26 through 42).

Survey Instrument (6) | Survey on partner organizations

The point of contact of each partner organization identified in the Survey on programmatic activities will receive an automatic email asking them to complete the partner organizations survey for the programmatic activity in which their organization was identified as a partner. Partner organizations will be asked to provide basic information about their organization (e.g., employer identification number, legal name of the organization, type of organization); the monetary or estimated value of in-kind and other resources they contributed to the programmatic activity in which they participated; the other partner organizations within the NSF Engine with which they collaborated; what motivated them to become a partner of the NSF Engine; and other information related to the roles and responsibilities the organization has within the NSF Engine.

Individual Engines may use the data for internal assessments and to help inform decision making. Data collected from this effort will be used to monitor and assess the level of cross-sector partnerships created within and across all Engines.

Rather than completing a new survey each year, partner organizations that are identified for the same programmatic activity across reporting years will be asked to review and update their responses for the current reporting year.

Focus Group Interviews

In addition to the survey instruments, focus groups will be used to collect qualitatively rich discourse and observational information that cannot be gathered through surveys. Focus group participants will include individuals on the Engine leadership team (e.g., the Chief Executive Officer (CEO) and programmatic leads), members of the governance boards or advisory committees, and NSF Engines stakeholders such as program participants and partner and community-based organizations. Data collected from the focus groups will offer valuable insights and enrich the overall comprehensiveness of the NSF Engines program’s data collection efforts.

Insights from the survey instruments along with the informal knowledge from Engines program staff will be used to guide the selection of suitable participants as well as the design of questions to be used in the focus groups. Individuals identified as suitable participants will receive an invitation email to participate in a focus group. Individuals who decide to participate will receive a secondary email confirming the details of the focus group session, and a third email, about a week before the scheduled focus group session, containing guiding questions to prepare them for the focus group discussion. The three main guiding questions that will be used to guide the focus group sessions are:

  • What has been the most important benefit(s) of your NSF Engine to date?

  • Of the efforts that your NSF Engine has launched, which one(s) are you the most proud of or excited about? And why?

  • Is the work of your NSF Engine spurring other projects or programs that were unexpected or unanticipated by your team? These could include activities that other groups may be pursuing as an indirect result of the efforts made by your NSF Engine.

Aside from these high-level guiding questions, the focus group interviews will be unstructured. Each focus group session will begin with participant self-introductions. Subsequently, participants will be grouped in pairs to discuss the guiding questions provided before the session. Trained focus group facilitators will ask follow-up questions as participants collectively build a map of activities and outcomes associated with their NSF Engine’s work.

In the first year of the NSF Engines award, the focus group sessions will focus on how each NSF Engine is establishing and building the foundation of its innovation ecosystem. We intend to repeat this focus group in years 4 and 6 of the award with the aim of building on insights from the first year. During these periods, we anticipate expanding on the lessons learned and formulating new sets of guiding questions to further our understanding of Engine ecosystems and the program's impact.


A.3 Use of Information Technology to Reduce Burden

All components of the collection will utilize electronic forms to minimize data errors and respondent burden. In some cases, Program Directors, NSF staff, and/or NSF authorized representatives may contact respondents with clarifications or follow-up questions for quality assurance and use these conversations to increase the robustness of the data.

To reduce the risk of survey respondents misinterpreting questions and potentially undermining the accuracy of their answers, definitions of key words, phrases, and concepts will be made available to respondents via a tooltip feature. Underlined words, phrases, and concepts will display pop-up definitions when respondents hover over the underlined portion with their cursor. A glossary of defined survey terms is provided in Appendix A. In addition, to simplify the design of the survey instruments and reduce burden, piped text is employed whenever possible to reduce the need for respondents to type the same text again. Piped text from the same survey, or text carried over from other surveys, is italicized.


A.4 Efforts to Identify Duplication

The data collection does not duplicate other efforts undertaken by NSF, other federal agencies, or other data collection agents. 


A.5 Small Business

Not applicable.

A.6 Consequences of Not Collecting the Information

If the information were not collected, NSF would be unable to (1) meet its accountability requirements, (2) assess the degree to which the NSF Engines are meeting their programmatic and congressional goals over time, and (3) document progress and outcomes of the Engines.

Less frequent data collection would also preclude NSF from adequately monitoring, assessing, and documenting the progress of each Engine. This would inhibit NSF from making informed funding decisions and from correcting weaknesses identified in an Engine’s activities in a timely manner. The consequence of less frequent collection would be the lack of an effective way to monitor the investment of resources and time that NSF has committed to the NSF Engines Program.


A.7 Special Circumstances Justifying Inconsistencies with Guidelines in 5 CFR 1320.6

The data collection will comply with 5 CFR 1320.6. First, a valid OMB control number will be displayed at the beginning of the electronic form. Second, as the reporting requirement is necessary for the success of the Program, the NSF Engines Program will communicate clearly, through proposal solicitations, the NSF Engines website, and program announcements, that this information collection is a means of satisfying a condition for receipt of the NSF RIE grant.


A.8 Federal Register Notice and Consultation Outside the Agency

The agency’s notice, as required by 5 CFR 1320.8(d), was published in the Federal Register on October 13, 2023, at 88 FR 71033; the public comment period closed on December 12, 2023 and no public comments were received during this time.



A.9 Payments or Gifts to Respondents

Not applicable.


A.10 Assurance of Confidentiality

Respondents will be informed that any information on specific individuals is maintained in accordance with the Privacy Act of 1974. Every data collection instrument will display both OMB and Privacy Act notices.

Respondents will be told that data collected are available to NSF officials and staff, and authorized contractors and/or grantees, who manage the data and data collection software. Data will be processed according to federal and state privacy statutes. The system will limit access to personally identifiable information to authorized users. Data submitted will be used in accordance with criteria established by NSF for monitoring research and education grants and in response to Public Law 99-383 and 42 USC 1885c.



A.11 Questions of a Sensitive Nature

Information from survey respondents, including name, affiliated organization, and email address, is requested. This information will be used to automate email invitations to follow-up surveys (i.e., the programmatic activities survey, individuals survey, and partner organizations survey); to pipe embedded data from one survey to the next; and in case further clarifications are needed.

In addition, respondents in the individuals survey are asked about the following topics. All questions have an opt-out response option of “I prefer not to answer.”

  • Birth year

  • Race

  • Ethnicity

  • Sex at birth

  • Gender identity

  • Sexual orientation

  • Country of birth

  • Disability status (i.e., difficulty hearing; difficulty seeing; difficulty concentrating, remembering, or making decisions; difficulty walking or climbing stairs; difficulty dressing or bathing; and difficulty doing errands alone)

  • Marital status

  • Citizenship status

  • Country of citizenship

  • Educational attainment

  • Employment status

  • Household income

  • Personal income

  • Children status

  • Fluency in different languages

  • Veteran status

  • Zip code of current address


The demographics of an Engine’s participants are intended to reflect the demographics of the Engine’s region of service. The information collected in the individuals survey will enable both the NSF Engines program and the individual Engine teams to better understand which groups of individuals are participating within an Engine and to assess whether participants are reflective of the larger region of service. In addition, this information will help both the teams and the program monitor and assess their progress toward achieving Goal 3 of the NSF Engines program (broaden participation in inclusive innovation ecosystems). Lastly, this information will enable NSF to assess whether the Engines program is meeting the legislative language regarding broadening participation as specified in the CHIPS and Science Act of 2022. Specifically, individual Engine teams should ‘‘engage in outreach and engagement in the region to broaden participation’’ as well as ‘‘demonstrate a capacity to broaden participation of populations historically underrepresented in STEM.’’

Individual-level data that are collected will be provided only to managing Program Directors, NSF senior management, and supporting staff conducting analyses using the data as authorized by NSF. Any public reporting of data will be in aggregate form, and all personal identifiers will be removed.


A.12 Estimates of Response Burden

A total of 10 NSF Engines were awarded in January 2024.

We anticipate the first NSF Engines programmatic data collection to take place months 9-12 from the award start date, and annually thereafter. Programmatic assessments will be based on the data received for each award year. Intake surveys will be open from months 9-11 and the two follow-on surveys as well as the survey for partner organizations will be open from months 9-12 to give respondents enough time to complete the surveys.

After the first round of data collection is complete, all survey instruments will remain open to allow for on-going reporting from each Engine on an as-needed basis, but the data collected are for the next reporting period. This will enable data to be captured in near real-time, which can help the NSF Engines program as well as each Engine to make more accurate and timely decisions.

To estimate the range of annual burden that will be placed on survey respondents, we assumed a low and a high Engine output scenario (Table 3). The total estimated annual burden per Engine and for the NSF Engines program under the low and high Engine output scenarios can be found in Table 4 and Table 5. Under this estimation, the total annual burden of the NSF Engines survey-based data collection ranges between 18,840 minutes (314 hours) and 51,550 minutes (859 hours).
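The minute-to-hour conversion for the stated range can be checked with a short Python sketch; the figures are those stated above, and the underlying low/high Engine output scenarios (Tables 3-5) are not reproduced here.

```python
# Stated low/high survey burden totals, in minutes.
low_minutes, high_minutes = 18_840, 51_550

low_hours = low_minutes // 60    # 314 hours
high_hours = high_minutes // 60  # 859 hours (51,550 / 60, rounded down)
```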

We anticipate the survey-based data collection annual burden to decrease substantially after year 1 of the NSF Engines award because (1) survey respondents will not be asked to fill out new surveys but rather be asked to review and update the information they provided from the previous reporting year, and (2) the intake survey instruments only need to be submitted for new programmatic activities, individuals, or funding received.

For the focus group sessions, we anticipate that one focus group interview, lasting up to two hours, will be held for each Engine. Each focus group will comprise approximately 10 participants (Table 6). The estimated annual focus group burden per Engine is 1,200 minutes (20 hours) (Table 7). The total estimated annual burden for the NSF Engines program is 12,000 minutes (200 hours).
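The focus group burden arithmetic above can be sketched as follows, using only the figures stated in this section:

```python
# One session of up to two hours per Engine, approximately 10 participants
# per session, across the 10 awarded Engines.
engines = 10
participants_per_engine = 10
session_hours = 2

per_engine_hours = participants_per_engine * session_hours  # 20 hours
per_engine_minutes = per_engine_hours * 60                  # 1,200 minutes
program_hours = engines * per_engine_hours                  # 200 hours
program_minutes = program_hours * 60                        # 12,000 minutes
```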

A.12.1. Number of Respondents, Frequency of Response, and Annual Hour Burden


Collection Component | Number of Respondents | Number of Hours | Total Burden (Hours)
6 surveys | 40-70 respondents per Engine | 10-15 hours per Engine per year | 400-1,050 hours per Engine per year
Focus group interviews | 10 participants per Engine (10 Engines) | 2 hours per session | 200 hours per Engine per year
Total | | | 600-1,250 hours per Engine per year

As shown above, the annual response burden for the collections under this request is in the range of 600-1,250 hours per Engine per year.



A.12.2. Estimates of Annualized Cost to Respondents for the Hour Burdens


To estimate the annualized cost to respondents for the hour burdens per Engine, we calculated the average hourly rates by taking the median usual weekly earnings for different educational attainments from 2022, as reported by the U.S. Bureau of Labor Statistics. We assumed a 40-hour work week and the following educational attainment for each respondent type: designated coordinators, R&D and translation participants, and other Engine participants and personnel hold bachelor’s degrees; programmatic and activity leads, Engine leadership team members, governance board members, and advisory committee members hold professional degrees; workforce development participants have some college but no degree; and points of contact of partner organizations hold master’s degrees.

Respondent Type | Number of Respondents | Total Burden per Respondent (Hours) | Average Hourly Rate | Estimated Annual Cost for All Respondents
Designated coordinators | 2 | 0.3 | $35.80 | $21.48
Programmatic leads | 3 | 0.5 | $52.00 | $85.80
Activity leads | 18 to 30 | 0.45 | $52.00 | $421.20 to $702.00
R&D and translation participants | 30 to 80 | 0.2 | $35.80 | $214.80 to $572.80
Workforce development participants | 20 to 100 | 0.25 | $23.38 | $116.90 to $584.50
Engine leadership team, governance board members, and advisory committee members | 20 to 40 | 0.2 | $52.00 | $208.00 to $416.00
Other Engine participants or personnel | 18 to 28 | 0.17 | $35.80 | $109.55 to $170.41
Points of contact for partner organizations | 18 to 90 | 0.10 | $41.53 | $74.75 to $373.77


As shown above, the estimated annualized cost to respondents per Engine ranges between $1,252.48 and $2,926.76. For the 10 awarded Engines, the total annualized cost to respondents for the NSF Engines program is up to $104,000.
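The per-Engine range can be reproduced by summing the (low, high) estimated annual costs stated for each respondent type in the table above; this is a minimal Python sketch of that sum, using only the figures as stated.

```python
# (low, high) estimated annual cost per respondent type, per Engine,
# taken directly from the table above.
row_costs = {
    "Designated coordinators": (21.48, 21.48),
    "Programmatic leads": (85.80, 85.80),
    "Activity leads": (421.20, 702.00),
    "R&D and translation participants": (214.80, 572.80),
    "Workforce development participants": (116.90, 584.50),
    "Leadership, governance, and advisory members": (208.00, 416.00),
    "Other Engine participants or personnel": (109.55, 170.41),
    "Points of contact for partner organizations": (74.75, 373.77),
}

low_total = round(sum(low for low, _ in row_costs.values()), 2)     # $1,252.48
high_total = round(sum(high for _, high in row_costs.values()), 2)  # $2,926.76
```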


A.13 Estimate of Total Capital and Startup Costs/Operation and Maintenance Costs to Respondents or Record Keepers

Not applicable.


A.14 Estimates of Costs to the Federal Government

All surveys will be administered through Qualtrics, a FedRAMP authorized experience management software. The annual cost of one Qualtrics license to NSF is $2,000.

We anticipate an NSF evaluation program director to spend 0.5 FTE for survey maintenance, data cleaning, and data analysis. The hourly rate of an AD-4 program director ranges between $82 and $100. The estimated total annual labor cost for this individual ranges between $85,000 and $102,000.

We anticipate building a data collection portal for the Engines awardees as part of future enhancements to the programmatic data collection process. The estimated costs for the development, implementation, and maintenance of these future enhancements are approximately $250,000.

Lastly, the estimated costs associated with conducting the focus groups, qualitative analysis and synthesis of the interviews, and reporting of the findings are $100,000.

The total estimated annual cost to the Federal Government is between $437,000 and $454,000.
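The total in A.14 is the sum of the four cost components described above. A quick arithmetic check (variable names are ours; dollar values are taken from the preceding paragraphs):

```python
# Cross-check of the total estimated annual cost to the Federal Government.
qualtrics_license = 2_000        # one Qualtrics license per year
program_director_low = 85_000    # 0.5 FTE AD-4 program director, low end
program_director_high = 102_000  # 0.5 FTE AD-4 program director, high end
data_portal = 250_000            # future data collection portal enhancements
focus_groups = 100_000           # focus groups, qualitative analysis, reporting

fixed = qualtrics_license + data_portal + focus_groups
print(fixed + program_director_low, fixed + program_director_high)  # 437000 454000
```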


A.15 Changes in Burden

Not applicable.


A.16 Plans for Publication, Analysis, and Schedule

Survey-based Programmatic Data Collection

The six survey instruments will be made publicly available after receiving OMB clearance.

We plan to make anonymized and de-identified data from the annual, survey-based programmatic data collection available to the academic research community at some future date through an application-based data use agreement.

We plan to conduct internal analyses and publish an internal annual programmatic assessment report within 6 months of completing each round of programmatic data collection, with the first report targeted for month 18 after the award start date. These annual programmatic assessment reports will be made publicly available as soon as possible upon completion of the NSF internal approval process.

Focus Group Interview Data Collection

A report summarizing the findings from the focus group interviews will be made internally available within 8 months of completing the last round of interviews. An approved version of the report will be made publicly available upon completion of the NSF internal approval process.


A.17 Approval to Not Display Expiration Date

Not applicable.


A.18 Exceptions to Item 19 of OMB Form 83-I

No exceptions apply.

Part B.

Not applicable.

Table 1. Summary of the number of survey questions by type for each of the six NSF Engines survey instruments.

| Survey | Multiple choice (single answer) | Multiple choice (select all that apply) | Dropdown | Matrix table | Free response | Total number of questions* |
| --- | --- | --- | --- | --- | --- | --- |
| Intake: programmatic activities survey | 2 | 1 | 0 | 0 | 1 | 4 |
| Intake: additional funds received survey | 3 | 0 | 0 | 0 | 2 | 5 |
| Intake: individuals survey | 1 | 1 | 0 | 0 | 2 | 4 |
| Programmatic activities survey | 49 | 6 | 1 | 7 | 58 | 121 |
| Individuals survey | 25 | 3 | 2 | 15 | 8 | 53 |
| Partner organizations survey | 6 | 1 | 1 | 7 | 11 | 26 |

*Note: the total number of questions counted may be less than the total number of questions displayed in the survey instruments because certain survey questions are text only and require no response; text-only questions are not counted toward the total. In addition, each statement within a matrix table and each form-field free response is counted as an individual question.

Table 2. Summary of estimated burden (minutes per response) by group of survey respondents and instrument.

| Survey | Designated coordinators | Programmatic leads | Activity leads | R&D and translation participants | Workforce development participants | Engine leadership team, governance board, and advisory committee members | All other Engine participants and personnel | Partner organizations |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Intake: programmatic activities survey | | 3 minutes | | | | | | |
| Intake: additional funds received survey | 5 minutes | | | | | | | |
| Intake: individuals survey | | | 3 minutes | | | | | |
| Programmatic activities survey | | | 15 to 30 minutes* | | | | | |
| Individuals survey | 10 minutes | 15 minutes | 10 minutes | 12 minutes | 15 minutes | 12 minutes | 10 minutes | |
| Partner organizations survey | | | | | | | | 5 minutes |

* The estimated burden for the programmatic activities survey ranges from 15 minutes per response for a workforce development or ecosystem building activity to 30 minutes per response for an R&D or translation activity.

Table 3. To estimate the potential range of total annual burden placed on survey respondents, we assumed a low and a high Engine output. The estimated number of programmatic activities, Engine funding received, partner organizations, and individuals per Engine under the two scenarios are summarized below.

| Category | Low Engine Output | High Engine Output |
| --- | --- | --- |
| Number of programmatic activities per Engine | | |
| R&D and translation | 6 | 10 |
| Workforce development | 2 | 5 |
| Ecosystem building | 10 | 15 |
| Number of additional funds received | 2 | 5 |
| Number of partner organizations (per programmatic activity) | 1 | 3 |
| Number of individuals | | |
| R&D and translation participants (per programmatic activity) | 5 | 8 |
| Workforce development participants (per programmatic activity) | 10 | 20 |
| Engine leadership team, governance board, advisory committee, and programmatic leads | 20 | 40 |
| All other Engine participants and personnel | 20 | 30 |

Table 4. Summary of total estimated annual burden of NSF Engines data collection by survey instrument and number of activities conducted per Engine awardee. The total estimated annual burden per Engine assuming low output is 1,884 minutes (31.4 hours). The total estimated annual burden for the NSF Engines program is 18,840 minutes (314 hours).

| Survey instrument | Total number of inputs | Estimated burden (min per response) | Total estimated burden (min) |
| --- | --- | --- | --- |
| Intake: programmatic activities survey | 18 | 3 | 54 |
| Intake: additional funds received survey | 2 | 5 | 10 |
| Intake: individuals survey | 90 | 3 | 270 |
| Programmatic activities survey: R&D and translation | 6 | 30 | 180 |
| Programmatic activities survey: workforce development | 2 | 15 | 30 |
| Programmatic activities survey: ecosystem building | 10 | 15 | 150 |
| Individuals survey: R&D and translation participants | 30 | 12 | 360 |
| Individuals survey: workforce development participants | 20 | 15 | 300 |
| Individuals survey: Engine leadership team, governance board, advisory committee, and programmatic leads | 20 | 12 | 240 |
| Individuals survey: all other Engine participants and personnel | 20 | 10 | 200 |
| Partner organizations survey | 18 | 5 | 90 |
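Table 4's per-Engine total can be reproduced by multiplying each instrument's number of inputs by its per-response burden and summing. A minimal sketch (values transcribed from Table 4; illustrative only):

```python
# Cross-check of Table 4 (low Engine output): inputs x minutes per response.
low_output_rows = [
    (18, 3),   # Intake: programmatic activities survey
    (2, 5),    # Intake: additional funds received survey
    (90, 3),   # Intake: individuals survey
    (6, 30),   # Programmatic activities: R&D and translation
    (2, 15),   # Programmatic activities: workforce development
    (10, 15),  # Programmatic activities: ecosystem building
    (30, 12),  # Individuals: R&D and translation participants
    (20, 15),  # Individuals: workforce development participants
    (20, 12),  # Individuals: leadership, board, advisory committee, leads
    (20, 10),  # Individuals: all other participants and personnel
    (18, 5),   # Partner organizations survey
]
per_engine_min = sum(n * minutes for n, minutes in low_output_rows)
print(per_engine_min, round(per_engine_min / 60, 1))  # 1884 31.4
print(per_engine_min * 10)  # 18840 minutes across 10 Engines
```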




Table 5. Summary of total estimated annual burden of NSF Engines data collection by survey instrument and number of activities conducted per Engine awardee. The total estimated annual burden per Engine assuming high output is 5,155 minutes (85.9 hours). The total estimated annual burden for the NSF Engines program is 51,550 minutes (859 hours).

| Survey instrument | Total number of inputs | Estimated burden (min per response) | Total estimated burden (min) |
| --- | --- | --- | --- |
| Intake: programmatic activities survey | 30 | 3 | 90 |
| Intake: additional funds received survey | 5 | 5 | 25 |
| Intake: individuals survey | 250 | 3 | 750 |
| Programmatic activities survey: R&D and translation | 10 | 30 | 300 |
| Programmatic activities survey: workforce development | 5 | 15 | 75 |
| Programmatic activities survey: ecosystem building | 15 | 15 | 225 |
| Individuals survey: R&D and translation participants | 80 | 12 | 960 |
| Individuals survey: workforce development participants | 100 | 15 | 1,500 |
| Individuals survey: Engine leadership team, governance board, and programmatic leads | 40 | 12 | 480 |
| Individuals survey: all other Engine participants and personnel | 30 | 10 | 300 |
| Partner organizations survey | 90 | 5 | 450 |
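Table 5's per-Engine total can likewise be reproduced by multiplying each instrument's number of inputs by its per-response burden and summing. A minimal sketch (values transcribed from Table 5; illustrative only):

```python
# Cross-check of Table 5 (high Engine output): inputs x minutes per response.
high_output_rows = [
    (30, 3),    # Intake: programmatic activities survey
    (5, 5),     # Intake: additional funds received survey
    (250, 3),   # Intake: individuals survey
    (10, 30),   # Programmatic activities: R&D and translation
    (5, 15),    # Programmatic activities: workforce development
    (15, 15),   # Programmatic activities: ecosystem building
    (80, 12),   # Individuals: R&D and translation participants
    (100, 15),  # Individuals: workforce development participants
    (40, 12),   # Individuals: leadership, board, programmatic leads
    (30, 10),   # Individuals: all other participants and personnel
    (90, 5),    # Partner organizations survey
]
per_engine_min = sum(n * minutes for n, minutes in high_output_rows)
print(per_engine_min, round(per_engine_min / 60, 1))  # 5155 85.9
print(per_engine_min * 10)  # 51550 minutes across 10 Engines
```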

Table 6. Estimated composition of individuals per NSF Engine focus group. The total estimated number of individuals participating in focus groups for the NSF Engines program is 100.

| Focus group session | Designated coordinators | Programmatic leads | Activity leads | Engine leadership team, governance board, and advisory committee members | All other Engine participants and personnel | Partner organizations |
| --- | --- | --- | --- | --- | --- | --- |
| Estimated number of individuals per NSF Engine | 1 | 1 | 1 | 2 | 2 | 2 |


Table 7. Estimated annual focus group burden. The estimated annual burden per NSF Engine focus group is 1,200 minutes (20 hours); the total estimated annual burden for the NSF Engines program is 12,000 minutes (200 hours).

| Total number of focus group participants per NSF Engine | Estimated burden (min per focus group) | Total estimated burden (min) |
| --- | --- | --- |
| 10 | 120 | 1,200 |





1 Survey item 41 in the programmatic activities survey asks specifically about the status of the activity. Answer options are active, on hold, cancelled, or completed.

2 Given that the number of activities conducted will differ across Engine awardees, we estimated the total annual burden for (a) a low number and (b) a high number of activities conducted per Engine awardee.

To estimate the total annual burden for low Engine output, we assumed each Engine awardee would conduct six research and development (R&D) and translation activities, two workforce development activities, and 10 ecosystem building activities; have two additional sources of funding to report each year; and have one partner organization associated with each programmatic activity. For individuals, we assumed five participants per R&D and translation activity and 10 participants per workforce development activity. Lastly, we assumed that the leadership team, governance board, advisory committee, and programmatic leads of an Engine would consist of 20 individuals, and that all other Engine participants and personnel who do not fall into any of the previous categories would number 20 individuals. Based on these estimated values, we calculated the total annual burden per Engine to be approximately 31.4 hours (1,884 minutes). The total annual burden for the NSF Engines program would be approximately 314 hours.

To estimate the total annual burden for high Engine output, we assumed each Engine awardee would conduct 10 research and development (R&D) and translation activities, five workforce development activities, and 15 ecosystem building activities; have five additional sources of funding to report each year; and have three partner organizations associated with each programmatic activity. For individuals, we assumed eight participants per R&D and translation activity and 20 participants per workforce development activity. Lastly, we assumed that the leadership team, governance board, advisory committee, and programmatic leads of an Engine would consist of 40 individuals, and that all other Engine participants and personnel who do not fall into any of the previous categories would number 30 individuals. Based on these estimated values, we calculated the total annual burden per Engine to be approximately 85.9 hours (5,155 minutes). The total annual burden for the NSF Engines program would be approximately 859 hours.


3 U.S. Bureau of Labor Statistics. 2023. "Employment Projections." Accessed February 12, 2024. Available at: https://www.bls.gov/emp/tables/unemployment-earnings-education.htm.

  
