3145-INCLUDES Supporting Statement Part B


NSF INCLUDES Network Member Survey

OMB: 3145-0256


NSF INCLUDES

Supporting Statement Part B. STATISTICAL METHODS

B.1. Potential Respondent Universe

The universe of respondents is all members of the NSF INCLUDES National Network who have six months or more of experience with the Network. At the time of this submission, there are 1,500 members on the National Network membership list who meet this criterion. The NSF INCLUDES National Network is composed of individuals who are interested in, or working directly on, broadening participation in STEM. Some of these individuals are NSF INCLUDES grantees; others have received other NSF awards or pursue broadening participation in STEM with support from other sources, including grants from federal, state, philanthropic, or other entities. Some are themselves representatives of these various types of funders, such as program officers at NSF, other Federal agencies, and private foundations. Because these groups interact in and with the National Network in different ways, the National Network survey uses skip logic to route respondents through two sets of items: one set of questions for all National Network members, regardless of role, and additional questions for members who are currently implementing NSF INCLUDES-funded projects.


Participation in the National Network is required of NSF INCLUDES-funded projects and strongly encouraged for other related projects and organizations that work toward similar goals. Members are individuals; there are no organizational memberships in the National Network (see Part A.1 for a description of the different groups of Network members).


B.2. Procedures for the Collection of Information

B.2.1 Overview of National Network Survey

The National Network survey is a two-part survey intended to assess the health, development, and expansion of the NSF INCLUDES National Network in order to support responsive service and coordination by the Hub. Part A is intended to be completed by all Network members and comprises 47 questions about (a) their roles in broadening participation and (b) their perceptions of the Network's structure, processes, and development. Part B is intended to be completed only by NSF INCLUDES-funded grantees and comprises 63 additional items. Most items on both survey parts are closed-ended Likert-scale and checkbox items, with five open-ended questions (two on Part A and three on Part B). The survey will be administered annually to the entire Network membership list using SurveyMonkey, an electronic survey tool. The Hub has included a number of program implementation and demographic questions to help determine whether respondents are similar to the National Network population with respect to grantee/non-grantee status, organization type, type of grant, geography, and other characteristics. This is a cross-sectional, not a longitudinal, survey; respondents' information will not be linked over time.




B.2.2 Survey Administration Procedures

The Hub will administer the survey annually to all members of the Network on the Hub's membership list. Currently, approximately 1,500 members participate in some fashion in the Network. The survey will be administered electronically, with skip logic directing the subset of respondents who are NSF INCLUDES grantees to complete both Parts A and B.


Approximately two weeks before launching the survey, the Hub will post an announcement on the includesnetwork.org website alerting members that they will receive an online survey link and describing the purpose of the survey and its potential benefits to them. The Hub will launch the survey with a 30-day response window.


B.3. Methods to Maximize Response Rate and Minimize Non-Response Rate

The Hub anticipates a response rate of approximately 60% based on prior experience with Network member responsiveness. Response rates may vary as a function of Network activity level and member role (e.g., grantee versus non-grantee); however, the Hub anticipates that, on average, the response rate will be 60%.


The Hub will implement several strategies to maximize survey response rates. The survey will be open for four weeks. After the initial invitation, the Hub will follow up weekly for three weeks (for a total of three reminders) to encourage non-respondents to complete the survey. This follow-up will occur through email to Network members as well as posts in the online community. Copies of the pre-survey, initial survey, and follow-up survey email scripts are included in this submission.


Regardless of response rate, the Hub will conduct missing data analyses and examine potential non-response bias by comparing the distribution of respondents to the universe of potential respondents on various demographic and participation characteristics (e.g., frequency of participation in the Network, role in the broadening participation field). Through this comparison, the Hub will determine whether there are meaningful differences between actual and potential survey respondents. Depending on the results, the Hub will determine whether there is a need to address non-response using additional statistical procedures, such as weighting or suppression of results for groups of fewer than 10 respondents.
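As a purely illustrative sketch of this comparison, the Python code below contrasts respondent proportions with membership-list proportions on one characteristic and derives simple post-stratification weights if adjustment is warranted. The file names and the grantee_status column are assumptions made for the example, not the Hub's actual data structures or procedures.

import pandas as pd

# Hypothetical inputs: the full membership list (~1,500 members) and completed surveys.
population = pd.read_csv("network_membership_list.csv")
respondents = pd.read_csv("survey_respondents.csv")

# Compare the distribution of respondents to the population on a key characteristic.
pop_dist = population["grantee_status"].value_counts(normalize=True)
resp_dist = respondents["grantee_status"].value_counts(normalize=True)
comparison = pd.DataFrame({"population": pop_dist, "respondents": resp_dist})
print(comparison)

# If meaningful differences appear, one option is post-stratification weighting:
# weight each respondent by the ratio of population share to respondent share
# within that respondent's group.
weights = (pop_dist / resp_dist).rename("weight")
respondents = respondents.merge(weights, left_on="grantee_status", right_index=True)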


B.4. Test of Procedures or Methods to Be Undertaken

B.4.1 Data Preparation

Survey responses will be collected using the online tool SurveyMonkey. The Hub will conduct simple item analyses on the survey scales to assess the internal consistency reliability and the convergent and divergent validity of scale scores. For internal consistency estimates, the Hub will calculate Cronbach's alpha for all survey scales (note that not all survey items are expected to load on a composite scale). Scales will be used in further analysis if Cronbach's alpha is .70 or higher.1 For scales with alphas below .70, the Hub will conduct further item analysis to refine the scale or will report on individual items.
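A minimal sketch of this internal-consistency check is shown below, assuming the item responses for one scale are stored as columns of a pandas DataFrame (rows are respondents). The function and variable names are illustrative, not the Hub's actual code.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (rows = respondents, columns = items)."""
    items = items.dropna()
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_variances / total_score_variance)

# Example use (column names hypothetical): retain the scale only if alpha meets the .70 threshold.
# scale_items = survey_data[["q1", "q2", "q3", "q4"]]
# use_scale = cronbach_alpha(scale_items) >= 0.70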


In addition, the Hub will correlate each scale item with its own scale (removing that item from the scale) to assess convergent validity and will correlate it with other scales from the survey to assess divergent validity. The Hub expects convergent item-scale correlations to exceed divergent item-scale correlations by at least 1.5x to demonstrate initial construct validity of the scores for those who respond.2
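The sketch below illustrates one way to compute these item-scale correlations, assuming each scale's items sit in their own DataFrame; the scale compositions (scale_a_items, scale_b_items) and the 1.5x comparison mirror the description above and are otherwise hypothetical.

import pandas as pd

def corrected_item_total(items: pd.DataFrame, item: str) -> float:
    """Correlation of an item with its own scale total, excluding that item (convergent)."""
    rest_of_scale = items.drop(columns=[item]).sum(axis=1)
    return items[item].corr(rest_of_scale)

def item_other_scale(items: pd.DataFrame, item: str, other_scale: pd.DataFrame) -> float:
    """Correlation of an item with the total of a different scale (divergent)."""
    return items[item].corr(other_scale.sum(axis=1))

# Example use (scale DataFrames hypothetical):
# for item in scale_a_items.columns:
#     convergent = corrected_item_total(scale_a_items, item)
#     divergent = item_other_scale(scale_a_items, item, scale_b_items)
#     passes_check = convergent >= 1.5 * divergent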



B.4.2. Quantitative Descriptive and Correlational Analyses of Likert and Checkbox Survey Items

The Hub will conduct initial descriptive analyses of the survey data to understand response patterns on the items and scales for the whole sample and for specific subgroups. These descriptive analyses will examine means, standard deviations, and frequencies for each item and scale for the overall sample. The Hub will also conduct a series of cross-tabulations and chi-square tests to examine survey responses for different categories of respondents, such as type of grantee, participation level, and organizational type. The Hub will present these results in a series of data tables and graphical representations.
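For illustration, the sketch below shows a subgroup cross-tabulation with a chi-square test of independence, assuming the cleaned responses are in a pandas DataFrame with categorical columns; the column names are assumptions, not the survey's actual variable names.

import pandas as pd
from scipy.stats import chi2_contingency

def crosstab_with_chi2(data: pd.DataFrame, row: str, col: str):
    """Cross-tabulate two categorical columns and test their independence."""
    table = pd.crosstab(data[row], data[col])
    chi2, p_value, dof, _expected = chi2_contingency(table)
    return table, chi2, dof, p_value

# Example use (column names hypothetical):
# table, chi2, dof, p = crosstab_with_chi2(survey_data, "respondent_type", "participation_level")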


B.4.3. Qualitative Coding of Open-Ended Survey Items

To prepare open-ended survey data for analysis, the Hub will create a codebook guided by the survey questions. The codebook will contain codes identified a priori to align with the question content, as well as emergent codes, based on analysis of the data, that capture important themes. Each code will include a brief definition as well as examples. The Hub will refine the codebook based on its application to the first 20 responses coded. To ensure the coding is rigorous and reliable, the Hub will conduct inter-rater reliability training with all members of the team. During the training, the team will engage in iterative rounds of coding the same set of responses and will use discussion to resolve differences. Coders will be deemed reliable when they reach 80% agreement with the other team members. Themes will be summarized and presented both narratively and graphically.
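A simple percent-agreement calculation of the kind implied by the 80% threshold is sketched below; it assumes two coders' code assignments for the same set of responses are stored as parallel lists, and is illustrative rather than the Hub's actual reliability procedure.

def percent_agreement(coder_a: list, coder_b: list) -> float:
    """Proportion of responses on which two coders assigned the same code."""
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("Coders must rate the same, non-empty set of responses.")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Example use (lists hypothetical): coders qualify at 80% agreement or higher.
# reliable = percent_agreement(codes_from_coder_a, codes_from_coder_b) >= 0.80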


B.5. Names and Telephone Numbers of Consultants

Terri Akey, Coordination Hub Lead Evaluator, (503) 720-2615, [email protected]

Tim Podkul, Coordination Hub Director, (352) 226-2680, [email protected]




1 Lance, C. E., Butts, M. M., & Michels, L. C. (2006). The sources of four commonly reported cutoff criteria: What did they really say? Organizational Research Methods, 9(2), 202–220.

2 Gliem, J. A., & Gliem, R. R. (2003). Calculating, interpreting, and reporting Cronbach's alpha reliability coefficient for Likert-type scales. Midwest Research-to-Practice Conference in Adult, Continuing, and Community Education.
