Diversity OMB Summary Statement B 5-9-2016 V3Final


Evaluation of the Enhancing Diversity of the NIH-funded Workforce Program for the National Institute of General Medical Sciences (NIGMS)

OMB: 0925-0747



Supporting Statement B for:


National Institutes of Health

Evaluation of the Enhancing Diversity of the NIH-funded Workforce Program for the National Institute of General Medical Sciences














May 9, 2016










Michael Sesma, Ph.D.

Chief, Postdoctoral Training Branch

Division of Training, Workforce Development, and Diversity

Diversity Program Consortium

National Institute of General Medical Sciences

45 Center Drive, 2AS43H

Bethesda, MD 20892

Phone: (301) 594-3900

Email: [email protected]








Table of contents


B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS









List of Attachments


Attachment 1: RFA-RM-13-016 (Building Infrastructure Leading to Diversity (BUILD))


Attachment 2: RFA-RM-13-017 (National Research Mentoring Network (NRMN))


Attachment 3: RFA-RM-13-015 (Coordination & Evaluation Center)


Attachment 4: Hallmarks of Success


Attachment 5: BUILD Interventions Summary Table


Attachment 6: BUILD Logic Models


Attachment 7: NRMN Logic Model


Attachment 8a: HERI Freshman Survey (On-line Version)


Attachment 8b: HERI Freshman Survey (Paper Version)


Attachment 9: HERI Transfer Student Survey


Attachment 10: HERI Your First College Year


Attachment 11: HERI College Senior Survey


Attachment 12: BUILD Student Annual Follow-up Survey


Attachment 13: HERI Faculty Survey


Attachment 14: BUILD Faculty Annual Follow-up Survey


Attachment 15: Mentee Mentor Assessment


Attachment 16: NRMN Data Warehouse Baseline Data


Attachment 17: NRMN Faculty/Mentor Core Follow-up Survey


Attachment 18: NRMN Student/Mentee Core Follow-up Survey


Attachment 19: NRMN Mentor Skills Module


Attachment 20: NRMN Research & Grant Writing Module


Attachment 21: NRMN Coaching Training Module


Attachment 22: NRMN Institutional Context Module


Attachment 23: BUILD Site Visit & Case Studies Protocol


Attachment 24: NRMN Site Visit & Case Studies Protocol


Attachment 25: BUILD Institutional Records & Program Data Requests


Attachment 26: BUILD Implementation Reports


Attachment 27: Coordination & Evaluation Center (CEC) Tracker Security Overview


Attachment 28: Advisory Committee to the Director – Working Group on Diversity in the Biomedical Research Workforce Membership



Supporting Statement for the

Paperwork Reduction Act Submission

B: Collections of Information Employing Statistical Methods


National Institutes of Health

Evaluation of the Enhancing Diversity of the NIH-funded Workforce Program

This OMB application seeks approval for a 3-year clearance to conduct an evaluation of the NIH Common Fund’s Enhancing Diversity of the NIH-funded Workforce Program (also referred to as the Diversity Program Consortium), a national consortium comprising three integrated initiatives: (1) Building Infrastructure Leading to Diversity (BUILD), (2) the National Research Mentoring Network (NRMN), and (3) the Coordination and Evaluation Center (CEC). We are requesting OMB clearance for the data collection required for the CEC to evaluate the overall impact and effectiveness of the BUILD and NRMN initiatives, as required by the NIH Common Fund award. The evaluation will assess agreed-upon consortium-wide hallmarks of success at the student/mentee, faculty/mentor, and institutional levels (see Attachment 4: Hallmarks of Success).


Data collection will include annual surveys of BUILD undergraduate students and faculty to assess key outcomes for these groups. Institutional and program data collection, along with site visits and case studies, will be used to assess BUILD program implementation and institutional outcomes. Evaluation of NRMN outcomes will rely on surveys of a sample of faculty/mentors and students/mentees (including undergraduate and graduate students as well as postdoctoral scholars and junior faculty) participating in NRMN activities; additional case-study information will be collected from a small number of participants through phone and/or in-person interviews. Site visits with NRMN lead investigators will provide needed information on program implementation. The information gathered from these sources will document BUILD and NRMN outcomes and allow the CEC and NIH to evaluate the effectiveness of the initiatives and of the consortium as a whole.


B. Collection of Information Employing Statistical Methods

B.1 Respondent Universe and Sampling Methods


We will conduct a longitudinal evaluation using mixed methods to assess implementation and outcomes of the Diversity Program Consortium. The BUILD initiative has three major intervention components aimed at:

  1. better preparing students to enter a biomedical research career,

  2. faculty development, and

  3. institutional infrastructure development to transform training/mentorship.

The NRMN initiative includes a set of intervention activities: mentor recruitment and training; matching of mentors to mentees; and professional development modules targeting grant-writing skills and coaching.


The Consortium-wide Diversity Program evaluation will include assessment of outcomes such as those outlined in Table B.1.


Table B.1 Key Consortium Evaluation Outcomes

Outcomes

Data Source

BUILD Student outcomes


1. Engagement in research

HERI Freshman & Transfer Student Surveys; HERI Your First College Year Survey; HERI College Senior Survey; BUILD Student Annual Follow-up Survey

2. Satisfaction with faculty mentorship

HERI Freshman & Transfer Student Surveys; HERI Your First College Year Survey; HERI College Senior Survey; BUILD Student Annual Follow-up Survey

3. Enhanced science identity and self-efficacy

HERI Freshman & Transfer Student Surveys; HERI Your First College Year Survey; HERI College Senior Survey; BUILD Student Annual Follow-up Survey

4. Participation in academic and professional student organizations

HERI Your First College Year Survey; HERI College Senior Survey; BUILD Student Annual Follow-up Survey

5. Pursuit, persistence/retention, and success in a biomedical science discipline

HERI Freshman & Transfer Student Surveys; HERI College Senior Survey; BUILD Student Annual Follow-up Survey

6. Evidence of scholarly productivity (e.g., science conference presentations, authorship on papers)

BUILD Student Annual Follow-up Survey

7. Intent to pursue a biomedical research career

HERI Freshman & Transfer Student Surveys; BUILD Student Annual Follow-up Survey

8. Completion of an undergraduate degree in a biomedical science discipline

HERI College Senior Survey; BUILD Student Annual Follow-up Survey

9. Application to a graduate program in a biomedical science discipline

HERI College Senior Survey; BUILD Student Annual Follow-up Survey

10. Entrance to a graduate program in a biomedical science discipline

BUILD Student Annual Follow-up Survey

11. Perceptions of BUILD program benefits and challenges

BUILD Site Visits; BUILD Case Studies



BUILD Faculty Outcomes


1. Change/increase in self-efficacy as instructor, mentor, and/or researcher

BUILD Faculty Annual Follow-up Survey

2. Increased participation in professional development activities for faculty in BUILD programs (e.g., training in NIH grant applications, technical training for conducting research, workshops on cultural assets/stereotype threat)

BUILD Faculty Annual Follow-up Survey; HERI Faculty Survey

3. Increased participation in BUILD mentorship programs

BUILD Faculty Annual Follow-up Survey

4. Increased research productivity in grant submissions and awards

BUILD Faculty Annual Follow-up Survey

5. Increase in number of trainees mentored in BUILD programs

BUILD Faculty Annual Follow-up Survey

6. Increase in the quality of mentoring

BUILD Faculty Annual Follow-up Survey; Mentee “Mentor Assessment” Survey

7. Perceptions of BUILD program and faculty development activities

BUILD Faculty Annual Follow-up Survey; BUILD Case Studies



BUILD Institutional Outcomes


1. Improved undergraduate retention rates of students in programs relevant to BUILD

Institutional Research Office

2. Increased student and faculty participation in mentoring in BUILD activities

BUILD Student & Faculty Annual Follow-up Surveys; BUILD Program Implementation Reports

3. Increased enrollment and retention of disadvantaged/underrepresented students in BUILD-related programs

BUILD Student Annual Follow-up Survey; BUILD Program Implementation Reports

4. Increase in number of student research training opportunities in BUILD programs

BUILD Program Implementation Reports

5. Increased inter-institutional collaborations to achieve BUILD outcomes related to research, mentorship, and faculty development (e.g., linkages with community colleges, collaborations with NRMN)

BUILD Site Visits; BUILD Case Studies; BUILD Program Implementation Reports



NRMN Program Outcomes


1. Increase in the number and quality of trained mentors and mentees

NRMN Mentor & Mentee CORE Annual Follow-up Surveys; Mentee “Mentor Assessment” Survey

2. Increase in the number and quality of culturally responsive active mentor/mentee pairs

NRMN Mentor & Mentee CORE Annual Follow-up Surveys

3. Increased satisfaction with mentoring (for mentors and mentees)

NRMN Mentor Annual Follow-up Survey; Mentee “Mentor Assessment” Survey; NRMN Student/Mentee Annual Follow-up Survey

4. Increase in the quantity, quality, and use of NRMN portal resources

NRMN Data Warehouse; NRMN Case Studies; NRMN Site Visits

5. Increase in research productivity skills

NRMN Mentor Annual Follow-up Survey; NRMN Student/Mentee Annual Follow-up Survey

6. Increased motivation to pursue a biomedical career path

NRMN Student/Mentee Annual Follow-up Survey

7. Increase in career productivity of NRMN participants

NRMN Mentor Annual Follow-up Survey; NRMN Student/Mentee Annual Follow-up Survey

8. Increased NRMN participant progression in biomedical careers

NRMN Mentor Annual Follow-up Survey; NRMN Student/Mentee Annual Follow-up Survey

9. Increased institutional commitment to training diverse students in biomedical fields

NRMN Institutional Context Module; NRMN Case Studies and Site Visits

10. NRMN program implementation successes and challenges

NRMN Site Visits & Case Studies


B.1.1. Respondent Universe

The respondent universe varies depending on the outcome of interest (e.g., BUILD students, BUILD faculty, participants in various NRMN activities).


BUILD Students: We will survey all students identified by each of the 10 BUILD programs as participants, starting when they enter the program (usually as entering freshmen or transferring juniors). We will also sample additional students from each BUILD institution to reach our annual target of 500 students per institution. (This number is based on the power calculations presented in Section B.2.1 below.) Entering freshmen and transferring juniors will be sampled non-proportionally, with greater weight (80:20) given to those with a declared biomedical major (the focus of the BUILD initiative). In addition, we will over-sample African American and Hispanic/Native American subgroups in light of their lower response rates in annual HERI surveys: relative to Whites/Others, the oversampling rate will be 160% for African Americans and 130% for the Hispanic/Native American group. In each year (2016-2018), the resulting sample of students at each BUILD institution will reflect the demographic characteristics of that institution's entering students. BUILD awardees are institutions with relatively high proportions of underrepresented groups (e.g., racial/ethnic minorities and lower-SES groups; see the table in A.8.2 for the list of awardee institutions); our sampling will yield similar representation of those groups among those asked to participate in the evaluation project and its associated data collection. Students selected for the evaluation data collection will be asked to complete the HERI Freshman Survey or Transfer Student Survey and will then be followed and asked to complete the HERI Your First College Year and College Senior Surveys as well as annual follow-up surveys.
For each of the 10 BUILD institutions, we will identify a comparable non-BUILD institution that has also administered the HERI Freshman/Transfer Student, Your First College Year, and College Senior Surveys; matching of BUILD/non-BUILD institutions will be based on ethnic distribution and curricular offerings. Secondary data from HERI surveys at those non-BUILD institutions will be used for comparison with outcomes at BUILD institutions.


BUILD Faculty: We will survey 500 faculty from all BUILD institutions combined (~50/institution) and 500 faculty from non-BUILD institutions (~50/institution) in 2016 (baseline). Faculty are sampled only once, as little change in the faculty population is expected over the 3-year follow-up based on faculty retention and turnover rates. At BUILD institutions, faculty will be sampled such that all faculty participating in BUILD program activities are included (unless there are more than 25, in which case a random sample will be drawn so that faculty who have participated in BUILD do not represent more than 50% of the total sample of 50). In addition, a random sample of other faculty in biomedical disciplines will be drawn to complete the total sample of 50. Faculty at non-BUILD institutions will be randomly sampled from existing secondary data from HERI Faculty Surveys administered, independent of this initiative, by various US academic institutions; these faculty will be sampled (based on their HERI Faculty Survey responses) from the same biomedical disciplines. Resulting samples will reflect the demographic characteristics of faculty in those disciplines at participating institutions.


NRMN: NRMN targets students (undergraduate through post-doctoral) and faculty/professionals at all levels who are training for or engaged in biomedical research studies or careers. Thus, the description of the respondent universe relies on the respondents’ roles in the NRMN intervention. We will randomly sample from subgroups defined by the specific NRMN activities that individuals have participated in (e.g., sampling those taking mentor training, those taking the grant writing skills workshops). Resulting samples will reflect the demographic characteristics of those participating in the various NRMN activities.


Intervention, contextual, institutional, faculty, and student variables (discussed below) will be used in the multilevel analysis to assess BUILD impact.


B.2 Procedures for the Collection of Information


B.2.1. Power Analysis and Estimation Procedure


Our overall analysis plan will compare BUILD-exposed groups (students, faculty, institutions) to non-BUILD comparison groups, or will compare NRMN participants exposed to a given activity (e.g., the grant-writing workshop, mentoring) to NRMN participants who were not exposed to that activity. For the various key outcomes (defined in the Hallmarks; see Attachment 4), generalized linear mixed models will be used to test the hypothesis that the BUILD or NRMN interventions result in better outcomes for those participating in BUILD or NRMN activities. Models will test the significance of an interaction term reflecting the “difference of differences” of the senior-minus-freshman scores for BUILD students versus similar students in non-BUILD schools. A statistically significant term in a positive direction would indicate success of BUILD, after adjusting for the covariates and for clustering within institutions.


The power calculations are based on testing the following key hypothesis:

Students (BUILD)

H1: The Biomedical Career Interest Scale score will improve over time more for students in the BUILD than in the non-BUILD institution.


For this hypothesis, the outcome variable is the Biomedical Career Interest Scale score collected in the freshman and senior years. This scale will be computed from the combined answers to the following questions: Do you intend to pursue a science-related research career? How many months since entering college (including summer) did you work on a professor’s research project? Have you participated in an undergraduate research program? How often did a professor provide you with an opportunity to conduct research? How often have you met with an advisor/counselor about your career plans? Since many possible covariates could be used to predict any outcome, a two-step method will be used to derive the final model. First, we perform a univariate analysis of each covariate as it relates to our outcome and select those that are associated with it at the p=0.20 significance level or less. We then regress the outcome on these selected covariates using a specific best-subset regression approach, from which a generalized mixed linear model relating the outcome to the selected covariates is derived. In the model, we would include as covariates all those selected above as well as: X1 = (BUILD/non-BUILD school), X2 = Year (freshman/senior) and X3 = the product of X1 and X2 as their interaction. Thus, the interaction term is a “difference of differences” of the senior minus freshman score for BUILD versus non-BUILD schools. A statistically significant interaction term in a positive direction would indicate success of BUILD.
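The interaction term described above can be illustrated with a small sketch in Python (standard library only). The scale scores below are purely hypothetical numbers chosen for illustration; the actual analysis uses a generalized linear mixed model with covariates and clustering, not simple group means:

```python
from statistics import mean

# Hypothetical Biomedical Career Interest Scale scores (illustrative only).
scores = {
    ("BUILD", "freshman"): [2.1, 2.4, 2.0, 2.6],
    ("BUILD", "senior"):   [3.4, 3.6, 3.1, 3.8],
    ("non-BUILD", "freshman"): [2.2, 2.3, 2.1, 2.5],
    ("non-BUILD", "senior"):   [2.6, 2.8, 2.4, 2.9],
}

def did(scores):
    """Difference of differences: the senior-minus-freshman gain in BUILD
    schools minus the same gain in non-BUILD schools."""
    gain = lambda g: mean(scores[(g, "senior")]) - mean(scores[(g, "freshman")])
    return gain("BUILD") - gain("non-BUILD")

# A positive value corresponds to a positive interaction term X3 = X1 * X2.
print(round(did(scores), 3))
```

In the mixed model, this quantity appears as the coefficient on the X3 interaction term rather than a raw difference of cell means, but the interpretation is the same.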


We test the null hypothesis that the average change in the Biomedical Career Interest Scale score from the freshman to the senior year is the same in BUILD and non-BUILD institutions, against the alternative that BUILD students will show a larger increase in the average score. We define Y = change in score (senior - freshman) and define the effect size as: effect size = [(mean of Y | BUILD) - (mean of Y | non-BUILD)]/standard deviation of Y. With a significance level of 0.05, we select the sample size necessary to produce 0.80 power to detect a small effect size of 0.25. The necessary sample size is 253 students in each of the BUILD institutions and their comparable non-BUILD institutions. Allowing for annual non-response attrition of 20% over 3-4 years, we propose to sample 500 students in each institution from the incoming 2016 cohort for follow-up. We will also sample 500 per year in 2017-2019 from each of these institutions in order to provide adequate sample sizes for subsequent cohorts (2017-2018) to evaluate potential differences in shorter-term, interim outcomes (e.g., research self-efficacy, science identity, intent to pursue a biomedical career) for these cohorts, as each is likely to be exposed to somewhat different BUILD offerings as those are modified over the funding period. Sampling 500/year from each institution also positions the Diversity Program to examine longer-term outcomes of these different cohorts (with their somewhat different exposures) should additional funding to support follow-up after 2019 be forthcoming. Furthermore, the annual samples will allow us to examine longitudinal trends in our outcomes.
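The 253-per-institution figure can be reproduced with the standard two-sample t-test sample-size formula (normal approximation plus the usual small-sample correction for the t distribution); the sketch below, in Python using only the standard library, is our reconstruction rather than the authors' own calculation:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Per-group n for a two-sided two-sample t-test: normal approximation
    2*((z_a + z_b)/d)^2 plus the z_a^2/4 correction for the t distribution."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05, two-sided
    z_b = z.inv_cdf(power)           # 0.84 for power = 0.80
    return math.ceil(2 * ((z_a + z_b) / effect_size) ** 2 + z_a ** 2 / 4)

n = n_per_group(0.25)               # 253 students per institution arm
recruit = math.ceil(n / 0.8 ** 3)   # inflate for 20% annual attrition over 3 years
print(n, recruit)                   # 495 < 500, consistent with the proposed sample
```

Inflating 253 for three years of 20% attrition gives 495, which the proposed 500-per-institution sample covers with a small margin.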


This type of analysis will be used for any similar longitudinal change hypothesis with a continuous outcome and for any two groups, e.g., all BUILD students vs. non-BUILD students, or BUILD faculty vs. non-BUILD faculty, and so forth. In particular, we test the same hypothesis comparing all BUILD students vs. all non-BUILD students. To account for the clustering effect (with institution being a cluster), we incorporate an intra-class correlation ranging from 0.001 to 0.04 (Adams et al., 2004). The following table shows the power for a two-sided alternative and an effect size of 0.25.


Intra-class correlation   0.001   0.005   0.01   0.02   0.03   0.04
Power                      1.00    1.00   1.00   0.95   0.86   0.76


Thus, the power is at least 0.76 for all scenarios and is near or equal to 1.00 for most; for the larger effect sizes of 0.5 and 0.8, power equals 1.00 in all scenarios.
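These power figures can be reproduced under one plausible reading of the design, which we assume here: 10 institutions per arm, roughly 253 retained students per institution, and variance inflated by the design effect 1 + (m - 1)ρ. The normal-approximation sketch below (standard-library Python) is our reconstruction:

```python
import math
from statistics import NormalDist

def clustered_power(icc, d=0.25, clusters=10, m=253, alpha=0.05):
    """Approximate two-sided power for a two-sample comparison with
    n = clusters * m students per arm, shrinking the effective sample
    size by the design effect 1 + (m - 1) * icc."""
    z = NormalDist()
    deff = 1 + (m - 1) * icc
    n_eff = clusters * m / deff                     # effective n per arm
    return z.cdf(d * math.sqrt(n_eff / 2) - z.inv_cdf(1 - alpha / 2))

for icc in (0.001, 0.005, 0.01, 0.02, 0.03, 0.04):
    print(icc, round(clustered_power(icc), 2))
```

Under these assumptions the computed values match the table (0.95, 0.86, and 0.76 at ICC = 0.02, 0.03, and 0.04, and effectively 1.00 below that).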


Another hypothesis to be tested relates to entry into the biomedical fields. Specifically,


H2: The proportion of students in BUILD institutions who graduate with an undergraduate degree in a biomedical science discipline will be higher than the proportion in otherwise similar non-BUILD institutions.


The analysis to test this hypothesis will also be a generalized linear mixed model, but with a binary outcome. The selection of the model and the included covariates will follow the same lines as described for H1.


BUILD Faculty & NRMN faculty/mentors and students/mentees


Turning to the faculty/mentor data analysis: NRMN plans to train some 800 mentors, of whom we will sample 500 (sample size justification below). To test the effectiveness of this training, we measure the Mentoring Competency Assessment scale pre- and post-training (Pfund et al., 2014). We will also gather the same data on a random sample of 500 faculty participating in NRMN activities other than mentor training. We test the hypothesis:


H3: The mean change of post – pre score is greater in the NRMN-trained group than in the control group.


The analysis will follow the same approach described for hypotheses H1 and H2. The control group will be selected from NRMN faculty who do not receive NRMN mentor training. To find the size of this control group, we use a conservative two-sided t-test with power = 0.80 and N1 = 500 NRMN-trained faculty. We ignore the clustering effect (intra-class correlation) since the faculty will largely be recruited from different universities, with at most two faculty members from any one institution. Allowing for 20% attrition per year over the follow-up period, we use N1 = 320 to calculate the needed number of controls. An effect size of 0.40 was derived from data reported in a similar study (Pfund et al., 2014); however, since we will be following this group after training, we use an effect size of 0.25 to account for the longer time horizon. The required sample size is N2 = 208 controls. Allowing for 20% attrition per year over 3-4 years, we need to recruit at least 260 controls. To be on the safe side, we will recruit 500 controls, as this also positions the Diversity Program to examine longer-term outcomes of these groups should additional funding to support follow-up after 2019 be forthcoming. In addition, we will survey the mentees of the NRMN faculty who receive mentor training in order to examine the effect of the mentorship program on those mentees.
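The control-group size can be checked with the unequal-n version of the same formula, fixing the trained-group size at N1 = 320. The sketch below (standard-library Python, our reconstruction) uses the normal approximation, which lands within about one unit of the stated N2 = 208; the exact t-based calculation adds roughly one control:

```python
import math
from statistics import NormalDist

def n2_given_n1(n1, effect_size, alpha=0.05, power=0.80):
    """Control-group size for a two-sided two-sample comparison when the
    treated-group size n1 is fixed. Solves d = (z_a + z_b) * sqrt(1/n1 + 1/n2)
    for n2 (normal approximation; the t correction adds about one)."""
    z = NormalDist()
    zsum = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    inv_n2 = effect_size ** 2 / zsum ** 2 - 1 / n1
    return math.ceil(1 / inv_n2)

n2 = n2_given_n1(320, 0.25)    # approximately 207-208 controls
print(n2, math.ceil(n2 / 0.8)) # second value: inflated for 20% attrition
```

Inflating roughly 208 controls for one further year of 20% attrition gives the "at least 260" recruitment floor stated above.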


For BUILD institutions' faculty, we will follow the same approach as for NRMN faculty. We therefore also need to survey 500 BUILD-institution faculty in order to ensure a minimum of 300 faculty at the 3-year follow-up (allowing for 20% annual attrition). These faculty will be sampled from among institutional faculty in biomedical research fields, taking 100% of those participating in BUILD activities (unless there are more than 25, in which case a random sample of 25 will be drawn) and randomly sampling the balance needed for the target sample size from the rest of the bioscience faculty. As with BUILD students, the control faculty will be sampled from biomedical research faculty at comparable institutions without BUILD programs; their number is also 500. As for students from non-BUILD institutions, data for faculty at non-BUILD institutions will come from secondary data available from HERI surveys independently administered at these institutions. The outcomes for these analyses include mentoring efficacy (as for NRMN) as well as research productivity (e.g., numbers of publications and grant submissions). Thus, some outcomes will be scale measures (continuous) and others will be counts to be analyzed with Poisson-type methods. All will use the generalized linear mixed model methods described earlier. Subgroup analyses will compare BUILD-connected faculty to others at the same institutions.


B.2.2. Data Collection Procedures


Student and Faculty Data:


Surveys. The student and faculty surveys will be administered according to the schedule outlined in Table A.16. The primary modality for all survey administration will be online. However, understanding that individual respondents may have different preferences or that institutional factors may facilitate different methods (such as a group-administration during student orientation or other group activity), scannable paper surveys will also be available for use as needed. All surveys will also be formatted to be completed as computer-assisted interviews (conducted by CEC interviewers) for respondents who prefer this modality. Regardless of modality, surveys will be designed with skip patterns so that respondents are presented only with questions that are relevant for them.


Consent procedures will be implemented as indicated by the Institutional Review Board at the University of California, Los Angeles. For most online surveys, this will be in the form of a screen after the introductory information that will indicate how data will be handled and the confidentiality of the responses, with contact information for CEC staff should respondents have questions or concerns prior to beginning the survey. All surveys will provide introductory information about the purpose of the survey and the expected time for completion.


Invitations to participate in online surveys will be sent by email, with an individualized link or access code for each respondent. Non-respondents will be prompted with four follow-up emails sent every 5-7 days. In addition, CEC staff will make reminder phone calls between each pair of successive email reminders.


Site Visits & Case Studies. Site visits to the BUILD institutions and NRMN meetings will be conducted over a two-day period by a team of at least three CEC investigators and trained staff. For case studies, appropriate participants in semi-structured interviews will be determined with the participating site in advance of the visit, along with a specific schedule for the interviews.


After appropriate consent procedures are completed, interviews will be conducted by two staff members: one serving as interviewer and the other as scribe; interviews will also be recorded. Please see Attachments 23 & 24 for the proposed site visit and case study protocols.


Institutional & Program Data: Institutional and program data will be requested annually from the Institutional Research Office and the BUILD Principal Investigator, respectively, at each BUILD institution.


As outlined in A.10, all data collected by the CEC will be stored such that restricted information (e.g., name, address, contact information) is kept in a different system from study data such as survey responses. The restricted information will be stored in a system behind the CEC firewall that operates on a private IP range; only a limited number of authorized CEC staff will be able to access these IP addresses. The study data will be maintained in a separate, encrypted system restricted to authorized users. Any paper files used for data collection (such as handwritten interview notes) will be stored in locked cabinets, with access limited and controlled as for the electronic data.



B.2.3. Analysis Procedures


Observation and interview data from site visits and case studies will be analyzed in two cycles (Saldaña, 2013). First, data will be assigned preliminary codes through attribute, structural, descriptive, and in vivo coding. During the second coding cycle, we will develop the categorical, thematic, and conceptual organization of the data. Through pattern coding, we will synthesize findings into more meaningful units of analysis (Miles & Huberman, 1994). By grouping similarly coded passages together and assessing the groupings for thematic commonalities, we will establish the final coding scheme. Finally, through elaborative coding, we will examine the data with an eye toward the consortium-level logic model (our conceptual framework). One drawback of a conceptual framework is that it may limit the inductive approach when exploring a phenomenon. Because the researcher's values are not “controlled for” in a qualitative study design, our researchers will keep reflective journals, a standard strategy for examining personal assumptions and subjectivities that provides transparency in the research process, and will discuss their notes to determine whether their thinking has become too driven by the framework. This reflexive practice will help ensure that our process remains inductive, not overly reliant on the conceptual program model, so that our work can yield contextually sensitive, pragmatic descriptions of the programs at the time of data collection. We will also verify that our multiple data sources converge in order to understand the overall case.


For quantitative data, descriptive statistics such as counts, ranges, means, and frequency distributions will be produced using SAS and Stata software. The statistical methods used to test our study objectives are described in Section B.2.1 above. In brief, we will use generalized linear mixed models that incorporate appropriate covariates and account for clustering of students within the same school. Either SAS or Stata will be used depending on the procedure required.


The evaluation of the BUILD and NRMN programs will make a number of important contributions. First, the information collected will be used to evaluate whether BUILD and NRMN achieve their desired impacts, the goals of the programs. Second, the results will inform the NIH Director, the Diversity Program Consortium, and CEC program staff about the outcomes associated with the BUILD and NRMN interventions and initiatives. Third, the results of the outcome evaluation will be distributed to the biomedical research and education communities. Fourth, the evaluation will enable effective BUILD and NRMN program activities to be disseminated across biomedical training programs and the NIH as a whole. To disseminate the findings to the evaluation and scientific communities, efforts will also be made to publish the results of the outcome evaluation in professional journals and to present the findings at conferences.



B.3 Methods to Maximize Response Rates and Deal with Nonresponse


In order to be responsive to the Coordination and Evaluation Center Funding Opportunity Announcement (RFA-RM-13-015) and to collect comprehensive, longitudinal data to assess the impact of interventions across the consortium, it is critical that we achieve two central goals. First, we must successfully recruit students and faculty from groups with varying degrees of involvement in the interventions that comprise this Consortium, ranging from heavily involved to not involved at all; the latter in particular are likely to be quite difficult to recruit, as they have no direct connection to the Consortium's activities. We need good recruitment for all of these groups in order to draw unbiased conclusions from the program evaluation. Second, once individuals are recruited into the project, we must retain them over the longitudinal follow-up. This is critical to our ability to derive valid conclusions regarding the impact of the interventions on primary outcomes, including rates of graduation and progression to graduate school for BUILD students, and career progress with respect to research, publications, and general career advancement for BUILD and NRMN faculty and trainees. Failure to achieve good response rates at baseline and/or to retain participants over the subsequent years of contact could result in non-representative samples and associated bias in the conclusions drawn from those data.


Multiple strategies will be used to maximize response rates. First, all surveys will be implemented with multiple modalities. The primary modality will be online, but all surveys will also be formatted to be completed as computer-assisted interviews (conducted by CEC interviewers) and as paper-based surveys. Thus, respondents will be able to choose the modality with which they are most comfortable.


In addition, we propose to offer incentives for two critical reasons. First, our target student and faculty populations include hard-to-reach underrepresented groups within student and faculty populations. These are groups that have traditionally participated in research studies such as this at lower rates (Sharkness, 2012; Porter & Whitcomb, 2005). It is essential that we achieve good representation for these groups. Second, our project depends critically on our ability to obtain longitudinal data for both students and faculty, and thus on participants’ willingness to honor annual requests that they take time from their busy schedules to complete our surveys. Though the burden to participants for the individual surveys is fairly minimal, it is essential that we maximize the willingness of our participants to respond repeatedly over time to our requests that they complete an annual survey.


Though our recruitment/retention strategies will include non-monetary approaches known to improve response rates (e.g., providing respondents information about the contribution they will be making to the understanding of the important issues on which the project focuses as a means of enhancing their intrinsic motivation to participate; Singer & Ye, 2013), we believe that successful recruitment/retention efforts will also require that we offer a monetary incentive to participants. We are requesting OMB approval to provide an incentive of $25 to each of our participants for each survey they are asked to complete. Without such an incentive, prior evidence suggests that we will have a difficult time recruiting a representative group, especially among some of the underrepresented groups of students and faculty who are key elements of our target populations (Estrada, Woodcock, & Schultz, 2014). There is strong and consistent evidence that provision of a monetary incentive to all participants as part of the survey request is the most effective strategy for ensuring better response rates (LaRose & Tsai, 2014; Singer & Ye, 2013). Once individuals have been recruited, incentives will also be critical to our ability to retain a representative sample over the longitudinal follow-up in order to track primary outcomes for the required program evaluation (Estrada, Woodcock, & Schultz, 2014). The choice of a $25 incentive is based on evidence from prior experimental work showing that incentives in this range can improve response rates significantly (To, 2015), as well as on the experience of members of our Consortium with the value of such incentives in maintaining better longitudinal response rates (Estrada, Woodcock, & Schultz, 2014).


For all respondents, the scientific value – both overall and for specific communities of students and faculty – will be emphasized during recruitment. Response rate targets for each BUILD program may also be publicized to build “friendly” competition through social media. Aggregate findings will be distributed to respondents through websites, electronic newsletters, scientific conference presentations, and reports in the public media.


Finally, we will continue to emphasize to respondents as they enroll in the various programs how important the continued tracking of information is to the long-term evaluation of the program. We will also ensure that the messaging accompanying each response request conveys the value of responding for individual participants.



B.4 Test of Procedures or Methods to be Undertaken


All surveys will be pilot tested with 5-9 respondents to ensure readability, flow, and time for administration. Semi-structured interviews will be pilot tested with 1-2 respondents to ensure flow and time for administration.



B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data


All plans for data collection and statistical analysis are the product of CEC investigators and staff, which includes PhD-level biostatistics faculty as well as researchers with extensive expertise in program evaluation, with input from Consortium Working Group members and the Executive Steering Committee. No one outside of the Consortium has been consulted for these aspects.












