SUPPORTING STATEMENT FOR 2900-0691
VA FORM 10-0439, LEARNERS’ PERCEPTIONS SURVEY (LPS)

B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

1. Provide a numerical estimate of the potential respondent universe and
describe any sampling or other respondent selection method to be used. Data on
the number of entities (e.g., households or persons) in the universe and the
corresponding sample are to be provided in tabular format for the universe as a
whole and for each strata. Indicate expected response rates. If this has been
conducted previously include actual response rates achieved.
A.(1) BACKGROUND: Trainees: We estimate that 120,000 trainees rotate through VA facilities each year. Trainees in accredited health professions education programs include students, interns, practicum participants, residents, and fellows.
Trainees come from different professions, disciplines, specialties, and subspecialties,
including medical students and MD and DO physician residents in medicine, medicine
subspecialties, hospital-based medicine, other medicine, surgery, and psychiatry; and
trainees from dentistry including dental students and residents, dental assistants and
dental hygienists; and nursing students including nurse aide and assistant, certified
registered nurse, clinical nurse specialist, licensed practical nurse, licensed vocational
nurse, registered nurse and nurse administrator, educator and midwifery, and nurse
practitioner; and trainees in associated health professions including audiology, blind
rehabilitation, chaplaincy, chiropractic medicine, dietetics, medical imaging, laboratory
sciences, licensed professional mental health counselor, marriage and family therapy,
medical/surgical support technology, occupational therapy, optometry, orthotics and
prosthetics, pharmacy, physical therapy, physician assistant, podiatry, psychology,
radiation therapy, recreation and manual arts therapy, other rehabilitation, social work,
speech pathology, and surgical technician and technology. These trainees come from
over 1,600 education programs that have been properly accredited by the appropriate
professional association and state licensing agencies to train health professionals for
licensing and certification purposes. All education programs whose trainees rotate
through VA medical centers have formal, signed, and current affiliation agreements with
Veterans Health Administration (VHA) within the Department of Veterans Affairs (VA) to
permit trainees to rotate through VA medical centers as part of an approved health
professions education curriculum.
These large numbers of diverse trainees from multiple education programs have highly variable starting times, appointment durations, and clinical assignment locations. Their appointment requirements are still processed manually, which strains human resource systems and personnel capacity.
A.(2) BACKGROUND: Registration Systems: It is anticipated that the “Clinical
Trainee Registration System of Records” (CTRSR) initiative would automate the
registration and appointment processes for VHA’s clinical trainees who rotate through
VHA medical facilities. The project also supports a consistent system-wide process to
implement Executive Order 10450, Security Requirements for Government
Employment, for clinical trainees. The “Clinical Trainee Registration System of
Records” is in the Business Requirements Development process, but has not yet been
guaranteed funding.
In addition, the Office of Academic Affiliations (OAA) within VHA is charged with
overseeing all health professions training affiliation agreements between VA and
affiliated universities, colleges, schools and institutions with accredited (for licensing)
health professions education programs. Over 99% of VA trainees come from these
programs. (Less than 1% of trainees come from VA sponsored and accredited
education programs). OAA is currently developing a clinical supervision reporting
system, or Clinical Supervision Index (CSI), that will monitor supervision of all health
professions trainees who are involved in patient care at a VA medical facility. The CSI
includes a Clinician Survey that is automated within VHA’s electronic health record and computer system. The CSI Clinician Survey will reach anyone who has an account on the Computerized Patient Record System (CPRS), including all clinical staff (comprising supervisors of trainees) and their clinical trainees who see patients. The survey will thus provide an accounting of all trainees in VA, each of whom, as part of their education program requirements, must have a VA computer account to enter patient progress notes. The CSI is currently
administered at one hospital (VA Loma Linda Healthcare System) for testing purposes
under a merit-reviewed grant from VHA’s Office of Research and Development.
Until either CTRSR or CSI is fully implemented, our estimates of response rates must
be based on approximations computed by OAA staff. These estimates are described
below, and are computed annually for the academic year that begins July 1st in the prior
calendar year and ends June 30th in the current calendar year.
A.(3) BACKGROUND: OAA’s National Evaluation Workgroup: OAA’s National Evaluation Workgroup consists entirely of VA employees, with and without compensation. It comprises OAA’s Directors of Medical and Dental Education, Associated Health Education, Nursing Education, Special Fellows Program, and Data Management and Support Center and Evaluation Units, and includes selected Associate Chiefs of Staff for Education and Designated Learning and Education Officers at local VA medical centers and VA health care systems, plus invited VHA health professions education leaders, scholars, and evaluation investigators. The group meets by scheduled conference call every Friday for 49 weeks a year. The Workgroup discusses and makes recommendations to OAA pertaining to the LPS survey questions; LPS administration to trainees; solicitation of potential respondents to take the LPS online; data processing and storage; process operations to ensure data security, data integrity, and maintenance of respondent confidentiality; data analyses to create findings; interpretation of those findings; and translation of those interpretations into program evaluations, performance assessments, and policy recommendations.
A.(4) BACKGROUND: Solicitation of Respondents: Without the benefit of CTRSR or a fully implemented CSI, potentially eligible respondents are solicited to take the survey through posters located in designated areas throughout the VA medical center where the trainee was assigned, by face-to-face contacts with their VA education directors, and through emails that OAA sends to a list of trainees through the Chief and/or Deputy Chief Officer for OAA. These emails request that trainees take the LPS, stress its importance for improving the quality of VA clinical learning environments and affiliate education programs, and provide appropriate links to the survey. OAA will coordinate these national efforts by sending emails to program directors of affiliate institutions and to the Associate Chiefs of Staff for Education and Designated Learning and Education Officers emphasizing the importance of the survey, and by providing reporting statistics (numbers completed) so that local administrators will be able to gauge their success in recruiting subjects to complete the survey. In addition, the process of taking the survey is simplified by asking respondents to enter designated websites or portals by clicking appropriate links.
A.(5) BACKGROUND: Dissemination of Results: Results, including findings,
interpretation, and policy analyses, are disseminated to OAA, VHA, and VA leadership,
as well as the Congress, Inspector General, and local health professions education
leadership and program directors at individual VA medical centers and health care
systems. Dissemination consists primarily of Current and Special Reports. Current
Reports are produced annually by means of a secure VA intranet website through a
data cube. The data cube permits appropriate users to query the LPS data and
produce graphs and tables showing LPS survey findings. Special Reports are produced
on request and include information for health professions program accreditation,
certification, performance progress, and program evaluation. Special Reports are also
produced for presentations at local and national professional meetings sponsored by
VHA and health professions education associations and societies. Special Reports are
also run for purposes of producing seminars for local practitioners, publications in peer
reviewed scientific journals, program evaluation of specific education programs, and
national presentations at education and scientific meetings for the broader health
professions education practice and research communities. A partial list of peer-reviewed presentations, reports, and publications is provided below:
1. Keitz, S.; Holland, G.J.; Melander, E.H.; Bosworth, H.; and Pincus, S.H., for the Learners’ Perceptions Working Group (Gilman, S.C.; Mickey, D.D.; Singh, D.; et al.). “The Veterans Affairs Learners’ Perceptions Survey: The Foundation for Educational Quality Improvement.” Academic Medicine, vol. 78, no. 9 (2003), pp. 910-917.

2. Singh, D.K.; Holland, G.J.; Melander, E.H.; Mickey, D.D.; and Pincus, S.H. “VA’s Role in U.S. Health Professions Workforce Planning.” Proceedings of the 13th Federal Forecasters Conference of 2003 (2004), pp. 127-133.

3. Singh, D.K.; Golterman, L.; Holland, G.J.; Johnson, L.D.; and Melander, E.H. “Proposed Forecasting Methodology for Pharmacy Residency Training.” Proceedings of the 15th Federal Forecasters Conference of 2005 (2005), pp. 39-42.

4. Chang, Barbara K.; Kashner, T. Michael; and Holland, Gloria J. “Evidence-based Expansion and Realignment of Physician Resident Positions.” Presented at the 3rd Annual Association of American Medical Colleges Physician Workforce Research Conference, Bethesda, MD, May 24, 2007.

5. Chang, B.K.; Holland, G.J.; Kashner, T.M.; Flynn, T.C.; Gilman, S.C.; Sanders, K.M.; and Cox, M. “Graduate Medical Education Enhancement in the VA.” Presented at the Association of American Medical Colleges Group on Resident Affairs Professional Development Meeting, Small Group Facilitated Discussion, Memphis, TN, April 22-25, 2007.

6. Chang, B.K.; Kashner, T.M.; and Holland, G.J. “Allocation Methods to Enhance Graduate Medical Education.” Presented at the International Medical Workforce Collaborative, Vancouver, B.C., Canada, March 21-24, 2007.

7. Cannon, Grant W.; Keitz, Sheri A.; Holland, Gloria J.; Chang, Barbara K.; Byrne, John M.; Tomolo, Anne; Aron, David C.; Wicker, Annie B.; and Kashner, T. Michael. “Factors Determining Medical Students’ and Residents’ Satisfaction during VA-Based Training: Findings from the VA Learners’ Perceptions Survey.” Academic Medicine, vol. 83, no. 6 (June 2008), pp. 611-620.

8. Chang, Barbara K.; Cox, Malcolm; Sanders, Karen M.; Kashner, T. Michael; and Holland, Gloria J. “Expanding and Redirecting Physician Resident Positions by the US Department of Veterans Affairs.” Presented at the 11th International Medical Workforce Collaborative, Royal College of Surgeons of Edinburgh, Edinburgh, UK, September 17, 2008.

9. Golden, Richard M.; Henley, Steven S.; White Jr., Halbert L.; and Kashner, T. Michael. “Correct Statistical Inferences using Misspecified Models with Missing Data with Application to the Learners’ Perceptions Survey.” Presented at the Joint Annual Convention of the 42nd Annual Meeting of the Society for Mathematical Psychology and the 40th Annual Conference of the European Mathematical Psychology Group, Amsterdam, Netherlands, August 1-4, 2009.

10. Kashner, T. Michael; Henley, Steven S.; Golden, Richard M.; Byrne, John M.; Keitz, Sheri A.; Cannon, Grant W.; Chang, Barbara K.; Holland, Gloria J.; Aron, David C.; Muchmore, Elaine A.; Wicker, Annie; and White Jr., Halbert L. “Studying the Effects of ACGME Duty Hours Limits on Resident Satisfaction: Results from VA Learners’ Perceptions Survey.” Academic Medicine, vol. 85, no. 7 (July 2010), pp. 1130-1139.

11. Golden, Richard M.; Henley, Steven S.; White Jr., Halbert L.; and Kashner, T. Michael. “Application of a Robust Differencing Variable (RDV) Technique to the Department of Veterans Affairs Learners’ Perceptions Survey.” Presented at the 43rd Annual Meeting of the Society for Mathematical Psychology, Portland, OR, August 7-10, 2010.

12. Kaminetzky, Catherine P.; Keitz, Sheri A.; Kashner, Michael; Aron, David C.; Byrne, John M.; Chang, Barbara K.; Clarke, Christopher; Gilman, Stuart C.; Holland, Gloria J.; Wicker, Annie; and Cannon, Grant W. “Training Satisfaction for Subspecialty Fellows in Internal Medicine: Findings from the Veterans Affairs (VA) Learners’ Perceptions Survey.” BMC Medical Education, vol. 11, no. 21 (2011), pp. 1-9 (http://www.biomedcentral.com/1472-6920/11/21).

13. Kashner, T. Michael; and Chang, Barbara K. “VA Residents Improve Access and Financial Value.” Presented at the Annual Meeting of the Association of American Medical Colleges, Denver, CO, November 4-9, 2011.

14. Lam, Hwai-Tai C.; O’Toole, Terry G.; Arola, Patricia E.; Kashner, T. Michael; and Chang, Barbara K. “Factors Associated with the Satisfaction of Millennial Generation Dental Residents.” Journal of Dental Education, vol. 76, no. 11 (November 2012), pp. 1416-1426.

15. Byrne, John M.; Chang, Barbara K.; Gilman, Stuart; Keitz, Sheri A.; Kaminetzky, Cathy; Aron, David; Baz, Sam; Cannon, Grant; Zeiss, Robert A.; and Kashner, T. Michael. “The Primary Care-Learners’ Perceptions Survey: Assessing Resident Perceptions of Internal Medicine Continuity Clinics and Patient-Centered Care.” Journal of Graduate Medical Education, vol. 5, no. 4 (December 2013), pp. 587-593.

16. Chang, Barbara; Muchmore, Elaine; and Kashner, T. Michael. “Taking the Pulse of Your GME Training Programs.” Presented at the 2014 AAMC Group on Resident Affairs Spring Meeting, Phoenix, AZ, May 4-7, 2014.

17. Byrne, John M.; Kashner, T. Michael; Gilman, Stuart C.; Wicker, Annie B.; Bernett, David S.; Aron, David C.; Brannen, Judy L.; Cannon, Grant W.; Chang, Barbara K.; Hettler, Debbie L.; Kaminetzky, Catherine P.; Keitz, Sheri A.; Zeiss, Robert A.; Golden, Richard M.; Paik, DaeHyun; and Henley, Steven S. “Do Patient Aligned Medical Team Models of Care Impact VA’s Clinical Learning Environments.” Presented at the 2015 Health Services Research and Development / Quality Enhancement Research Initiative (HSR&D/QUERI) National Conference, Philadelphia, PA, July 8-10, 2015.

18. Perez, Elena V.; Byrne, John M.; Durkin, Rob; Wicker, Annie B.; Henley, Steven S.; Golden, Richard M.; Hoffman, Keith A.; Hinson, Robert S.; Aron, David C.; Baz, Samuel; Loo, Lawrence K.; Velasco, Erwin D.; McKay, Tracy; and Kashner, T. Michael. “Clinical Supervision Index: Measuring Supervision of Physician Residents in VA Medical Centers.” Presented at the 2015 Health Services Research and Development / Quality Enhancement Research Initiative (HSR&D/QUERI) National Conference, Philadelphia, PA, July 8-10, 2015.

19. Kashner, T. Michael; Hettler, Debbie L.; Zeiss, Robert A.; with Aron, David C.; Brannen, Judy L.; Byrne, John M.; Cannon, Grant W.; Chang, Barbara K.; Dougherty, Mary B.; Gilman, Stuart C.; Holland, Gloria J.; Kaminetzky, Catherine P.; Wicker, Annie B.; Bernett, David S.; and Keitz, Sheri A. “Has Interprofessional Education Changed Learning Preferences? A National Perspective.” Invited resubmission to Health Services Research.

B. RESPONSE RATE: We approximate the response rate at 51%. This figure is based on 16,000 unique respondents who entered the LPS link to take the survey, out of 45,000 unique recipients of an email campaign (up to 3 emails per addressee) sent to existing and former trainees over three months during the Spring of each academic year. Of those recipients, 70% are believed to be unique and eligible to complete the LPS, by being near the end of a VA rotation as part of a required curriculum from an accredited health professions education program and by having access to a VA intranet computer to take surveys at the time of the email.
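As a check on the stated figure, the approximation follows directly from these counts (a worked calculation, assuming the 70% eligibility estimate applies uniformly to the 45,000 email recipients):

\[ \text{response rate} \approx \frac{16{,}000}{0.70 \times 45{,}000} = \frac{16{,}000}{31{,}500} \approx 0.51 = 51\% \]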
C.(1) JUSTIFICATION: Response rate estimate: Until the CSI or CTRSR is administered [§1.A(2)], and given the complexity and diversity of trainees who rotate through VHA medical facilities [§1.A(1)], we used the email solicitation [§1.A(4)], under the guidance and direction of OAA’s National Evaluation Workgroup [§1.A(3)], to estimate our response rate from those trainees that OAA could reach by email [§1.B]. The strategy is appropriate to our intended use and has been confirmed by reporting the LPS methodology and findings in peer-reviewed scientific journals [§1.A(5)].
C.(2) JUSTIFICATION: Census Sample: At present, using a sampling methodology is
not possible in the absence of an automated trainee registration system [§1.A(2)]. To
reduce respondent burden, we intend to make the survey available to trainees at VA
training medical facilities throughout the academic year so they can take the survey as
part of their out-processing. Responses captured via the census approach provide an
adequate sample for the intended purpose of this survey; i.e., to accurately and
substantially evaluate VA performance at the facility and individual program levels while
minimizing respondent burden.
C.(2)(i) Process limitations. The LPS is available starting in September in order to
capture non-academic year associated health trainees. We increase our efforts in April
through intensified marketing to increase the population of trainees at VA medical
facilities who take the LPS. The VA will continue to accept responses from trainees up
to three months following March 31. This approach addresses concerns that: (1) only a
small number of trainees may be enrolled at each VA program level; (2) every VA
medical center and program must be evaluated as part of their performance
assessment; and (3) trainees are offered an opportunity to provide feedback to faculty
preceptors, mentors, and supervising attending physicians. This approach is actually
preferred to any type of probability sampling within each program as such sampling
would pose risks to the confidential format that is essential to getting accurate and
unbiased responses from trainees. For example, we provide aggregated results only to
reporting units (a given facility for trainees in a designated specialty and in an academic
year) of no fewer than 8 respondents to minimize re-identification risk that would expose
the responding trainees to loss of confidentiality. In addition, specified elements and summary questions (described below) are used in lieu of open-ended questions, which we have found are (i) unreliable; (ii) difficult to analyze with current resources in the absence of unstructured text analysis technology; (iii) time consuming for the respondent and thus often left unanswered; and (iv) not amenable to computing scores that permit comparisons across facilities and over time.
C.(2)(ii) Scope and Question Content. The LPS survey is designed to provide information at the program level for each VA facility, by profession, discipline, specialty, and subspecialty, and by academic level. The usefulness of this information to each medical center depends on having an adequate number of respondents from each training program. The information collected focuses on assessing VA’s performance in providing productive and effective learning, working, and clinical environments, with sufficient detail to enable program directors and VHA clinical and education leadership to know how to improve performance at the facility and professional education program level.
(A) Facility-wide summary. The facility-wide summary is constructed from individual questions asking whether the respondent would use the program again, whether the respondent could be recruited to work for VA as a future employer, and how the respondent rates patient care quality.
(B) Environment-level satisfaction domains. There are nine satisfaction domains that
describe a trainee’s clinical learning experiences, broken down into learning, working,
and clinical experiences. Learning experiences are broken down into 15 element
questions and a domain summary question where respondents rated their satisfaction
with the clinical learning environment, and 13 element questions and a domain
summary question for the preceptors and faculty domain. Working experiences are
broken down into 9 elements and a summary question where respondents rated their
satisfaction with the working environment, 8 elements and a summary question for
physical environment, and 7 elements and a summary question for personal
experience. Clinical experiences are broken down into 7 element questions and a
summary question where respondents rated their satisfaction with the clinical
environment, 13 elements and a summary question rating the availability of support
staff, 6 elements and a summary question rating the quality of support staff given
availability, and 6 elements and a summary question rating the systems and processes
of handling medical errors.
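For reference, the nine environment-level satisfaction domains and their element counts, restated from the preceding paragraph, are summarized below (each domain also includes one domain summary question):

Experience area   Satisfaction domain                                   Element questions
Learning          Clinical learning environment                         15
Learning          Preceptors and faculty                                13
Working           Working environment                                    9
Working           Physical environment                                   8
Working           Personal experience                                    7
Clinical          Clinical environment                                   7
Clinical          Availability of support staff                         13
Clinical          Quality of support staff (given availability)          6
Clinical          Systems and processes for handling medical errors      6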
(C) Fact-based domains. Fact-based domains include 2 element questions where respondents describe the psychological safety of their clinical learning experience; 17 element questions, plus fact-based summary and satisfaction-based summary questions, where respondents rate their VA experience in terms of patient- and family-centered care and interprofessional team care; and 9 element questions, plus fact-based summary and satisfaction-based summary questions, where respondents rate their VA experience in terms of interprofessional team care.
(D) Response categories. Respondents answer on 5-point Likert scales: for satisfaction-based questions, “very satisfied,” “somewhat satisfied,” “neither satisfied nor dissatisfied,” “somewhat dissatisfied,” or “very dissatisfied”; for fact-based questions, “strongly agree,” “agree,” “neither agree nor disagree,” “disagree,” or “strongly disagree.”
(E) Scoring. Both satisfaction and fact-based domains are scored using the following
methods.

For elements, (ai) binary element p-scores are calculated by assigning the value of one to “very satisfied” and “somewhat satisfied” responses and the value of zero to “neither satisfied nor dissatisfied,” “somewhat dissatisfied,” or “very dissatisfied” responses to satisfaction-based questions, and the value of one to “strongly agree” and “agree” responses and the value of zero to “neither agree nor disagree,” “disagree,” or “strongly disagree” responses to fact-based questions. (aii) Ordinal element o-scores are calculated by assigning integer values to response categories. For satisfaction-based questions, the value of five is assigned to “very satisfied,” four to “somewhat satisfied,” three to “neither satisfied nor dissatisfied,” two to “somewhat dissatisfied,” and one to “very dissatisfied.” For fact-based questions, the value of five is assigned to “strongly agree,” four to “agree,” three to “neither agree nor disagree,” two to “disagree,” and one to “strongly disagree.”
For domains, (bi) binary summary p-scores are calculated by assigning the value of one
to “very satisfied” and “somewhat satisfied” responses and the value of zero to “neither
satisfied nor dissatisfied,” “somewhat dissatisfied,” or “very dissatisfied” responses to
satisfaction-based responses, and the value of one to “strongly agree” and “agree”
responses and the value of zero to “neither agree nor disagree,” “disagree,” or “strongly
disagree” for fact-based responses. (bii) Ordinal summary o-scores are computed by
assigning integer values to response categories to summary domain questions. For
satisfaction-based summary domain questions, the value of five is assigned to “very
satisfied,” four to “somewhat satisfied” three to “neither satisfied nor dissatisfied,” two to
“somewhat dissatisfied,” and one to “very dissatisfied”. For fact-based summary
domain questions, the value of five is assigned to “strongly agree” four to “agree,” three
to “neither agree nor disagree,” two to “disagree,” and one to “strongly disagree”
responses. (biii) Mean element domain scores, or m-scores, are computed for each domain by taking the mean of o-scores across the element questions comprising a given domain. This is statistically permissible because the second-highest eigenvalue among principal components of the element responses for a domain is less than one. Psychometrically, this means that m-scores behave as in Rasch models and thus may be interpreted as a sufficient statistic representing the latent domain-wide score. (biv) Mean element z-scores are computed by subtracting the mean of m-scores across respondents from the respondent’s m-score, and dividing the difference by the standard deviation of m-scores across respondents. (bv) Mean element p-scores are computed for each domain by assigning the value of one to an m-score greater than three, and a value of zero if the m-score is three or smaller.
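To make the scoring concrete, the sketch below computes element o-scores and p-scores, the domain m-score, z-scores, and mean element p-scores for a satisfaction-based domain. It is a minimal illustration of the rules above, not production code; the function names and example responses are hypothetical.

```python
import statistics

# Ordinal coding for satisfaction-based responses (fact-based items use the
# analogous "strongly agree" ... "strongly disagree" mapping).
O_SCORE = {
    "very satisfied": 5,
    "somewhat satisfied": 4,
    "neither satisfied nor dissatisfied": 3,
    "somewhat dissatisfied": 2,
    "very dissatisfied": 1,
}

def element_scores(response):
    """Return (o-score, p-score) for one element response."""
    o = O_SCORE[response]
    p = 1 if o >= 4 else 0        # "very/somewhat satisfied" -> 1, otherwise 0
    return o, p

def domain_m_score(responses):
    """m-score: mean of o-scores across a domain's element questions."""
    return statistics.mean(element_scores(r)[0] for r in responses)

def domain_z_scores(m_scores):
    """z-scores: center each respondent's m-score and scale by the SD across respondents."""
    mean_m = statistics.mean(m_scores)
    sd_m = statistics.stdev(m_scores)
    return [(m - mean_m) / sd_m for m in m_scores]

def domain_p_score(m_score):
    """Mean element p-score: 1 if the m-score exceeds three, else 0."""
    return 1 if m_score > 3 else 0

# Hypothetical example: three respondents answering a 3-element domain.
respondents = [
    ["very satisfied", "somewhat satisfied", "neither satisfied nor dissatisfied"],
    ["somewhat dissatisfied", "very dissatisfied", "somewhat satisfied"],
    ["very satisfied", "very satisfied", "somewhat satisfied"],
]
m = [domain_m_score(r) for r in respondents]
print(m, domain_z_scores(m), [domain_p_score(x) for x in m])
```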
(F) Analyses. Binary, ordinal, and interval domain scores are described in terms of frequencies and non-parametric statistics, and are analyzed, respectively, using binomial logistic regression, generalized linear models with a cumulative logit link function and multinomial distribution, and generalized linear models with an identity link function and a normal or gamma distribution, whichever displays the best fit. More details are provided below on the adjusted scores used in making cross-facility and over-time comparisons that control for response biases and for variation in factors outside the locus of control of local VA medical center medical staff, education administrators, and program directors of VA affiliate health professions education universities, schools, colleges, and institutions [§3.2].
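As an illustration of the analytic approach just described, the sketch below fits a binomial logistic regression to binary p-scores and a Gaussian identity-link GLM to m-scores using the statsmodels library. It is a simplified example with hypothetical column names (p_score, m_score, academic_level, facility) and omits the ordinal (cumulative logit) model; it is not the production analysis code.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per respondent for a given domain.
df = pd.read_csv("lps_domain_scores.csv")  # columns: p_score, m_score, academic_level, facility

# Binary p-scores: binomial logistic regression.
logit_fit = smf.glm(
    "p_score ~ C(academic_level) + C(facility)",
    data=df,
    family=sm.families.Binomial(),
).fit()

# Interval m-scores: GLM with identity link and normal (Gaussian) distribution;
# a Gamma family could be substituted if it displays a better fit.
gauss_fit = smf.glm(
    "m_score ~ C(academic_level) + C(facility)",
    data=df,
    family=sm.families.Gaussian(),
).fit()

print(logit_fit.summary())
print(gauss_fit.summary())
```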

2. Describe the procedures for the collection of information, including:

a. Statistical methodology for stratification and sample selection

The VA will take a census of all trainees present in VA teaching facilities
participating in the survey during the academic year.
b. Estimation procedure

We propose to survey an all-inclusive population, and hence there is no estimation procedure for sampling. The VA will compute simple means and frequencies of each element and domain using binary, ordinal, p-score, and m-score values, computed by health profession, academic level, facility, VISN, and for all VA, by year and across years. Scoring is described above at [§1.C(2)(ii)(E)].
c. Degree of accuracy needed

To increase the response rate so that it better represents the population, reminder e-mails are sent to the Associate Chief of Staff for Education or equivalent. Facility-specific response numbers will accompany these e-mails. This is also described in [§1.A(4)].
d. Unusual problems requiring specialized sampling procedures

There are no unusual problems associated with the selection of the time period for
collection. The census sample obtained from our administration procedures yields
statistically valid and usable results.
e. Any use of less frequent than monthly data collection to reduce burden

Not applicable. This survey is collected annually. Making the survey available for most of the academic year reduces the overall response burden by (a) spreading out data collection, (b) offering the survey at times convenient to trainees, and (c) making it available when trainees out-process from their clinical training rotations.

3. Describe methods to maximize response rate and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.
3.1 Response Rate
A. Overview: A response rate will be computed as the percent of trainees who responded to the survey out of the number of trainees who were contacted by email, assembled from local and national contact lists. To maximize the number of respondents, the Office of Academic Affiliations will send reminder e-mails to the Associate Chief of Staff for Education or equivalent Designated Learning and Education Officers, as appropriate. Facility-specific response numbers will accompany these e-mails. See also [§1.A(4)].
The response rate bias will be computed by comparing characteristics of trainees who completed the survey with those of registrants who did not. We will know the program characteristics of those who did not complete the survey from information supplied when trainees first log in to take the survey but fail to complete it. Further, respondent characteristics will be derived and normalized against similar characteristics in the trainee population as a whole and against trainees completing the survey over the last 5 years.
In prior years, we have reported findings that physician resident respondents have academic level and specialty distributions similar to those of all trainees in Accreditation Council for Graduate Medical Education (ACGME) approved residency training programs in the U.S., when OB-GYN and Pediatric residency programs are excluded [§1.A(5), references #10 and #19].
B. Adjusted Scores: We compute adjusted p-, o-, m-, and z-scores for each
satisfaction and fact-based domain in order to account for response biases when
ranking facilities or comparing how well a facility has done over time, or comparing
trainees by academic level or health professions education program.
Adjusted scores are computed to reflect differences in the distribution of all trainees,
including both respondents and non-responders. Scores are adjusted based on
generalized estimating equations and generalized linear models where trainee and
facility-level characteristics serve as independent variables. Trainee characteristics
include training program, specialty and subspecialty, trainee’s academic level, facility
complexity level based on a five-item ordinal score provided by VA, and the mix of
patients that the trainee sees including percent of their patients seen who are age 65
and over, female gender, with a chronic medical illness, with a chronic mental health
illness, with multiple medical illnesses, with alcohol or substance dependence, low
income or socioeconomic status, and without social or family support. These will be entered as median- and mean-centered values, where medians and means are computed for all trainees. In addition, we enter a response bias index, computed as described below, as an additional independent predictor to adjust the final scores.
C. Response Bias Index: Respondents are asked to describe their satisfaction with an
element or domain by selecting from among five response choices (“very satisfied,”
“somewhat satisfied,” “neither satisfied nor dissatisfied,” “somewhat dissatisfied,” “very
dissatisfied”). In so doing, respondents must define each category and mentally
compute cut points to translate the intensity of their satisfaction or dissatisfaction into a
specific choice from among the five response options. Respondents may vary in how
they define those cut values. For example, a rater who is only moderately satisfied may report “very satisfied,” while another respondent feeling the same intensity of satisfaction may choose to report “somewhat satisfied” on the survey.

This response bias phenomenon can cause problems when response rates are low, because selection biases can then confound these response biases. However, the LPS survey includes questions that can be used to measure response bias directly. The basis for computing a response bias index comes from our observation that different trainees report satisfaction differently for essentially the same, or common, experience. Such common experiences include interactions with VA’s computer system, facility-level parking, or the convenience of the facility’s location among trainees who report on the same facility for the same time period. Here, variability of responses across responders would reflect, in part, differences in how respondents chose a response option when describing the intensity of their satisfaction.
To account for these responder biases, we developed a response bias index,
nicknamed responder “grumpiness.” The theory behind response bias indexes is that
all respondents who report on the same experience should, at least theoretically, be
expected to assign the same rating. Thus a response bias index could be computed by
comparing a respondent’s actual satisfaction rating with the average among other
trainees who reported on the same experience.
The response bias index is computed as a mean-centered m-score from three existing element questions taken from two domains. These “common” elements describe experiences that may vary across facilities, but do not vary between trainees reporting on the same facility and time period. These common element questions ask respondents to rate their satisfaction with the facility’s “Computerized Patient Record System (CPRS)” as a Working Domain element, and with the “convenience of facility location” and “parking” as elements of the Physical Environment Domain. Responses are recorded on five-point, ordered Likert scales. The responses are recoded so that “very satisfied” is assigned a value of five, “somewhat satisfied” a value of four, “neither satisfied nor dissatisfied” three, “somewhat dissatisfied” two, and “very dissatisfied” one. The mean of these recoded responses over the three elements is calculated for each respondent. That is, the response bias index equals the m-score for
the three common elements minus the mean of corresponding m-scores across all
respondents at the same facility and time period. To ensure that all trainees were
reporting about the same experience, facilities are defined in terms of a 6-digit facility
code.
The facility’s trainees include those who took the LPS in either the same academic year,
or an earlier or later academic year. To account for small changes that may have
occurred in computers, convenience, and parking over time, trainee ratings were
weighted to reflect differences in time that lapsed between when the given respondent
completed the LPS, and when each facility trainee completed the LPS. Scores taken
from trainees who responded to the LPS in the reporting year were given a weight of
one (1 = 1/(1+0)). Scores taken from trainees who responded to the LPS either one year later or one year earlier than the reporting year were assigned a weight of 0.50 (computed as 1/(1+1) = 0.50). Scores that are two years apart were weighted by 0.33
(computed as: 1/(1+2)=0.33). This continues so that scores up to 10 years out were
assigned a weight of 0.09 (computed as: 1/(1+10)=0.09). The weighted average is
computed by first multiplying the trainee rate (mean of the three element rates) by the
corresponding weight (based on when the trainee took the LPS), summing the weighted
rates over all of the facility’s trainees, and dividing the weighted sum by the sum of
weights over all of the facility’s trainees. Note that for a given year, the information used to compute the response bias index and correct for responder biases is taken from both years prior to, and years after, the year in which the responder completed the LPS survey of interest.
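The sketch below illustrates the weighted computation just described: recode the three common elements, average them per respondent, form a time-lapse weighted facility mean, and take the difference. It is a simplified illustration with hypothetical field names (year, ratings), not the production code.

```python
from dataclasses import dataclass

# Ordinal recoding for the three "common" elements (CPRS, location convenience, parking).
RECODE = {
    "very satisfied": 5,
    "somewhat satisfied": 4,
    "neither satisfied nor dissatisfied": 3,
    "somewhat dissatisfied": 2,
    "very dissatisfied": 1,
}

@dataclass
class CommonRating:
    year: int       # academic year in which the trainee completed the LPS
    ratings: list   # responses to the three common element questions

def common_m_score(rating):
    """Mean of the recoded responses over the three common elements."""
    return sum(RECODE[r] for r in rating.ratings) / len(rating.ratings)

def facility_weighted_mean(facility_ratings, reporting_year):
    """Time-lapse weighted mean of common m-scores at one 6-digit facility."""
    num = den = 0.0
    for r in facility_ratings:
        weight = 1.0 / (1 + abs(r.year - reporting_year))  # 1, 0.50, 0.33, ...
        num += weight * common_m_score(r)
        den += weight
    return num / den

def response_bias_index(respondent, facility_ratings):
    """Respondent's common m-score minus the facility's weighted mean."""
    return common_m_score(respondent) - facility_weighted_mean(facility_ratings, respondent.year)
```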
D. Response rate bias: The response rate bias is computed by taking the difference between adjusted and unadjusted scores as a percent of the adjusted score. Statistical significance and confidence intervals will be determined by bootstrapping 10,000 samples. We also compute a robust estimate of the response rate bias by taking the mean response rate bias across all bootstrapped samples. Adjusted scores will not be corrected for facility nesting, since we wish to know how the distribution of non-responses across facilities impacts national performance estimates. The robust estimate of the response rate bias measures the potential distortion of national performance estimates when registrants fail to respond to the survey.
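A minimal sketch of the bootstrap step described above, assuming adjusted_score and unadjusted_score are functions that recompute the two national scores from a resampled set of respondent records (both are hypothetical placeholders for the scoring routines described in this statement):

```python
import random
import statistics

def response_rate_bias(records, adjusted_score, unadjusted_score):
    """Difference between adjusted and unadjusted scores as a percent of the adjusted score."""
    adj = adjusted_score(records)
    return (adj - unadjusted_score(records)) / adj

def bootstrap_bias(records, adjusted_score, unadjusted_score, n_boot=10_000, seed=0):
    """Bootstrap the response rate bias: robust (mean) estimate and 95% percentile interval."""
    rng = random.Random(seed)
    biases = []
    for _ in range(n_boot):
        resample = [rng.choice(records) for _ in records]   # sample with replacement
        biases.append(response_rate_bias(resample, adjusted_score, unadjusted_score))
    biases.sort()
    robust = statistics.mean(biases)
    lo, hi = biases[int(0.025 * n_boot)], biases[int(0.975 * n_boot) - 1]
    return robust, (lo, hi)
```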
We also plan to present results that control for response biases. This will enable us to compare results across facilities, across disciplines, and within a facility over time. Such comparisons are necessary to evaluate the progress VA is making toward its clinical professional education mission. This will be accomplished with a differencing variable technique, derived from difference-in-differences analyses, that is designed to control for both observed and unobserved differences between the responding sample and the population. Estimates for each domain will be computed by estimating the following regressions:

\[ y_{ijk} = \beta_0 + \beta_1 (X_i - \bar{X}) + \beta_2 t_k + \beta_3 F_{i(j)} + \beta_{4(j)}\, t_k F_{i(j)} + u_i + v_{ij} + z_{ijk} \qquad \text{Eq. 1} \]

\[ y_{ij} = w_0 + w_1 y_{ij1} + w_2 y_{ij2} + \cdots + w_k y_{ijk} + \cdots + w_K y_{ijK} + u^{0}_{ij} \qquad \text{Eq. 2} \]

where \(y_{ijk}\) is the kth element score (for the domain) of respondent i in VA facility j, and \(y_{ij}\) is the summary domain score (summarizing all k = 1, ..., K elements within the domain). \(X_i\) denotes the characteristics of respondent i, and \(\bar{X}\) is the mean characteristics of the reference respondent sample. \(t_k\) is the continuous differencing variable that ranges from zero to one for the kth element of the domain, where t = 0 indicates the element should be “common” to all VA (e.g., accessing patient records, where little variation in respondents’ VA experiences is expected with a nation-wide uniform electronic medical record system), and t = 1 indicates the response should be very specific to the respondent’s particular VA experiences (e.g., interaction with a preceptor or clinical supervisor). \(F_{i(j)}\) equals one if respondent i is located in facility j, and zero otherwise. The w’s are coefficients, normalized so that \(w_1 + \cdots + w_K\) equals one, describing how much each element loads onto the domain summary score. The w’s are used to weight records by facility and respondent for each domain when estimating the parameters of the model in Eq. 1. The terms \(u\), \(v\), \(z\), and \(u^{0}\) are random variates assumed to be independent and identically normally distributed. We can thus compute an adjusted score for the given domain for facility j from \(\beta_{4(j)}\), the interaction term for the given hospital j, with the reference group of facilities used to standardize the scores. The reference facilities and reference respondents are taken from the prior year. We will then compute \(\beta_{4(j)}\) for each facility by year, permitting us to compare progress within a facility over time for the past three years. A similar strategy is proposed to compute an overall hospital score.
Response bias, computed as a percent difference, may be approximated by

\[ \frac{\bar{y}_j - \bar{y}}{\bar{y}} \;-\; \frac{\beta_{4(j)} - \bar{\beta}_4}{\bar{\beta}_4} \qquad \text{Eq. 3} \]

Finally, we intend to bootstrap samples (10,000 with replacement) to compute confidence intervals for the estimated performance outcomes.
4. Describe any tests of procedures or methods to be undertaken. Testing is
encouraged as an effective means of refining collections to minimize burden and
improve utility. Tests must be approved if they call for answers to identical
questions of 10 or more individuals.
Not applicable. The survey instrument has been previously fielded and extensively tested, with findings published in scientific journals [§1.A(5)]. The questions are essentially the same as those previously approved by OMB, with corrections; updated discipline and professional categories to reflect changes in education program designations and discipline classifications; and the elimination of questions shown in subsequent analyses to provide no additional information beyond the responses to the remaining questions.
5. Provide the name and telephone number of individuals consulted on statistical
aspects of the design and the name of the agency unit, contractor(s), grantee(s),
or other person(s) who will actually collect and/or analyze the information for the
agency.
Name of agency unit collecting and analyzing the information:
Office of Academic Affiliations (10A2D)
Veterans Health Administration
Department of Veterans Affairs Central Office
Washington, DC 20420
(202) 461-9490
Individual overseeing the analysis of information:
T. Michael Kashner, PhD JD
Senior Investigator and Health Science Specialist
Office of Academic Affiliations, VHA, Department of Veterans Affairs
(909) 825-7084 ext. 2853
and
Research Professor of Medicine
Loma Linda University School of Medicine
Loma Linda, CA
Individuals consulted on statistical aspects of the design:
Richard M. Golden, PhD, M.S.E.E.,
Professor Cognitive Science and Engineering,
School of Behavioral and Brain Sciences, GR4.1
800 West Campbell Road,
University of Texas at Dallas,
Richardson, TX 75083-3021
(972) 883-2423
Steven S. Henley, MS
Research Professor of Medicine
Loma Linda University School of Medicine
Loma Linda, CA
and
President, Martingale Research Corporation
Plano, TX
Individual overseeing the collection of information:
Christopher T. Clarke, PhD
Chief Administrative Officer
Office of Academic Affiliations, VHA, Department of Veterans Affairs
Washington, DC
(202) 461-9514
Ed McKay
Director, Data Management Center
Office of Academic Affiliations, VHA, Department of Veterans Affairs
St. Louis, MO
(314) 894-5760, x1
