
SUPPORTING STATEMENT FOR LEARNERS’ PERCEPTIONS SURVEY, Cont’d


B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS



1. Provide a numerical estimate of the potential respondent universe and describe any sampling or other respondent selection method to be used. Data on the number of entities (e.g., households or persons) in the universe and the corresponding sample are to be provided in tabular format for the universe as a whole and for each stratum. Indicate expected response rates. If the collection has been conducted previously, include the actual response rates achieved.


The following proposed changes in the methods for conducting the LPS are provided in response to OMB's terms of clearance at the time of the last approval in August 2006. Specifically, we were requested to address the following:

  • The response rates obtained during the current approval period;

  • The utility of continuing to conduct the survey as a census rather than selecting a probability sample; and

  • In consultation with the steering committee, the feasibility of incorporating a non-response bias analysis into the next version of the survey.


These queries are addressed in turn below, with our proposed revised methodology presented largely under item 1.2.


The January 2007 OMB Terms of Clearance requested that VA address the utilization of the proposed trainee registration system for this project. In summary, the “Clinical Trainee Registration System of Records” initiative will automate the registration and appointment processes for VHA’s clinical trainees (i.e., physician residents, dentists, nurses, and trainees in psychology, optometry, podiatry, pharmacy, and other associated health disciplines) who rotate through VHA medical facilities. Over 100,000 trainees per year rotate through VA facilities, with highly variable appointment durations and assignment locations. The appointment and processing requirements of such a large number of trainees strain current human resource systems and personnel capacity. The project also supports a consistent, system-wide process to implement Executive Order 10450, Security Requirements for Government Employment, for clinical trainees. The “Clinical Trainee Registration System of Records” is in the Business Requirements Development process but has not yet been guaranteed funding.


1.1. In response to the first item, the response rates to the current LPS are not available at the time of submission of this request. As a result of changes and restructuring of the VA IT Organization, the VA-wide automated trainee registration system that was mentioned in the prior submission and which was alluded to in OMB’s terms of clearance (i.e., “the new system for employee registration will provide significant benefits for surveying and tracking potential respondents”) is not yet available. A request for development has been submitted and is under review for approval and funding by OIT. Moreover, due to changes in the OIT project planning and funding process, a truly functional registration system for trainees is unlikely to be available for the next two to three years.


1.2. Justification of continuing to conduct the LP survey as a census rather than a probability sample – and proposed changes in methods.


At present, using a sampling methodology is not possible. To reduce respondent burden, we intend to make the survey available to trainees at VA training medical facilities throughout the academic year so they can take the survey as part of their out-processing. The 14,000 responses captured via the census approach provide an adequate sample for the intended purpose of this survey, i.e., to accurately and substantively evaluate VA performance at the facility and individual program levels while minimizing respondent burden. As soon as the Trainee Registration System is available, however, we will be able to draw a valid probability sample and compute a true response rate.



a. Overall design

The rationale for using a census approach is that, until a trainee registration system is available, a sample is not possible. As noted previously, the LPS is available starting in September in order to capture associated health trainees whose training does not follow the academic year; in April, we increase efforts within VA facilities, through intensified marketing, to increase the number of trainees at VA medical facilities who take the LPS. This approach addresses concerns that: (1) only a small number of trainees may be enrolled at each VA program level; (2) every VA medical center and program must be evaluated as part of its performance assessment; and (3) trainees should be offered an opportunity to provide feedback to faculty preceptors, mentors, and supervising attending physicians. This approach is preferred to probability sampling within each program, as such sampling would pose risks to the confidential format that is essential to obtaining accurate and unbiased responses from trainees. The VA will continue to accept responses from trainees for up to four months following March 31.


b. Scope

The LPS is designed to provide information at the program level for each VA facility, contingent on collecting an adequate number of responses from each training program. The information collected focuses on assessing the current learning environment and providing details on how the facility may improve. The survey results provide (A) a facility-wide summary score on a 100-point scale; (B) specific domain scores that describe aspects of the learning environment; and (C) a breakdown of each domain score into specific elements that detail where the facility failed or succeeded in producing a high-quality learning environment for the trainee. Specified elements are used in lieu of open-ended questions, which we have found (i) are unreliable; (ii) are difficult to analyze with current resources in the absence of unstructured-text analysis technology; (iii) are time consuming for the respondent and thus often remain unanswered; and (iv) do not lend themselves to computing scores that permit comparisons across facilities and over time.
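For illustration only, the sketch below shows one way the three levels of results could be rolled up from a long-format table of Likert responses. The column names (facility, domain, element, rating, score_0_100) and the use of simple averaging for domain scores are assumptions for this sketch, not the production schema or scoring algorithm; the 100-point facility score is collected as its own survey item, so it is averaged directly rather than derived from the domains.

```python
import pandas as pd

def element_means(likert: pd.DataFrame) -> pd.Series:
    # Mean 1-5 rating for each element within each facility and domain
    # (the element-level detail in item C above).
    return likert.groupby(["facility", "domain", "element"])["rating"].mean()

def domain_scores(likert: pd.DataFrame) -> pd.Series:
    # Assumed aggregation: a domain score is the average of that domain's
    # element means within each facility (item B above).
    return element_means(likert).groupby(level=["facility", "domain"]).mean()

def facility_summary(overall: pd.DataFrame) -> pd.Series:
    # Facility-wide summary (item A): mean of respondents' 0-100 overall
    # ratings, which are collected as a separate survey item.
    return overall.groupby("facility")["score_0_100"].mean()
```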


c. Content

The LP survey provides two options, one for physician residents and one for other (associated health) trainees. When the survey is accessed, the respondent must select his or her category. Both survey options include questions identifying the respondent’s host VA medical facility, program status, level of training, and gender. All respondents are asked to rate their learning experience at the VA medical facility on a 100-point scale.


LPS survey for Physician Residents: The LPS-Physician Resident survey option is intended for physician residents. It contains a pool of 128 elements comprising 11 domains, with 6 to 15 elements per domain. Each element and each domain are evaluated on a 5-point Likert scale.


LPS survey for Associated Health trainees: The LPS-Associated Health survey option is intended for all non-physician health professionals. It contains a pool of 96 elements comprising 7 domains, with 9 to 15 elements per domain. Each element and each domain are evaluated on a 5-point Likert scale.

d. Entering trainees

The Office of Academic Affiliations (OAA), with the support of VHA leadership, will instruct local facility management to mount a sustained effort, starting in April, to identify and contact trainees present at their facilities to complete the LPS. Trainees will be identified through the Associate Chief of Staff for Education or equivalent at each of the training VA medical facilities, who will contact the local program coordinators or site directors. Trainees will then be contacted by their program coordinators or site directors and asked to respond to the web-based survey. OAA will supplement and support these activities with online, real-time reporting of facility responses and other marketing materials.



1.3. Regarding the incorporation of a non-response bias analysis into the next version of the survey, see item 3 below.



2. Describe the procedures for the collection of information, including:


a. Statistical methodology for stratification and sample selection

The VA will take a census of all trainees present in VA teaching facilities participating in the survey during the academic year.


b. Estimation procedure

We propose to survey an all-inclusive population; hence there is no estimation procedure for sampling. The VA will compute simple means and frequencies for each element, domain, and total hospital score, by program and facility, by VISN, and for all VA. This is needed to assess performance and the learning environment at the administrative program level.
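A minimal sketch of this tabulation, assuming a hypothetical response table with columns visn, facility, and program plus numeric score columns (all names are assumptions for illustration):

```python
import pandas as pd

def summarize(df: pd.DataFrame, scores: list[str]) -> dict[str, pd.DataFrame]:
    # Simple means of the score columns at each administrative level;
    # no sampling weights are needed for a census.
    return {
        "program":  df.groupby(["facility", "program"])[scores].mean(),
        "facility": df.groupby("facility")[scores].mean(),
        "visn":     df.groupby("visn")[scores].mean(),
        "all_va":   df[scores].mean().to_frame("mean"),
    }

def frequencies(df: pd.DataFrame, scores: list[str]) -> pd.DataFrame:
    # Frequency of each rating value for every element/score column.
    return df[scores].apply(pd.Series.value_counts)
```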


c. Degree of accuracy needed

To better represent the population, reminder e-mails are sent to the Associate Chief of Staff for Education or equivalent to increase the response rate. These e-mails will be accompanied by facility-specific response numbers.


d. Unusual problems requiring specialized sampling procedures

There are no unusual problems associated with the selection of the time period for collection. The census obtained through our administration procedures yields statistically valid and usable results.


e. Any use of less frequent than monthly data collection to reduce burden


Not applicable; this survey is collected annually. Having the survey available for most of the academic year reduces the overall response burden by (a) spreading out data collection, (b) making the survey available when trainees are available, and (c) allowing trainees to respond as they out-process from their clinical training rotations.


3. Describe methods to maximize response rate and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.

3.1 Response Rate

A response rate will be computed by determining the percent of trainees who responded to the survey based upon the number of trainees who were contacted to respond to the survey. The number of trainees who are contacted to take the survey is available because both local and national contact lists are maintained. To maximize the number of respondents, the Office of Academic Affiliations will send reminder e-mails to the Associate Chief of Staff for Education or equivalent. These e-mails will be accompanied by facility-specific response numbers.
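As a minimal sketch, assuming contact and completion lists keyed by a hypothetical trainee identifier:

```python
def response_rate(contacted: set[str], completed: set[str]) -> float:
    """Percent of contacted trainees who completed the survey."""
    if not contacted:
        return float("nan")  # no denominator; the rate is undefined
    return 100.0 * len(completed & contacted) / len(contacted)
```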


3.2 Response rate bias

The response rate bias will be computed by comparing the characteristics of trainees who completed the survey with those of registrants who did not. We will know the program characteristics of non-completers from information supplied when trainees first log in to take the survey but fail to complete it. Further, respondent characteristics will be derived and normalized against similar characteristics in the trainee population as a whole and against trainees completing the survey over the last five years.

We will compute the (unadjusted) overall hospital score across all respondents. We will then compute an adjusted overall score that reflects the distribution of all trainees, including both respondents and non-respondents. Scores are adjusted based on an estimated regression in which trainee characteristics serve as independent variables; trainee characteristics include training program, type, and level, among others. These will be entered as mean-centered values, where means are computed over all trainees. The response rate bias is computed by taking the difference between the adjusted and unadjusted scores as a percent of the adjusted score. Statistical significance and confidence intervals will be determined by bootstrapping 10,000 samples. We will also compute a robust estimate of the response rate bias by taking the mean response rate bias across all bootstrapped samples. Adjusted scores will not be corrected for facility nesting, since we wish to know how the distribution of non-responses across facilities impacts national performance estimates. The robust estimate of the response rate bias measures the potential damage to national performance estimates when registrants fail to respond to the survey.
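A minimal sketch of this computation, assuming hypothetical arrays y (respondents' overall scores), X (one row of characteristics per respondent), and pop_means (characteristic means over all trainees, respondents and non-respondents alike); the use of ordinary least squares here is illustrative, not the production specification:

```python
import numpy as np
import statsmodels.api as sm

def adjusted_score(y, X, pop_means) -> float:
    # Regress scores on characteristics centered at the full-population
    # means; the intercept is then the score predicted for a trainee with
    # average characteristics, i.e., the adjusted overall score.
    fit = sm.OLS(y, sm.add_constant(X - pop_means)).fit()
    return fit.params[0]

def bias_pct(y, X, pop_means) -> float:
    # Response rate bias: (adjusted - unadjusted) as a percent of adjusted.
    adj = adjusted_score(y, X, pop_means)
    return 100.0 * (adj - y.mean()) / adj

def bootstrap_bias(y, X, pop_means, reps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    n, draws = len(y), np.empty(reps)
    for r in range(reps):
        idx = rng.integers(0, n, size=n)  # resample respondents w/ replacement
        draws[r] = bias_pct(y[idx], X[idx], pop_means)
    # Robust bias estimate (mean over resamples) and 95% percentile interval.
    return draws.mean(), np.percentile(draws, [2.5, 97.5])
```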

We also plan to present results that control for response biases. This will enable us to compare results across facilities, across disciplines, and within a facility over time. Such comparisons are necessary to evaluate the progress VA is making toward its clinical professional education mission. This will be accomplished with a differencing-variable technique, derived from difference-in-differences analyses, that is designed to control for both observed and unobserved differences between the responding sample and the population. Estimates for each domain will be computed by estimating the following model:

Eq. 1: y_ijk = β0 + β1·(X_i − X̄) + β2·t_k + Σj β3(j)·F_i(j) + Σj β4(j)·t_k·F_i(j) + u_i + v_j + z_k + u0_ijk

Eq. 2: y_ij = Σ(k=1…K) w_k·y_ijk, with w_1 + … + w_K = 1

where y_ijk is the kth element score (for the domain) of respondent i in VA facility j, and y_ij is the summary domain score (summarizing all k = 1, …, K elements within the domain). X_i is the characteristics of respondent i, and X̄ is the mean characteristics of the reference respondent sample. t_k is the continuous differencing variable, ranging from zero to one for the kth element of the domain, where t = 0 indicates the element should be “common” to all VA (e.g., accessing patient records, where little variation in respondents’ VA experiences is expected with a nationwide uniform electronic medical record system), and t = 1 indicates the response should be very specific to the respondent’s particular VA experiences (e.g., interaction with a preceptor or clinical supervisor). F_i(j) is equal to one if respondent i is located in facility j, and zero otherwise. The w’s are coefficients, normalized so that w_1 + … + w_K equals one, describing how much each element loads onto the domain summary score; the w’s are used to weight records by facility and respondent for each domain when estimating the parameters of the model in Eq. 1. u, v, z, and u0 are random variates assumed to be IID normally distributed. We can thus compute an adjusted score for the given domain for facility j from β4(j), the coefficient on the interaction term between hospital j and the differencing variable, standardized against the reference group of facilities. The reference group of facilities will be all VA facilities in the base year 2009, and the reference sample (used to compute X̄) comprises respondents at the reference group of VA facilities. This will allow comparisons across facilities. We also plan to use respondents to the 2005-2006 surveys as a reference; we will then compute β4 for each facility by year (2007, 2008, 2009), permitting us to compare progress within a facility over the past three years. A similar strategy is proposed to compute the overall hospital score.
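As an illustration of how Eq. 1 might be estimated, the sketch below fits a fixed-effects approximation by weighted least squares. The random components u, v, and z of the full model are omitted here (a mixed-effects fit, e.g., statsmodels MixedLM, would add them), and the column names y, t, facility, w, and the centered characteristic columns are assumptions for this sketch:

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_domain_model(df: pd.DataFrame, x_cols: list[str]):
    # One row per respondent-element: y = element score, t = differencing
    # variable in [0, 1], w = element loading used as a regression weight,
    # x_cols = mean-centered respondent characteristics (X_i - X-bar).
    # The C(facility):t interaction coefficients play the role of the
    # beta4(j) adjusted facility scores in Eq. 1.
    rhs = " + ".join(x_cols + ["t", "C(facility)", "C(facility):t"])
    return smf.wls(f"y ~ {rhs}", data=df, weights=df["w"]).fit()
```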


Response bias, computed as a percent difference, may be approximated by

Eq. 3: response rate bias (%) ≈ 100 × (adjusted score − unadjusted score) / adjusted score

Finally, we intend to bootstrap samples (10,000, with replacement) to compute confidence intervals for the estimated performance outcomes.
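A minimal sketch of this bootstrap, reusing the hypothetical fit_domain_model() above and resampling whole respondents (all of a respondent's element rows move together) with replacement; the respondent column and the coefficient naming are assumptions:

```python
import numpy as np
import pandas as pd

def bootstrap_ci(df: pd.DataFrame, x_cols: list[str], param: str,
                 reps: int = 10_000, seed: int = 0) -> np.ndarray:
    # param names a fitted coefficient, e.g. a facility-by-t interaction.
    rng = np.random.default_rng(seed)
    ids = df["respondent"].unique()
    draws = np.empty(reps)
    for r in range(reps):
        take = rng.choice(ids, size=len(ids), replace=True)
        # Illustrative (slow) resample: keep each drawn respondent's rows,
        # duplicating rows for respondents drawn more than once.
        sample = pd.concat([df[df["respondent"] == i] for i in take])
        draws[r] = fit_domain_model(sample, x_cols).params[param]
    return np.percentile(draws, [2.5, 97.5])  # 95% percentile interval
```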


4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions of 10 or more individuals.


Not applicable. The survey instrument has been previously fielded and extensively tested, with findings published in scientific journals. No new questions have been added in the present year and the survey instrument is the same as was previously approved by OMB.


5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Name of agency unit collecting and analyzing the information:


Office of Academic Affiliations (14)

Veterans Health Administration

Department of Veterans Affairs Central Office

Washington, DC 20420

(202) 461-9490


Individual overseeing the analysis of information:


T. Michael Kashner, PhD JD

Director of Program Evaluation,

Office of Academic Affiliations, VHA, Department of Veterans Affairs

(214) 648-4608

Professor, Department of Psychiatry

University of Texas Southwestern Medical Center at Dallas

Dallas, TX



Individual consulted on statistical aspects of the design:


Richard M. Golden, PhD, M.S.E.E.,

Professor Cognitive Science and Engineering,

School of Behavioral and Brain Sciences, GR4.1

800 West Campbell Road,

University of Texas at Dallas,

Richardson, TX 75083-3021

(972) 883-2423


Individual overseeing the collection of information:


Christopher T. Clarke, PhD

Director, OAA Support Services

Office of Academic Affiliations, VHA, Department of Veterans Affairs

(314) 894-5760


