EVHAMHS Justification Part B


OMB: 2900-0713


Evaluation of Veterans Health Administration Mental Health Services

OMB Supporting Statement


B. Collection of Information Employing Statistical Methods


1. Universe and Respondent Selection Methods to be Used


There are two target populations for this survey: VHA users and non-VHA users.


VHA users: The first target population is veterans who used VHA health services during FY2007 and who have one of the following mental health conditions: schizophrenia, bipolar disorder, major depressive disorder (MDD), or post-traumatic stress disorder (PTSD). These four populations will be defined hierarchically as listed, so that each veteran is uniquely classified into one of these diagnostic cohorts. In addition to these four conditions, the population of persons with substance use disorder (SUD) will also be examined. The population of VHA users with SUD is about 213,000, and at least 50% of this population is expected to also have one of the four conditions above. The population totals, overall and by diagnostic cohort, are provided in Table 2.


The universe of VHA users will be defined by the VHA’s Medical SAS Datasets, which are maintained at the Austin Automation Center and available through the VA Information Resource Center (VIReC). The Medical SAS data sets contain the universe of administrative person-level files of health care utilization, reflecting both inpatient (Patient Treatment File [PTF]) and outpatient (Outpatient Care File [OPC]) encounters or episodes of care. We expect the Medical SAS data set for FY2007 to become available in December 2007, from which we will identify the members of this population to survey.


Non-VHA users: The second target population is veterans who did not receive VHA care during 2007 and who are service-connected (SC) for one of the following conditions: schizophrenia, bipolar disorder, major depressive disorder, or post-traumatic stress disorder. The population of non-VHA users is restricted to SC veterans because there is no systematic way to identify a population of both SC and non-SC veterans who do not use VHA services. However, SC veterans are a high-priority population for the VA, so understanding why these veterans choose not to receive VHA care is important for understanding VHA quality of mental health care. Veterans cannot be service-connected for substance use disorder, which makes it unlikely that we will be able to identify non-VHA users with SUD only. We will therefore not create a substance use disorder diagnostic cohort for non-VHA users; however, we will have data on the subset of non-VHA users with co-occurring mental health and substance use disorders for analysis. The population totals, overall and by diagnostic cohort, for non-VHA users are provided in Table 2.


The universe of SC non-VHA users will be defined by the Compensation and Pension (C&P) files maintained by the Veterans Benefits Administration and by death records maintained in the Beneficiary Identification Records Locator Subsystem (BIRLS). We expect these data sources for 2007 to become available in December 2007, from which we will identify the members of the non-VHA population to survey.


Table 2. Population totals of veterans in the diagnostic categories, FY2004.



Diagnosis Group (Hierarchically Defined) | Number of Unique Veterans Using VHA Services | Number of Unique Veterans Not Using VHA Services
Schizophrenic disorders | 92,000 | 117,000
Bipolar disorder | 35,000 | 45,000
Major depressive disorder | 130,000 | 165,000
Post-traumatic stress disorder | 255,000 | 325,000
Substance use disorder (who are not included above) | 106,500 | Not applicable
TOTAL | 618,500 | 652,000


Stratification variables: Key variables for the stratification of the VHA and non-VHA user samples will be diagnostic cohort and Veterans Integrated Service Network (VISN). Equal sample sizes will be targeted for these strata. The sizes of the diagnostic cohort strata are provided in Table 2. There are 21 VISNs in the VA network, with populations of users with mental health conditions ranging from about 10,000 to 45,000. VISN is a key stratification variable for the VHA user sample because the VA would like to make VISN-level estimates. For the non-VHA sample, VISN is a key stratification variable because it will help make the non-VHA sample more comparable to the VHA sample, in support of the main goal of comparing non-VHA users to VHA users. Since the SUD population will overlap with the other four diagnostic cohorts, the SUD sample in the VHA user group will be stratified by the presence versus absence of one of the other four mental health conditions listed in Table 2. A candidate stratification variable for the VHA user sample is service-connectedness. Exact allocation of the sample to the strata has yet to be finalized because analyses of the VHA administrative data are not yet complete.


Response rates: We expect to survey 5,818 VHA users, with equal numbers of persons in each of the five diagnostic cohorts and each VISN. We expect to survey 2,400 non-VHA users, with equal numbers of persons in each of four diagnostic cohorts (schizophrenia, bipolar disorder, PTSD, MDD) and each VISN. To achieve these responses, we will contact 9,696 VHA and 4,000 non-VHA users, corresponding to an anticipated 60% response rate.
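The contact targets follow directly from the completion goals and the assumed 60% response rate. The short Python sketch below illustrates the arithmetic; the function name and the rounding convention are ours, not part of the survey protocol.

```python
# Illustrative arithmetic only: contact targets implied by the completion
# goals above and an assumed 60% response rate.
import math

def contacts_needed(target_completes, response_rate=0.60):
    """Number of veterans to contact to yield the targeted number of completes."""
    return math.ceil(target_completes / response_rate)

print(contacts_needed(5818))  # VHA users: 9,697 (the text rounds to 9,696)
print(contacts_needed(2400))  # non-VHA users: 4,000
```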


2. Procedures for the Collection of Information


We will sample about 13,696 cases using stratified random sampling. Key variables for the stratification will be VHA versus non-VHA user status, diagnostic cohort, and VISN. For the SUD diagnostic cohort among VHA users, the presence or absence of one of the other four conditions listed in Table 2 will be an additional stratification variable. Separate stratified random samples will be drawn for the VHA and non-VHA populations.
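As an illustration of the selection procedure, the following Python sketch draws an equal-allocation simple random sample within each stratum. The data frame and its column names (user_group, cohort, visn) are hypothetical placeholders, not actual file layouts, and the per-stratum target is illustrative.

```python
# Illustrative sketch of equal-allocation stratified random sampling.
# The frame and its columns (user_group, cohort, visn) are hypothetical
# placeholders for the administrative sampling frame.
import pandas as pd

def stratified_sample(frame: pd.DataFrame, per_stratum: int, seed: int = 2008) -> pd.DataFrame:
    """Draw a simple random sample of up to `per_stratum` cases from every
    stratum defined by user group x diagnostic cohort x VISN."""
    groups = frame.groupby(["user_group", "cohort", "visn"], group_keys=False)
    return groups.apply(lambda g: g.sample(n=min(per_stratum, len(g)), random_state=seed))

# Hypothetical usage:
# frame = pd.read_csv("sampling_frame.csv")
# sample = stratified_sample(frame, per_stratum=55)
```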


For the VHA user analyses of each diagnostic cohort, the survey is expected to have 80% power to detect differences of 0.4 standard deviations for continuous performance measures when comparing VISN-level performance to the national average (alpha = 0.05). For proportions, the detectable difference would be 20 percentage points. Comparisons of the non-VHA and VHA user groups will also be conducted by diagnostic cohort. We will have 80% power to detect differences of 7.5 to 8.5 percentage points. The exact detectable difference will depend on the proportion of service-connected VHA users found in the data; we expect this proportion to be between 50% and 90%, as about 50% of VHA users with schizophrenia or bipolar disorder are service-connected (Blow et al., SMITREC FY2004 report, page 11).
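The Python sketch below (using the statsmodels package) illustrates the style of calculation behind these power statements. The per-cohort respondent counts (600 non-VHA and 1,160 VHA) and the 50% baseline proportion are assumptions for illustration only, not the final allocation, and the continuous-measure case is treated as a simple two-sample comparison.

```python
# Illustrative power calculations in the style described above.
import numpy as np
from statsmodels.stats.power import TTestIndPower, NormalIndPower

alpha, power = 0.05, 0.80

# Continuous measure: per-group n needed to detect a 0.4 SD difference
# (treated here as a two-sample comparison for simplicity).
n_per_group = TTestIndPower().solve_power(effect_size=0.4, alpha=alpha, power=power)
print(f"n per group for a 0.4 SD difference: {n_per_group:.0f}")  # ~99 per group

# Proportions: smallest detectable VHA vs. non-VHA difference per cohort,
# assuming (illustratively) 600 non-VHA and 1,160 VHA respondents.
h = NormalIndPower().solve_power(nobs1=600, ratio=1160 / 600, alpha=alpha, power=power)
p1 = 0.50                                          # assumed baseline proportion
p2 = np.sin(h / 2 + np.arcsin(np.sqrt(p1))) ** 2   # invert Cohen's h
print(f"detectable difference: {(p2 - p1) * 100:.1f} percentage points")  # about 7
```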


The need for unusually complex sampling procedures is not envisioned.


The proposed effort will be fielded in 2008. The RAND Survey Research Group (SRG) will mail an advance letter and/or an informative brochure, timed to arrive approximately one to two weeks before the telephone call, on a rolling basis over the course of a year as new sample is loaded in our telephone survey center (TSC). The envelope will be printed with a request for address forwarding and address correction. We will insert a prepaid envelope for the selected respondent to mail back to us with updated or additional telephone number(s) and the best time to call. A reminder mailing will be sent to those not responding to the phone call. Returned mail will be processed and entered into our Record Management System (RMS), and tracking procedures will ensue. The VA will provide RAND with monthly updates of addresses and phone numbers for cases not yet completed and cases not yet attempted. SRG will run all batches of sample through the National Change of Address database to update outdated addresses and will use the in-house LexisNexis database to locate those we could not contact.

SRG will program the questionnaire in CASES software. Survey coordinators will test the instrument for accurate skip patterns and range checks and will verify that data are being recorded correctly. The introduction, consent, confidentiality reminder, and questionnaire will take on average 30 minutes to complete. An RMS programmer will develop systems to record and track contact rates and response rates.


A telephone survey with a relatively small number of response options was chosen for this project. A telephone survey was chosen over in-person interviews for its cost effectiveness and because the large sample size and dispersed geographic area severely limit the feasibility of fielding in-person interviews. A self-administered mail survey was ruled out because we would expect response rates to suffer under this relatively passive mode. For a population of veterans with serious mental illness (SMI), including persons of low socioeconomic status, we would expect more inaccuracies and missing items on a mail survey than on a phone survey.


We considered a mixed-mode survey to increase the response rate (e.g., Siemiatycki, 1979), in which we would attempt to reach veterans by telephone and then follow up with either a mail survey or an in-person interview for those who could not be reached by telephone. This strategy could increase overall response rates, but it carries the serious limitation of introducing mode effects into responses (i.e., responses that systematically differ across survey modes). Mode effects have not been studied for the population of veterans with SMI for the items included in our survey. However, the presence of mode effects among depressed primary care patients (Chan et al., 2004), in Gulf War veterans' reports of health status (Brewer et al., 2004), and among veterans treated for stroke (Duncan et al., 2005) suggests that mode effects could occur if this study employed a mixed-mode design. Implementing a mixed-mode survey would require that we conduct a randomized sub-study of mode effects in order to assist with the interpretation of survey data collected across the modes. However, the sample size needed to precisely estimate mode effects would be prohibitive, and such a sub-study would go beyond the scope of the project. Further, we would actually need to conduct five sub-studies of mode effects, one for each diagnostic cohort, since mode effects have been shown to vary by impairment (Chan et al., 2004).


3. Methods to Maximize Response Rates and to Deal with Issues of Non-Response


Response rate justification: The 60% response rate estimate is based on a literature review of published telephone surveys of veterans drawn from VA administrative data (e.g., 75% in Hynes et al., 1998; 46% in Baldwin et al., 2002; 67% in Borowsky et al., 1999; and 66% in Rintala et al., 2005).


Methods to be used to maximize the response rate: Methods to be employed by the RAND SRG in maximizing contact rates and response rates include:


RAND will send the sample to the National Change of Address database in batches shortly before mailing the letter, to maximize the chances of having a current address. As needed, we will test sending a portion of the sample to Equifax to determine hit rates.


We will mail an advance letter written at close to a 6th-grade reading level, on VA letterhead and signed by a senior VA official. The letter will be personally addressed and will include a summary of what the study is about and why it is being conducted, a RAND contact number for recipients to call, and a pledge of confidentiality. We learned during the pretest that an informational brochure is also useful and well received, so we will include the brochure in the initial mailing. We will also include a self-addressed envelope and a short form for the consumer to send us their most recent telephone number (those materials are enclosed in this packet of information).

RAND plans to obtain cooperation from Veterans Service Organizations (VSOs) to advertise in VSO newsletters, calling attention to our need for veterans' information regarding their health care.


Participants will be promised $10 for participation.


Interviewers will be trained in gaining respondent participation and addressing their concerns and questions.


RAND will mail reminder postcards to those not responding.


RAND will maximize contact rates by receiving monthly updates from the VA central database for addresses and phone numbers for all previously attempted cases and future cases.


RAND will attempt to have a contact person at several VISNs who will assist us by accessing the local VA database for new addresses or other contact information for those hard to find.


Our telephone electronic scheduler will be programmed to attempt calls to cases at various times of the day and days of the week.


Differential response rates across strata: We do not have prior evidence to anticipate that any particular group will respond at a rate different from what we expect. We will nonetheless protect against this possibility by fielding the survey in three replicates. While the second replicate (consisting of 30% of the targeted sample) is being collected, we will analyze the first replicate (also 30% of the targeted sample) to determine whether the distribution of veterans by VISN, diagnostic cohort, and VHA versus non-VHA user status matches the population distribution. The design will be modified for the third replicate (40% of the targeted sample) if there are important differences between the population and the first replicate in the distribution across strata.
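As an illustration, a check of this kind could be carried out with a chi-square goodness-of-fit test comparing the first replicate's distribution across strata to the population distribution. In the Python sketch below, the cohort shares approximate the VHA column of Table 2, while the replicate counts are made up for illustration.

```python
# Illustrative goodness-of-fit check of replicate 1 against the population
# distribution. Replicate counts are hypothetical.
import numpy as np
from scipy.stats import chisquare

population_share = np.array([0.15, 0.06, 0.21, 0.41, 0.17])  # VHA cohorts, approx. Table 2
replicate_counts = np.array([260, 95, 370, 700, 300])        # hypothetical replicate 1 counts

expected = population_share * replicate_counts.sum()
stat, pvalue = chisquare(f_obs=replicate_counts, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {pvalue:.3f}")  # a small p would suggest a mismatch
```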


Evaluation of respondent non-response bias: We will use the VHA and VBA administrative databases to identify differences between respondents and non-respondents on observable characteristics. Initially we will assess whether there are differences between respondents and non-respondents on key characteristics such as demographics and service-connectedness. We will then fit a multivariable logistic regression model using variables from the administrative data as covariates and respondent/non-respondent designation as the outcome. We will use the results of these analyses to finalize a strategy for adjusting our analyses for non-response to ensure that our results generalize to the target population of veterans. Candidate adjustment strategies include regression adjustment for characteristics that differ between respondents and non-respondents and the development of non-response weights. To create the weights, we would form non-response classes that consist of responding and non-responding veterans who are similar in terms of predicted non-response. Respondents in a particular non-response class would all receive the same non-response weight, which will be calculated based on the number of non-respondents who belong to that class, and thus need to be represented.
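The Python sketch below illustrates one way the candidate weighting strategy could be implemented: response propensity is modeled with logistic regression on administrative covariates, cases are grouped into propensity classes, and each respondent is weighted by the inverse of the response rate within its class. The column names are hypothetical and the final weighting method has not been fixed.

```python
# Illustrative sketch of propensity-class non-response weighting.
# Column names (responded, covariates) are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def nonresponse_weights(frame: pd.DataFrame, covariates: list, n_classes: int = 5) -> pd.Series:
    """Return a non-response weight for each responding case in `frame`.

    `frame` must contain a 0/1 `responded` indicator plus the listed covariates.
    """
    model = LogisticRegression(max_iter=1000)
    model.fit(frame[covariates], frame["responded"])
    propensity = model.predict_proba(frame[covariates])[:, 1]

    # Form classes of cases with similar predicted response propensity.
    frame = frame.assign(propensity_class=pd.qcut(propensity, q=n_classes, labels=False))

    # Each respondent in a class stands in for 1 / (class response rate) cases.
    class_rates = frame.groupby("propensity_class")["responded"].mean()
    weights = 1.0 / frame["propensity_class"].map(class_rates)
    return weights[frame["responded"] == 1]
```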


Treatment of item non-response: We anticipate item non-response to be very low because of the extensive use of computer-assisted interviewing. If item non-response is higher than expected, we will establish appropriate imputation algorithms for missing items. For items that fall below a certain level of response, generally about 70% depending upon the item, we will examine whether respondents who completed the item differ from item non-completers in ways that might suggest bias in the set of persons who chose to complete it. For items with lower non-response, for which imputation is appropriate, we will impute using methods such as hot-deck imputation and multiple imputation.
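As an illustration of hot-deck imputation, the Python sketch below fills missing values for an item with randomly drawn donor values from respondents in the same imputation class (here, the diagnostic cohort). The class variable and column names are hypothetical, and the actual imputation classes would be chosen at analysis time.

```python
# Illustrative within-class hot-deck imputation. Column names are hypothetical.
import numpy as np
import pandas as pd

def hot_deck_impute(frame: pd.DataFrame, item: str, class_col: str = "cohort",
                    seed: int = 0) -> pd.Series:
    """Return the `item` column with missing values filled by within-class donors."""
    rng = np.random.default_rng(seed)
    values = frame[item].copy()
    for _, idx in frame.groupby(class_col).groups.items():
        cell = values.loc[idx]
        donors = cell.dropna().to_numpy()
        missing = cell.index[cell.isna()]
        if len(donors) and len(missing):
            values.loc[missing] = rng.choice(donors, size=len(missing))
        # Classes with no observed donors are left missing.
    return values
```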

4. Tests of Procedures or Methods to be Undertaken


The majority of items in the proposed survey have been previously tested through use in other national surveys, and the instruments have established and acceptable reliability and validity for veterans and/or seriously mentally ill persons. No new items have been developed that would require extensive additional testing (i.e., cognitive interviewing, focus groups, etc.) in advance of pretesting. Thus, to inform the instrument design team, we conducted 8 pretests of the instrument by telephone with SMI veterans from the West LA VAMC, in 2 groups of 4, with revision of the instrument between the first and second group. Through pretesting, we have gained confidence in the ability of the SMI population to understand the questions and to stay on the phone and remain focused for the 25-30 minute interview. Upon receipt of OMB approval, we plan to conduct additional pretests.


5. Consultants on Statistical Aspects of the Design and Persons Who Will Collect and Analyze the Information


Dr. Susan Paddock at the RAND Corporation (310) 393-0411 ext. 7628 consulted on the statistical aspects of this survey design. She will be the lead statistician for the survey analysis.


References


Baldwin et al. (2002) Archives of Internal Medicine, 162:1697-1704.


Blow FC, McCarthy JF, Valenstein M, Austin K, Gillon L. (2004) Care for veterans with psychosis in the Veterans Health Administration, FY04. 6th Annual National Psychosis Registry Report, Serious Mental Illness Treatment Research and Evaluation Center (SMITREC), Health Services Research and Development Center of Excellence.


Borowsky SJ, Cowper DC. (1999) Dual use of VA and non-VA primary care. Journal of General Internal Medicine, 14:274-517.


Brewer NT, Hallman WK, Fiedler N, Kipen HM. (2004) Why do people report better health by phone than by mail? Medical Care, 42: 875–883.

Chan K, Orlando M, Ghosh-Dastidar B, Sherbourne C, Duan N. (2004) The interview mode effect on the Center for Epidemiological Studies Depression (CES-D) Scale: An item response theory analysis. Medical Care, 42(3):281-289.

Cradock J, Young AS, Sullivan G. (2001) The accuracy of medical record documentation in schizophrenia. The Journal of Behavioral Health Services and Research, 28(4):456-465.

Donabedian A. (1980) Definition of quality and approaches to its assessment. Explorations in Quality Assessment and Monitoring, Volume 1.

Duncan P, Reker D, Kwon S, Lai S, Studenski S, Perera S, Alfrey C, Marquez J. (2005) Measuring stroke impact with the stroke impact scale: Telephone versus mail administration in veterans with stroke. Medical Care, 43:507-515.

Hynes et al. (1998) Journal of Women's Health, 7(2):239-247.

Rintala DH, Holmes SA, Fiess RN, Courtdale D, Loubser PG. (2005) Prevalence and characteristics of chronic pain in veterans with spinal cord injury. Journal of Rehabilitation Research & Development, 42(5):573-584.

Siemiatycki J. (1979) A comparison of mail, telephone, and home interview strategies for household health surveys. American Journal of Public Health, 69:238-245.

Young AS, Sullivan G, Burnam A, Brook RH. (1998) Measuring the quality of outpatient treatment for schizophrenia. Archives of General Psychiatry, 55:611-617.




