(CMS-10550) Hospital National Provider Survey

OMB: 0938-1290

Measure & Instrument Development and Support (MIDS) Contractor:

Impact Assessment of CMS

Quality and Efficiency Measures



Supporting Statement B:

OMB/PRA Submission Materials for

Hospital National Provider Survey





Contract Number: HHSM-500-2013-13007I

Task Order: HHSM-500-T0002

Deliverable Number: 35

Submitted: October 1, 2014

Revised: November 10, 2015





Noni Bodkin, Contracting Officer’s Representative (COR)

HHS/CMS/OA/CCSQ/QMVIG

7500 Security Boulevard, Mailstop S3-02-01

Baltimore, MD 21244-1850

[email protected]




SUPPORTING STATEMENT B

DATA COLLECTION

FOR THE HOSPITAL NATIONAL PROVIDER SURVEY


  1. Respondent Universe and Sampling Methods

The sampling frame for both the semi-structured interviews and the standardized survey will be constructed from the universe of Inpatient Prospective Payment System (IPPS) hospitals that have performance scores available from the Hospital Value-Based Purchasing (VBP) Program in 2015. Critical access hospitals will be excluded from the universe of hospitals because they do not participate in the VBP program; hospitals in Maryland will also be excluded because CMS granted the state of Maryland a waiver to use an independent quality assessment program.


The sampling approach will support the following analytic objectives:

  1. To make national estimates of the prevalence of the actions that hospitals report taking in response to the CMS measures (e.g., hiring quality improvement staff or implementing clinical decision support tools within their health information technology systems);

  2. To make subgroup estimates (i.e., by quality performance and by hospital size) of the prevalence of the actions that hospitals report taking; and

  3. To examine the correlates of quality performance (i.e., the association between the actions that hospitals report taking and quality performance).


Sampling Frame and Distribution of Hospitals by Size and Performance. The sampling frame will consist of approximately 3,000 IPPS hospitals. We will randomly draw a sample of 2,045 hospitals from this universe with the goal of achieving 900 responses (assuming an estimated 44% response rate). The sampling approach does not add sample beyond the original 2,045 hospitals to achieve 900 completes should response rates fall below 44%; budget constraints preclude such an approach. A review of prior surveys of hospital leaders (see Section B3 of this document) indicates that an expected 44% response rate is a reasonable assumption. We selected a conservative target response rate, and the data collection strategy relies on multiple modes of outreach to respondents to achieve it.


Our sampling approach relies on stratification of the hospital population using hospital characteristics that are of the greatest importance to the proposed analyses. Stratification serves three purposes:

  1. To facilitate analyses that examine hospitals within the resulting strata and/or compare providers across the strata.

  2. To ensure that there is a sufficient number of hospitals within the various strata so that the aforementioned analyses can be performed reliably.

  3. To improve power in analyses of the correlates of quality performance by increasing the variance of quality through oversampling of high and low performers.


We will draw a random sample of hospitals, stratifying by hospital quality performance on the Hospital VBP composite quality score (categorized as high performance: 1st quintile of the performance distribution; medium performance: 2nd–4th quintiles; poor performance: 5th quintile) and bed size (categorized as small: 1–100 beds; medium: 101–300 beds; and large: >300 beds). Stratifying by quality performance is needed to help the Centers for Medicare & Medicaid Services (CMS) understand what differentiates hospitals that are able to achieve high performance from those that achieve low performance. Stratifying by hospital size will help CMS understand how responses differ across facilities with potentially different levels of resources. We will categorize hospitals into the nine sample strata that result from crossing these two characteristics.
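
For illustration, the stratification logic can be sketched as follows. This is a minimal sketch assuming a hypothetical hospital-level data file with columns for the VBP composite score and bed count; the file and column names are illustrative, not part of the actual sampling frame.

```python
import pandas as pd

# Hypothetical frame of IPPS hospitals; file and column names are illustrative only.
# 'vbp_score' = Hospital VBP composite quality score, 'beds' = staffed bed count.
hospitals = pd.read_csv("ipps_hospitals_2015.csv")

# Performance strata: top quintile = high, bottom quintile = poor, middle 60% = medium.
q20, q80 = hospitals["vbp_score"].quantile([0.20, 0.80])
hospitals["performance"] = pd.cut(
    hospitals["vbp_score"],
    bins=[-float("inf"), q20, q80, float("inf")],
    labels=["poor", "medium", "high"],
)

# Bed-size strata consistent with Tables 1-3.
hospitals["size"] = pd.cut(
    hospitals["beds"],
    bins=[0, 100, 300, float("inf")],
    labels=["small", "medium", "large"],
)

# The nine sampling strata are the cross of performance and size.
print(hospitals.groupby(["performance", "size"]).size())
```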


Table 1 shows the number of hospitals within the universe of hospitals that fall into each of the nine strata.


Table 1: Universe of Hospitals by Strata for Standardized Survey

                                          Small (1–100 beds)   Medium (101–300 beds)   Large (>300 beds)
Top 20th percentile performance*          357 hospitals        199 hospitals           56 hospitals
Middle of performance distribution
  (20th–80th percentile)                  405 hospitals        833 hospitals           595 hospitals
Bottom 20th percentile performance        86 hospitals         272 hospitals           253 hospitals

*Based on CMS HVBP composite quality score


Sampling Design for Standardized Survey. We propose proportionate sampling by size within three quality strata (corresponding to the 1st, 5th, and 2nd–4th quintiles of performance). Based on the allocation of available sample from the universe (see below), this will result in census sampling of the top and bottom quintiles of performance, with the remainder of the sample drawn from the middle (2nd–4th quintiles) of the performance distribution. This approach will provide the power needed to generate overall estimates and estimates within size and quality strata. It also provides improved power (compared to sampling proportional to stratum size) in regression models in which quality is a predictor, achieved by increasing the variance of quality through oversampling of the extremes.



We aim to achieve 900 completed survey responses. We will select all available hospitals in the high and low performance strata (n=612 and n=611, respectively); otherwise, we risk having an inadequate number of respondents from these strata. As a result, we anticipate that our sample will contain a total of 539 hospital respondents from these two strata (with 270 expected completes in the high performance stratum and 269 expected completes in the low performance stratum), given an assumed 44% response rate. Our goal is to obtain 361 respondents from medium performance hospitals; hospitals in the medium performance group will be sampled proportionally across the bed size strata. Table 2 shows the distribution of the sampled (and responding) hospitals across the nine strata that results from this sampling strategy.



Table 2: Sample Allocation (n = 2,045 total) by Strata for Standardized Survey

                                          Small (1–100 beds)       Medium (101–300 beds)    Large (>300 beds)
Top 20th percentile performance*          357 sampled              199 sampled              56 sampled
                                          (157 completes)          (88 completes)           (25 completes)
Middle of performance distribution        182 sampled              373 sampled              266 sampled
  (20th–80th percentile)                  (80 completes)           (164 completes)          (117 completes)
Bottom 20th percentile performance        86 sampled               272 sampled              253 sampled
                                          (38 completes)           (120 completes)          (111 completes)

*Based on CMS HVBP composite quality score
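
As a rough arithmetic check of the allocation in Table 2, the sketch below reproduces the sampled counts and expected completes from the Table 1 universe counts under the stated assumptions (census of the top and bottom performance strata, proportional allocation of the remainder across bed-size strata in the middle group, and a 44% response rate); small discrepancies reflect rounding only.

```python
# Universe counts by bed size (small, medium, large) from Table 1.
universe = {
    "top 20%":    [357, 199, 56],
    "middle 60%": [405, 833, 595],
    "bottom 20%": [86, 272, 253],
}
total_sample, response_rate = 2045, 0.44

# Census of the top and bottom performance strata.
top_n = sum(universe["top 20%"])        # 612 hospitals sampled
bottom_n = sum(universe["bottom 20%"])  # 611 hospitals sampled

# Remaining sample allocated proportionally across bed-size strata in the middle group.
remaining = total_sample - top_n - bottom_n          # 822
middle_total = sum(universe["middle 60%"])           # 1,833
middle_sampled = [round(remaining * n / middle_total) for n in universe["middle 60%"]]
# -> roughly [182, 374, 267]; Table 2 reports 182, 373, 266 (rounding differences only)

# Expected completes per cell at a 44% response rate.
top_completes = [round(response_rate * n) for n in universe["top 20%"]]        # [157, 88, 25]
bottom_completes = [round(response_rate * n) for n in universe["bottom 20%"]]  # [38, 120, 111]
middle_completes = [round(response_rate * n) for n in [182, 373, 266]]         # [80, 164, 117]
print(top_completes, middle_completes, bottom_completes)
```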



As noted previously, the sample will be weighted to account for differential sampling probabilities. Using the design weights (and the assumed 44% response rate), we estimate that the effective sample size for national estimates will be 774 (with a design effect of 1.16). We conservatively calculate the precision of national estimates, and of estimates by hospital quality and size strata, for a survey item with a prevalence of 50%, so the standard errors reported here are upper bounds. A national estimate would have a standard error of 1.8 percentage points or less. Estimates for high-performing or low-performing hospitals will have standard errors of 3.0 percentage points, and for large hospitals (the smallest size stratum) the standard error would be 3.4 percentage points. Lastly, we will have reasonable precision for several (but not all) two-way strata. Specifically, we may not have adequate sample for analyses involving the stratum of high-performing large hospitals or the stratum of low-performing small hospitals, whereas analyses involving the other strata defined by both bed size and performance are more likely to have adequate precision. For those other strata, an estimate of an item that is 50% prevalent will have a standard error of no more than 5.6 percentage points; in contrast, an estimate across high-performing large hospitals would have a standard error of 10.1 percentage points. Note that these calculations do not incorporate adjustments that would be required if response rates differ across strata.
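
For reference, these standard errors follow from the usual formula for a proportion, SE = sqrt(p(1 − p)/n_eff), where n_eff is the effective sample size after the design effect. A minimal sketch (ignoring stratum-specific design effects, so the stratum figure is approximate):

```python
from math import sqrt

def se_pct_points(p, n_eff):
    """Standard error (in percentage points) for a proportion p with effective sample size n_eff."""
    return 100 * sqrt(p * (1 - p) / n_eff)

# National estimate: 900 completes / design effect of 1.16 -> effective n of roughly 774.
print(round(se_pct_points(0.50, 900 / 1.16), 1))   # ~1.8 percentage points

# High- or low-performing stratum: roughly 270 expected completes.
print(round(se_pct_points(0.50, 270), 1))          # ~3.0 percentage points
```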

We will compare subgroups using Cohen’s d, which is the ratio of the difference in means for the outcome variable between the two groups being compared to the standard deviation of the outcome variable. Values of Cohen’s d near 0.2 are considered small, 0.5 medium, and 0.8 large (Cohen, 1988). We will have 80 percent power to detect small differences (Cohen’s d = 0.242) between low- and high-performance hospitals using an α = 0.05 level two-sided test; similarly, we can detect small differences (Cohen’s d = 0.263) between small and large hospitals. We will not be as well powered for comparisons of the more refined strata that are defined on the basis of both bed size and performance. For example, we will have 80% power to detect medium differences (Cohen’s d = 0.51) when comparing low-performing small hospitals to high-performing small hospitals.
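
These minimum detectable effect sizes are consistent with the standard two-sample normal approximation, MDES = (z_{1−α/2} + z_{power}) · sqrt(1/n₁ + 1/n₂). The sketch below illustrates this approximation using the expected completes in the high- and low-performance strata; it is not the project's exact power calculation software, so the result differs slightly from the cited 0.242.

```python
from math import sqrt
from statistics import NormalDist

def mdes(n1, n2, alpha=0.05, power=0.80):
    """Minimum detectable standardized effect size for a two-sided two-sample test."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sqrt(1 / n1 + 1 / n2)

# High- vs. low-performing hospitals: ~270 and ~269 expected completes.
print(round(mdes(270, 269), 3))   # ~0.241, close to the 0.242 cited above
```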



Sensitivity of results to the response rate assumption: We performed additional power calculations to assess how a lower response rate on the standardized survey might affect our ability to examine differences between subgroups. In the computations presented below, the minimum detectable effect size (MDES) when comparing high- vs. low-performing hospitals is 0.242 with a 44% response rate and 0.322 with a 25% response rate (assuming 80% power to detect a difference). As the calculations below indicate, even if the response rate falls below our target, the minimum detectable effect sizes remain in the small-to-moderate range for comparisons between subgroups.


We illustrate these power calculations using a hypothetical survey question: “Has your hospital implemented electronic tools to support frontline clinical staff, such as clinical decision support, condition-specific electronic alerts, or automated prompts?” If 90% of high-performing hospitals have electronic tools, then with a 0.242 minimum detectable effect size, we would be able to detect an 8 percentage point difference between the two groups (i.e., 90% for high-performing vs. 82% for low-performing hospitals). We would not have sufficient power to detect smaller differences (i.e., the 5 percentage point difference that would result if 85% of low-performing hospitals were using electronic tools). If the response rate were 25% (leading to an MDES of 0.322), we would be able to detect a difference as small as 11.5 percentage points (i.e., 90% vs. 78.5%).
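
The document does not specify how effect sizes were converted to percentage-point differences; an arcsine-based conversion (Cohen's h) reproduces the figures above closely, and the sketch below uses that assumed conversion for illustration.

```python
from math import asin, sqrt, sin

def cohens_h(p1, p2):
    """Cohen's h effect size for two proportions (arcsine transformation)."""
    return 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))

def detectable_p2(p1, h):
    """Smallest comparison proportion distinguishable from p1 given a minimum detectable effect size h."""
    return sin(asin(sqrt(p1)) - h / 2) ** 2

# With 90% of high performers using electronic tools and MDES = 0.242,
# the smallest detectable comparison proportion is roughly 82% (an ~8-point gap).
print(round(detectable_p2(0.90, 0.242), 3))   # ~0.817
# With MDES = 0.322 (25% response rate), roughly 78.5% (an ~11.5-point gap).
print(round(detectable_p2(0.90, 0.322), 3))   # ~0.785
```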


Power calculations for comparison of high-performing hospitals to low-performing hospitals:

Response Rate    Power1+    Power2#    MDES*
0.25             0.558      0.936      0.322
0.35             0.704      0.985      0.272
0.44             0.800      0.996      0.242

+ Power1 – The power for which an effect size of 0.242 can be detected

# Power2 – The power for which an effect size of 0.4 can be detected

*Assumes 80% power



Consideration of alternative sampling strategies: Other sampling strategies were considered as alternatives to the one selected above. First, we considered drawing a simple random sample that would yield 900 respondents from the entire population (equivalent to sampling from each stratum at a rate proportional to the size of the stratum). This strategy was deemed not to yield sufficient sample sizes for the various strata. Specifically, it would involve a (multiplicative) 22% increase in the standard error of estimates calculated across the subpopulation of high-performing hospitals relative to our preferred strategy. Second, although we have exhausted all hospitals within the high and low performance strata, we considered sampling from medium performers in a manner that would yield 300 hospitals in each of the one-way strata based on bed size. Sampling in this manner was not preferred because the resulting design effect would damage the precision of population-wide estimates (i.e., this strategy increases the standard error of such estimates by 7.64% over that yielded by our preferred strategy).
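
The 22% figure can be checked roughly as follows: a proportional allocation of 900 completes across the roughly 3,056-hospital universe in Table 1 would yield about 180 high-performing respondents rather than about 270 under the preferred design, and standard errors scale with the inverse square root of the sample size. A sketch under those assumptions:

```python
from math import sqrt

# Universe counts by performance stratum, summed from Table 1.
universe_by_stratum = {"high": 612, "medium": 1833, "low": 611}
total_universe = sum(universe_by_stratum.values())   # ~3,056 hospitals

# Proportional allocation of 900 completes vs. the preferred design (~270 high-performing completes).
proportional_high = 900 * universe_by_stratum["high"] / total_universe   # ~180 completes
preferred_high = 270

se_ratio = sqrt(preferred_high / proportional_high)
print(round(100 * (se_ratio - 1), 1))   # ~22% increase in the standard error
```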



Sampling Design for Semi-Structured Interview. The semi-structured interview will employ purposive sampling to interview 40 hospital quality leaders across the nine sample strata. The hospitals completing the semi-structured interview may overlap with those completing the standardized survey. Sampling 40 hospitals across nine strata will result in as few as four and as many as five interviews per stratum. This distribution is outlined in Table 3. Because these data are qualitative, the goal is not to generalize to the larger population, but rather to conduct a sufficient number of interviews per stratum to complement the quantitative data collected in the standardized survey and to provide qualitative details that can help partially explain what we observed in the quantitative results from the standardized survey. We will release sufficient sample for recruitment and scheduling to achieve the target number of completed interviews.


Table 3: Sample Allocation (n = 40 total) by Strata for Semi-Structured Interview

                                          Small (1–100 beds)   Medium (101–300 beds)   Large (>300 beds)
Top 20th percentile performance*          4 completes          5 completes             5 completes
Middle of performance distribution
  (20th–80th percentile)                  4 completes          4 completes             4 completes
Bottom 20th percentile performance        4 completes          5 completes             5 completes

*Based on CMS HVBP composite quality score



Questionnaire Content and Design Process. The content of the survey was driven by the five research questions of interest to CMS:

  1. Are there unintended consequences associated with implementation of CMS quality measures?

  2. Are there barriers to providers in implementing CMS quality measures?

  3. Is the collection and reporting of performance measure results associated with changes in provider behavior (i.e., what specific changes are providers making in response)?

  4. What factors are associated with changes in performance over time?

  5. What characteristics differentiate high- and low-performing providers?



Attachment I to the OMB clearance package, “Development of Two National Provider Surveys,” details the process used to develop and test the survey instruments. This process included an environmental scan of the literature related to the five research questions, formative interviews with hospitals, drafting and testing of the draft instruments with hospitals, and input from the Technical Expert Panel and the Federal Advisory Steering Committee, which is composed of representatives from various federal agencies (e.g., AHRQ, CDC, HRSA, ASPE). The formative interviews with hospitals assessed whether the survey domains were important to hospitals and identified other issues or topics not surfaced by the environmental scan. This formative work was also useful in defining the structure of the survey and in identifying which topics were more conducive to standardized questions and which to open-ended questions.



The goals of the formative interview work were to explore:

  • How the CMS performance measures are changing the way in which hospitals are delivering care

  • Factors that drive hospital investments in performance improvement

  • Issues hospitals face related to reporting the CMS measures

  • Potential undesired effects associated with the measures, and

  • Challenges hospitals face related to improvement on the CMS measures.



By exploring these topics, we were able to develop survey questions that addressed the research questions. Attachment III crosswalks the survey questions from the semi-structured interview protocol and the structured survey to the research questions listed above. Attachment III also displays how the goals of the formative work map to the research questions.



The survey development team considered including a “don’t know” option for all questions; however, the final surveys include a “don’t know” response only for those items where the survey development team thought it was necessary. The reason for this is that we are concerned about increasing item “missingness,” as respondents often default to the “don’t know” option rather than finding the answer within their organization. The results of our limited testing of the instrument revealed that respondents did not generally state they did not know the answers to various questions.

There is also potential concern about positive response bias when fielding surveys. However, in our formative and cognitive testing work, respondents demonstrated variation in how they responded and were very willing to report negative practices, such as upcoding of data. During these interviews, they expressed frustration with the measurement programs and with having to collect and report the data, and they described challenges in improving their performance as well as undesired behaviors. As such, we do not believe the surveys as designed will lead to positive response bias among respondents.


Plan for Tabulating the Results. The analysis plan will include: (1) development of sampling weights, (2) response rate/nonresponse analyses, (3) psychometric evaluation of survey items, (4) development of national and subgroup estimates (where possible, such as by level of performance and size of hospital), and (5) analyses of the association between hospital performance (high/low) and hospital responses and characteristics. All aspects of these analyses will be described in a final project report to CMS.

  1. Weighting. Three types of weights will be considered to allow our analysis of survey responses to appropriately reflect the target populations of interest: sampling weights, nonresponse weights, and post-stratification weights. Sampling weights reflect the probability that each hospital is selected for the survey; nonresponse weights reflect the probability that a sampled hospital responds to the survey; post-stratification weights make the respondent sample’s characteristics similar to those of the population. Sampling weights are readily calculated as the ratio of eligible to sampled hospitals in particular strata (given the proposed stratified sampling design); a simplified sketch of this weighting logic follows this list. Complex hospital-level nonresponse or post-stratification weights may be developed using logistic regression and raking/log-linear models, respectively, in consultation with CMS.

  2. Response rate/nonresponse analyses. We will examine response rates overall and within particular strata, including by performance on CMS quality and efficiency measures and by hospital size (number of beds). Logistic regression analyses will be used to examine the associations between known hospital characteristics and probability of nonresponse. Hospital characteristics to be included in this analysis are size (e.g., number of beds), for-profit/non-profit status, urban/rural, region, and socioeconomic characteristics of patient population.

  3. Psychometric evaluation of survey items. We will evaluate missing data, item distribution (including ceiling and floor effects), internal consistency, and reliability. We will compute these statistics overall and by strata.

  4. Subgroup estimates. We will produce national and subgroup estimates with appropriate adjustment to account for sampling design and nonresponse. The types of subgroups that are of interest include performance strata (low, medium, high), hospital size (e.g., number of beds), socioeconomic status of patients, and urban/rural. The final list will be determined in consultation with CMS.

  5. Relationship between survey response patterns and hospital characteristics.
    We will provide descriptive analyses of survey findings overall and stratified by hospital characteristics. The descriptive statistics will include the mean and median response, variation in responses, and skewness of responses by item. We will use linear and logistic regressions to examine the association between survey responses and hospital characteristics, including hospital performance, size, and region. We plan two main analyses. First, we will use univariate analyses to examine associations between performance and hospital characteristics, including characteristics obtained from the survey and characteristics obtained from administrative data sources such as practice size and location/region. Second, multivariate regression analyses will be used to examine associations between performance and unintended consequences, barriers to reporting and improvement (e.g., difficulties with reporting data or with electronic health record [EHR] use), drivers of improvement, and changes to improve care delivery, adjusting for potential confounding factors identified in the initial univariate analyses. Results from these analyses will allow us to determine the fraction of variation in performance that can be explained by information obtained from the survey. In addition, it may be appropriate to treat survey responses as the response variable for certain analyses. For example, increased self-reported overtreatment may be stimulated in environments where high performance is encouraged, making it useful to examine whether high performance is associated with higher rates of unintended consequences. Therefore, in consultation with CMS, we will consider such additional analyses that investigate survey responses as the response variable and performance as an independent variable.
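
As referenced in item 1 above, the sketch below illustrates the basic weighting logic using stratum-level design weights and a simple weighting-cell nonresponse adjustment; the hospital-level logistic-regression and raking adjustments described above would refine this, and all file and variable names are hypothetical.

```python
import pandas as pd

# Illustrative files; names and columns are hypothetical.
resp = pd.read_csv("survey_respondents.csv")   # one row per responding hospital, with 'stratum'
frame = pd.read_csv("sampling_frame.csv")      # one row per eligible hospital, with 'stratum' and boolean 'sampled'

# Design weight: eligible hospitals / sampled hospitals within each stratum.
eligible = frame.groupby("stratum").size()
sampled = frame[frame["sampled"]].groupby("stratum").size()
design_wt = eligible / sampled

# Simple nonresponse adjustment: sampled / responded within each stratum
# (a weighting-cell stand-in for the hospital-level logistic-regression adjustment described above).
responded = resp.groupby("stratum").size()
nr_adj = sampled / responded

resp["weight"] = resp["stratum"].map(design_wt * nr_adj)

# Weighted national estimate for a hypothetical 0/1 survey item.
item = "uses_decision_support"
weighted_prevalence = (resp[item] * resp["weight"]).sum() / resp["weight"].sum()
print(round(weighted_prevalence, 3))
```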



  2. Procedures for Collection of Information

Identification of Appropriate Survey Respondents. The first step in fielding both the semi-structured interviews and the standardized survey will be to identify the most appropriate respondent for these data collection activities, whom we refer to as the quality leader for the organization: the individual within the organization who is most familiar with the CMS performance measures and with the actions and quality improvement activities the organization has undertaken to improve performance in response to these measures. Once we have drawn the sample, we will contact each hospital to identify the quality leader.


We understand the potential concern about ensuring that the individuals identified at hospitals are equivalent. To ensure that survey and interview respondents are comparable across facilities, we will call each sampled hospital to identify the correct respondent—the person who is knowledgeable about CMS quality measures and the actions the hospital has taken to respond to these measures—to whom we will address the survey. Although this individual often carries the title of chief quality officer, we purposefully did not identify the hospital leader using a specific title because the exact title may vary between facilities. We used this strategy during formative interviewing and cognitive testing, and we were able to identify a quality leader within each organization. During the interviews, these individuals demonstrated that they possessed the knowledge necessary to address the questions on the survey. The types of responses we obtained in survey development were comparable across hospitals, and the individuals did not demonstrate problems providing answers to the questions (see Attachment I, “Development of Two National Provider Surveys,” which summarizes findings from the formative interview work).


Semi-Structured Interview. Using information provided by Health Services Advisory Group, Inc. (HSAG) (name, job title, mailing address, email address, and telephone extension of the hospital quality leader), RAND will send the hospital quality leader a letter via email that describes the study and interview and invites the hospital leader or a designee to take part in the interview. Responding hospital leaders will be contacted to schedule an interview appointment. RAND data collection staff will follow up by phone 3 to 5 days after the invitation letter is emailed to confirm interest in and availability for participating in the interview. To minimize non-response bias, we will make up to 10 attempts, by phone and email, to contact the quality leaders and encourage them to participate in the interview. We will schedule the interview at a date and time that is convenient for the quality leader and, as necessary, will work with each hospital leader’s administrative assistant to schedule interview appointments; in previous survey work, we have found this protocol to be effective at reducing non-response.


Standardized Survey. Data collection staff will contact each sampled hospital to confirm the name, job title, mailing address, email address, and telephone extension of the hospital quality leader. This will allow us to personalize survey invitations. To promote the likelihood of survey participation, we plan multi-mode data collection for the hospital quality leader survey, employing Web, mail, and telephone as data collection or prompting modes. To allow adequate time for each mode and for USPS delivery of mail survey returns, we have planned a field period of 9 to 12 weeks.


Following the best practice recommended by Dillman, we propose to contact non-responders using varying modes, including modes different from the data collection modes.1


Weeks 1–3 – Initial and follow-up email invitations to complete the survey by Web.

All hospital leaders will receive a maximum of two invitations to participate in the survey via the Web. These invitations will be sent via email 1 week apart and will contain sufficient information for informed consent as well as a hospital-specific personal identification number (PIN) code that allows access to the Web survey for that hospital. If no email address is available, the invitations will be sent via first class mail.


Week 4 – Mail survey is sent to all non-responding quality leaders. To reduce non-response rates, 4 weeks after the initial invitation to the Web survey, non-responding hospital leaders will receive a paper version of the survey via first class mail.


Week 7 – Commence phone calls to non-responding quality leaders to prompt return of the mail survey or completion of the Web survey. Seven weeks after the initial invitation to the Web survey, non-responding hospital leaders will be contacted by telephone to prompt completion of the survey via Web or return of the mailed survey via fax. Note that to minimize data collection costs related to engaging large numbers of hospital leaders by telephone, we will initially contact non-responders by email or by mail and reserve the more expensive phone outreach until later in the data collection period, when there will likely be fewer non-responders. We anticipate close of the field after 12 weeks of data collection.


Throughout data collection, we will track response and cooperation within each sample stratum and employ additional efforts or sample to achieve sufficient response in each stratum. We anticipate the procedures outlined above and the goal of 900 completed surveys will result in a response rate of 40% to 60%.


  3. Methods to Maximize Response Rates and Deal with Non-Response


Semi-Structured Interview. We will maximize response to the semi-structured interview by conducting the interview at a day and time within the field period that is most convenient for the hospital quality leader. In addition, 3 to 5 days after the invitation letter is emailed, RAND data collection staff will follow up by phone to confirm interest in and availability for participating in the interview. For those hospital leaders who are willing to participate, we will make up to 10 attempts, by phone and email, to schedule an appointment for the interview in order to minimize non-response bias. We will also work with each hospital leader’s administrative assistant to get the interview scheduled; in previous survey work, we have found this protocol to be effective at reducing non-response. The hospital quality leader may designate another individual within the organization to participate in the interview, which may further maximize participation. Those who refuse to participate in the interview or who fail to respond to the invitations altogether will be replaced with a hospital quality leader from another hospital with the same characteristics. During the formative development work, we generally found hospitals willing to participate in the interviews, as they wanted to share their experiences with the CMS measures and what they are doing to improve their performance on these measures.


Standardized Survey. Published surveys of hospital leaders conducted in the past 10 years report response rates as low as 20% and as high as 63% (Blendon et al., 2004; Weissman et al., 2005). In addition, surveys of organizations and/or individuals in leadership roles have experienced an overall decline in response rates similar to surveys of general populations (Cycyota and Harrison, 2006; Baruch and Holtom, 2008). We used these studies and the survey development team’s previous experience conducting interviews and surveys with hospitals to arrive at our estimate of a 44% response rate. As described in Section B2 above, we plan to maximize response rates for the standardized survey through:

  • Careful identification of the appropriate respondent,

  • Use of personalization,

  • Multiple attempts,

  • Multiple modes of survey administration, and

  • Alternative modes for non-response contacts.


We anticipate the data collection procedures will result in a response rate of 40 to 60 percent, and we will release sufficient sample to achieve 900 completed surveys. We will track both facility characteristics and titles of hospital leaders among non-responding hospitals to better adjust for non-response in analyses of results, to examine possible response bias, and to describe the characteristics of non-responders.


  4. Tests of Procedures or Methods to Be Undertaken

The data collection protocol and draft semi-structured interview guide and draft standardized survey were developed and tested with a small number of providers (please refer to Attachment I, “Development of Two National Provider Surveys,” which summarizes findings from the cognitive testing work). Findings from the formative interviews and cognitive testing helped to determine the structure of the semi-structured interview protocol and the standardized survey and the approach that would need to be used to identify the appropriate respondent(s) to the survey in the provider organization.

Formative interviews were used to guide the development of the structured survey and semi-structured interview protocol. Nine hospitals participated in the formative interviews that were conducted by telephone.


The formative interviews with hospitals were designed to:

  • Assess whether providers could understand the information we sought to collect to address the five research questions

  • Assess whether providers would provide biased (i.e., only favorable) responses with regard to CMS programs or their actions taken in response to performance measurement

  • Explore the language that potential respondents might use to describe the topics, and

  • Identify potential response options or areas to probe.


Hospitals included in the formative interviews were purposively sampled to represent variation in the size of the provider entity, the region of the country and location (urban vs. rural) of the provider, and performance on CMS measures. The individuals interviewed were senior leaders who were responsible for the overall quality and safety of clinical care within the hospital. Interviewees were asked to provide feedback on lessons learned related to the use of the performance measures and on any other concerns not covered in the semi-structured interview guide.


The draft standardized survey was tested with a total of 12 hospitals via cognitive interviews conducted by telephone. A range of hospital types (size, quality performance, and region) was selected for the cognitive interviews to capture variation in the expected range of responses. The cognitive interviews were designed to assess respondents’ understanding of the draft survey items and key concepts and to identify problematic terms, items, or response options. A first round of testing was conducted with six hospitals. During this time, the draft instruments were also reviewed by the RAND and HSAG project teams, a technical expert panel convened by HSAG, and the Federal Advisory Steering Committee. The draft surveys were revised based on the findings from the first round of interviews and feedback received from the various reviewers. A second round of cognitive interviews with an additional six hospitals was conducted to test the revised version of the hospital survey. The draft survey was then revised based on findings from both rounds of cognitive interviews and feedback from the various reviewers to produce the final version of the hospital survey to be used in 2016.


  5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The survey, sampling approach, and data collection procedures were designed by the RAND Corporation, under contract to HSAG, under the leadership of:

Cheryl Damberg, PhD
RAND Corporation
1776 Main Street
Santa Monica, CA 90407

Kanaka Shetty, MD
RAND Corporation
1776 Main Street
Santa Monica, CA 90407


Key input to the statistical aspects of the design was received from the following individuals:

  • Cheryl Damberg, RAND Project Director

  • Kanaka Shetty, RAND Co-Project Director

  • Layla Parast, Statistician

  • Michael Robbins, Statistician

  • Marc Elliott, Senior Statistician


The semi-structured interview data will be collected by RAND; the standardized survey data will be collected by a survey vendor.

References

Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum; 1988.


Blendon RJ, Schoen C, DesRoches C, Osborn R, Zapert K, Raleigh E. Confronting competing demands to improve quality: a five-country hospital survey. Health Affairs. 2004;23(3):119–135.


Cycyota CS, Harrison DA. What (not) to expect when surveying executives: a meta-analysis of top manager response rates and techniques over time. Organizational Research Methods. 2006;9:133–160.


Baruch Y, Holtom BC. Survey response rate levels and trends in organizational research. Human Relations. 2008;61:1139–1160.


Weissman JS, Annas CL, Epstein AM, Schneider EC, Clarridge B, Kirle L, Gatsonis C, Feibelmann S, Ridley N. Error reporting and disclosure systems: views from hospital leaders. JAMA. 2005;293(11):1359–1366.


1 Dillman DA, Smyth JD, Christian LM. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. Hoboken, NJ: John Wiley & Sons; 2009.

