



Part C and D Complaints Resolution Performance Measure

CMS-10308 (OMB Control Number: 0938-1107)




OMB Supporting Statement Part B




March 15, 2013

Collection of Information Involving Statistical Methods


1. Respondent Universe


CMS is interested in gathering information to determine beneficiaries’ satisfaction with the complaints resolution process and in developing internal monitoring measures and tools. In this data collection effort, the monitoring measures are developed separately for each contract. The survey population consists of beneficiaries whose complaints were filed against their respective contracts at any time during a calendar year. This data collection period was chosen because CMS intends to survey a census (as opposed to a sample) of beneficiaries in order to obtain the most statistically valid information at the contract level and for certain subpopulations, as well as to monitor performance throughout the calendar year. Beneficiaries who filed a complaint against any Medicare Advantage or Prescription Drug contract will be surveyed, regardless of the contract’s enrollment size. However, members of 800 series contracts will be excluded from selection. 800 series contracts are certain types of employer-sponsored group contracts (employer/union-only group waiver contracts, also referred to as EGWPs) offered, sponsored, or administered by MA Organizations, PDP sponsors, and Section 1876 Cost Plan Sponsors. Here, as in other contexts, CMS excludes EGWPs because they are overseen differently than other MA and PDP contracts. Complaints that are not relevant to the eligible contracts will not be included in the universe. Complaints filed by providers and those that fall under the CTM exclusion criteria established by CMS will also be excluded.


This survey will collect data about beneficiaries’ experience with contract sponsors’ complaint resolution processes and the effectiveness of the resolution (a discussion of the survey questions and monitoring measures is included in Supporting Statement A, section B.16.a, Tabulations). The survey census will be drawn from all complaints as they are closed in the CMS Complaints Tracking Module (CTM) database, pulled every two weeks on a rolling basis. The data collection period allows a waiting period of 7 days for CMS and contract records to be updated before attempting to contact the beneficiary.


We propose to survey all complaints in the universe, rather than a sample from the universe, for two major reasons. First, CMS aims to develop statistically sound monitoring measures from the survey responses for all contracts. Given the relatively low response rate expected for a Web survey, most contracts (those with small and medium complaint counts) would need to include all of their complaints to reach the number of responses necessary for developing statistically valid measures. Based on 2012 complaints data from the CTM, 580 of the 587 eligible contracts with at least one complaint would need to include all of their complaints to reach a precision of a 5% error margin at a 95% confidence level, assuming a 30% response rate and a design effect (DEEF) of 1.2. The total initial sample size would be 83% of the total universe size. Second, for the few contracts with large numbers of complaints, we could survey a sample instead of the whole population. However, it is challenging to determine in advance which contracts should be sampled, since the number of complaints at a specific point in time is uncertain until close to the midpoint of the calendar year and the number of complaints for each contract changes from year to year.


Table B.1 summarizes the total number of complaints in the universe, the total responses needed based on error margins of 5% and 10% (at four different confidence levels), and the total initial sample size estimated assuming a 30% response rate and a design effect of 1.2. The total number of complaints resolved in 2012 was 59,032, spread across 587 eligible contracts. The distribution of complaints among contracts is uneven, ranging from 1 to 500 or more. As the Total rows in Table B.1 show, there are several options for the initial sample size under a DEEF of 1.2, a 30% response rate, and the different confidence levels. For example, with a 5% error margin, the initial sample would be 41,611 complaints at an 80% confidence level, or 49,059 complaints at a 95% confidence level. Note that these numbers are close to the total complaint count.


In our currently approved OMB survey and sampling methodology, we are using an error margin of 10% to estimate the required number of respondents. As seen below, we could achieve a confidence level higher than 95% (the number of respondents required for a 10% error margin at a 95% confidence level is 16,753) if, as expected, we obtain a 30% response rate from the universe (59,032 complaints), that is, 17,710 responses.
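
For reference, the required-respondent figures in Tables B.1 and B.2 are consistent with the standard sample-size formula for a proportion with a finite population correction. The short sketch below is illustrative only: it assumes maximum variability (p = 0.5), upward rounding, and commonly used z-scores, none of which are stated explicitly in this document.

    from math import ceil

    # Approximate two-sided z-scores for the confidence levels used in Tables B.1 and B.2.
    Z_SCORES = {0.95: 1.960, 0.90: 1.645, 0.85: 1.440, 0.80: 1.282}

    def required_respondents(population, margin=0.05, confidence=0.95, p=0.5):
        # Respondents needed to estimate a proportion, with a finite population correction.
        z = Z_SCORES[confidence]
        n0 = (z ** 2) * p * (1 - p) / (margin ** 2)   # infinite-population sample size
        n = n0 / (1 + (n0 - 1) / population)          # finite population correction
        return min(population, ceil(n))               # never exceed the complaint universe

    # Illustrative checks against Table B.2 (5% error margin, 95% confidence level):
    print(required_respondents(1581))   # about 310, as reported for contract S5803
    print(required_respondents(646))    # about 242, as reported for contract S5596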


Table B.1 Summary of estimated sample size for 2012

Panel A: 5% error margin

| Category by complaint count | Count of contracts | Complaint population | Req. 95% | Req. 90% | Req. 85% | Req. 80% | Sample 95% | Sample 90% | Sample 85% | Sample 80% |
| 1-19 | 247 | 2,024 | 2,024 | 2,024 | 2,024 | 2,024 | 2,024 | 2,024 | 2,024 | 2,024 |
| 20-49 | 131 | 4,418 | 4,108 | 3,979 | 3,855 | 3,724 | 4,418 | 4,418 | 4,418 | 4,418 |
| 50-99 | 99 | 6,989 | 5,939 | 5,573 | 5,240 | 4,917 | 6,989 | 6,989 | 6,989 | 6,989 |
| 100-499 | 91 | 17,521 | 11,259 | 9,857 | 8,753 | 7,772 | 17,521 | 17,521 | 17,521 | 17,521 |
| 500+ | 19 | 28,080 | 5,290 | 4,044 | 3,255 | 2,672 | 18,107 | 14,998 | 12,808 | 10,659 |
| Total | 587 | 59,032 | 28,620 | 25,477 | 23,127 | 21,109 | 49,059 | 45,950 | 43,760 | 41,611 |

Panel B: 10% error margin

| Category by complaint count | Count of contracts | Complaint population | Req. 95% | Req. 90% | Req. 85% | Req. 80% | Sample 95% | Sample 90% | Sample 85% | Sample 80% |
| 1-19 | 247 | 2,024 | 2,024 | 2,024 | 2,024 | 2,024 | 2,024 | 2,024 | 2,024 | 2,024 |
| 20-49 | 131 | 4,418 | 3,327 | 3,019 | 2,758 | 2,496 | 4,418 | 4,418 | 4,418 | 4,418 |
| 50-99 | 99 | 6,989 | 4,066 | 3,457 | 3,001 | 2,616 | 6,989 | 6,989 | 6,989 | 6,989 |
| 100-499 | 91 | 17,521 | 5,666 | 4,467 | 3,665 | 3,055 | 16,739 | 15,190 | 13,528 | 11,867 |
| 500+ | 19 | 28,080 | 1,669 | 1,208 | 944 | 756 | 6,641 | 4,810 | 3,744 | 2,994 |
| Total | 587 | 59,032 | 16,753 | 14,176 | 12,393 | 10,948 | 36,812 | 33,432 | 30,704 | 28,293 |

Note: "Req." columns show the required number of respondents at the indicated confidence level for the panel's error margin. "Sample" columns show the estimated initial sample size with DEEF = 1.2 and a 30% response rate.



Table B.2 displays the required number of responses and the estimated sample size for selected contracts with various complaint population sizes, in the event that a census survey were not implemented. The last seven contracts at the bottom of the table are those that could use a sample instead of the whole population at a precision level of a 5% error margin with a 95% confidence level, assuming a 30% response rate and a DEEF of 1.2. For example, contract S5803 has a total of 1,581 complaints, and 310 responses are needed to develop the measures at a 5% error margin with a 95% confidence level. Assuming a 30% response rate, the initial sample size must be inflated to 1,033 (310/0.3). Since we would sample complaints every two weeks immediately after the complaints are resolved, we also need to account for the effect of differential selection probabilities, captured by the design effect (DEEF), on measure precision. The final initial sample size would therefore be about 1,237 (1,033 x 1.2). As can be seen, this number (1,237) is close to the total universe (1,581), which is only known after the end of the calendar year.

For other contracts, we would have to include all complaints in the survey in order to obtain as many responses as possible. For example, contract S5596 has 646 complaints in total, and 242 responses would be needed to develop the measures with a 5% margin of error and a 95% confidence level. Assuming a 30% response rate and a DEEF of 1.2, we would need an initial sample of 968, which is larger than the total number of complaints received in 2012. For our purposes, we would have to include all of the complaints. In that case, we would expect only about 193 (646 x 30%) responses, and a measure developed with these 193 responses would only reach a precision of a 5% error margin at a 90% confidence level.
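
The inflation from required respondents to an initial sample size described above can be expressed as a short calculation: divide by the assumed 30% response rate, multiply by the assumed DEEF of 1.2, and check whether the result meets or exceeds the contract's complaint universe. The sketch below is a minimal illustration; the exact rounding conventions used in the text and in Table B.2 are not specified, so figures may differ by a few units.

    def initial_sample_size(required, response_rate=0.30, deef=1.2):
        # Inflate the required respondent count by the assumed response rate and design effect.
        return round(required / response_rate * deef)

    def census_needed(population, required, response_rate=0.30, deef=1.2):
        # A census is needed when the inflated sample meets or exceeds the complaint universe.
        return initial_sample_size(required, response_rate, deef) >= population

    # Contract S5803: 310 required respondents out of 1,581 complaints -> sampling is possible
    print(initial_sample_size(310), census_needed(1581, 310))   # 1240 False (text reports ~1,237)
    # Contract S5596: 242 required respondents out of 646 complaints -> a census is required
    print(initial_sample_size(242), census_needed(646, 242))    # 968 True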



Table B.2 Estimated sample size for contracts in 2012 (selected contracts for illustration)

| Contract ID | Complaint population | Req. 95% | Req. 90% | Req. 85% | Req. 80% | Sample 95% | Sample 90% | Sample 85% | Sample 80% |
| H0294 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| H1302 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 |
| H1418 | 50 | 45 | 43 | 41 | 39 | 50 | 50 | 50 | 50 |
| H4209 | 100 | 80 | 74 | 68 | 63 | 100 | 100 | 100 | 100 |
| H3456 | 201 | 133 | 116 | 103 | 91 | 201 | 201 | 201 | 201 |
| S7694 | 311 | 173 | 145 | 125 | 108 | 311 | 311 | 311 | 311 |
| S5932 | 409 | 199 | 164 | 138 | 118 | 409 | 409 | 409 | 409 |
| R5941 | 523 | 222 | 179 | 149 | 125 | 523 | 523 | 523 | 500 |
| S5596 | 646 | 242 | 191 | 158 | 131 | 646 | 646 | 629 | 524 |
| S5617 | 710 | 250 | 197 | 161 | 134 | 710 | 710 | 643 | 534 |
| H0543 | 999 | 278 | 214 | 172 | 141 | 999 | 853 | 688 | 564 |
| S5810 | 1,020 | 280 | 215 | 173 | 142 | 1,020 | 857 | 690 | 566 |
| S5660 | 1,056 | 282 | 216 | 174 | 142 | 1,056 | 863 | 694 | 568 |
| H0524 | 1,070 | 283 | 217 | 174 | 143 | 1,070 | 865 | 696 | 569 |
| R5826 | 1,355 | 300 | 226 | 180 | 147 | 1,198 | 903 | 720 | 586 |
| S5967 | 1,441 | 304 | 228 | 182 | 148 | 1,214 | 912 | 726 | 589 |
| S5803 | 1,581 | 310 | 232 | 184 | 149 | 1,237 | 925 | 734 | 595 |
| S5601 | 2,199 | 328 | 242 | 190 | 153 | 1,309 | 965 | 759 | 611 |
| S5768 | 2,207 | 328 | 242 | 190 | 153 | 1,310 | 965 | 759 | 611 |
| S5884 | 4,897 | 357 | 257 | 199 | 159 | 1,426 | 1,026 | 796 | 635 |
| S5820 | 5,423 | 359 | 258 | 200 | 160 | 1,436 | 1,032 | 800 | 637 |

Note: "Req." columns show the required number of respondents at the indicated confidence level with a 5% error margin. "Sample" columns show the estimated initial sample size with DEEF = 1.2 and a 30% response rate.



Table B.3 displays the distribution of complaints by major complaint category and by month in 2012. We anticipate that the survey will be conducted on a monthly basis during an entire calendar year.

Table B.3 Distribution of Complaints by Month (resolved) and Category (2012)

| Complaint Category | Total Volume | Total % | Month 1 | Month 2 | Month 3 | Month 4 | Month 5 | Month 6 | Month 7 | Month 8 | Month 9 | Month 10 | Month 11 | Month 12 |
| Enrollment/Disenrollment | 19,621 | 33.2% | 40.5% | 34.9% | 32.2% | 32.5% | 30.5% | 29.4% | 30.2% | 29.6% | 30.9% | 28.1% | 32.5% | 38.7% |
| Benefits/Access | 14,943 | 25.3% | 23.7% | 24.5% | 27.4% | 26.7% | 28.2% | 27.4% | 25.0% | 23.9% | 23.4% | 25.9% | 23.8% | 21.8% |
| Pricing/Co-Insurance | 6,506 | 11.0% | 9.7% | 10.1% | 9.8% | 11.1% | 10.8% | 12.4% | 12.0% | 13.1% | 13.4% | 13.5% | 11.4% | 9.8% |
| Formulary | 5,794 | 9.8% | 10.6% | 12.7% | 12.1% | 10.0% | 9.3% | 8.7% | 10.0% | 8.5% | 7.4% | 6.8% | 5.2% | 8.5% |
| Plan Administration | 4,361 | 7.4% | 6.6% | 7.2% | 6.4% | 7.2% | 7.3% | 6.1% | 6.8% | 7.9% | 7.6% | 8.7% | 10.4% | 9.1% |
| Customer Service | 2,559 | 4.3% | 2.9% | 3.9% | 4.2% | 4.9% | 5.0% | 4.9% | 4.9% | 5.5% | 5.1% | 4.6% | 4.8% | 3.3% |
| Exceptions/Appeals | 2,479 | 4.2% | 3.6% | 3.6% | 4.3% | 3.7% | 4.0% | 4.8% | 4.4% | 4.7% | 4.6% | 5.1% | 4.8% | 4.5% |
| Marketing | 2,149 | 3.6% | 1.4% | 2.4% | 2.7% | 3.2% | 3.8% | 5.2% | 5.5% | 5.1% | 6.1% | 5.8% | 5.3% | 3.4% |
| Other | 620 | 1.1% | 1.0% | 0.6% | 0.8% | 0.7% | 1.1% | 1.2% | 1.2% | 1.7% | 1.5% | 1.6% | 1.8% | 0.9% |
| Total (monthly volume) | 59,032 |  | 7,237 | 8,014 | 7,769 | 5,936 | 5,254 | 4,245 | 3,704 | 3,421 | 2,554 | 3,168 | 3,107 | 4,623 |

Note: The first 8 major complaint categories are listed; the remaining categories are grouped under "Other." Month columns show the percentage of that month's resolved complaints falling in each category (percentages are based on column totals). The Total row shows the number of complaints resolved in each month.


2. Procedures for the Collection of Information


a) Statistical Methodology, Estimation, and Degree of Accuracy


We recommend not pursuing a sample of complaints as the primary means of data collection for the survey. Instead, we propose to survey the universe of CTM complaints. There are two arguments for this approach. First, CMS aims to develop statistically sound monitoring measures from the survey responses for all contracts. Given the relatively low response rate expected for a Web survey, the large majority of contracts would need all of their complaints selected in order to reach the number of responses necessary for developing statistically valid measures. Second, it is challenging to determine which contracts should be sampled for an ongoing activity (the filing of CTM complaints), since the total number of complaints at a specific point in time is uncertain until at least halfway through the calendar year for any contract, and each contract's complaint volume changes from year to year.

If the complaint volume in CY2013 is similar to CY2012, we estimate that a census of about 59,032 beneficiaries will result in 17,710 completed web surveys (a 30% response rate) for CY2013. This would allow us to reach a precision level slightly better than a 10% margin of error at a 95% confidence level.


b) Unusual Problems Requiring Specialized Sampling Procedures


This survey will collect data about immediate-need complaints, which must be closed within 48 hours, and urgent complaints, which must be closed within 7 to 10 days. To account for the time health contracts need to close complaints filed during a given week, each bi-weekly data pull will include complaints filed during the 7-day period that ended 10 days prior to the start of sample selection. This delay in data collection allows time for beneficiaries to receive notification of their complaint resolution and for data to be updated in the electronic systems.
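
As an illustration of this timing rule, the sketch below computes the filing window covered by a given data pull. The rule itself (a 7-day filing period ending 10 days before the start of sample selection) is taken from the text; the function, parameter names, and example date are hypothetical.

    from datetime import date, timedelta

    def filing_window(selection_start, lag_days=10, window_days=7):
        # The window is the 7-day filing period that ended 10 days before sample selection begins.
        window_end = selection_start - timedelta(days=lag_days)
        window_start = window_end - timedelta(days=window_days - 1)
        return window_start, window_end

    # Hypothetical example: sample selection beginning June 17, 2013
    start, end = filing_window(date(2013, 6, 17))
    print(start.isoformat(), end.isoformat())   # 2013-06-01 2013-06-07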


c) Periodic Cycles to Reduce Burden


We will implement the survey over a period of 3 months in 2013. The analysis of the survey data and the construction of the monitoring measures will be completed in August 2013. Data collection will run concurrently with the analysis, continue into 2014, and run for 12 months in each calendar year thereafter in order to collect data regarding beneficiaries' recent experience with their health contract's complaint resolution process. The need for each collected survey to target one specific complaint makes a cyclical collection of data infeasible.




3. Methods to Maximize Response Rates and Data Reliability


a) Response Rates


We estimate that a census of about 59,032 beneficiaries will result in 17,710 completed web surveys (a 30% response rate), with an additional 500 completed paper surveys. To achieve this target, we will use a web survey as the primary mode of data collection, with a paper-and-pencil self-administered survey as a secondary mode for beneficiaries who cannot access the web survey. We believe this response rate is achievable for four reasons: (1) this is a government-sponsored survey related to Medicare; (2) we will be surveying a motivated population of people who have taken a stance and filed a complaint by calling 1-800-Medicare; (3) we have achieved an 80% response rate in a telephone-mail survey on the same topic; and (4) we are surveying respondents who filed their complaints through an online portal and asking them to respond to an online survey. Research1 has shown that respondents who are offered their preferred survey mode (in this case, a web survey with a mail option) are more likely to respond.


First, before the web survey begins, an advance letter describing the purpose and sponsorship of the survey will be mailed to potential respondents (the letter is presented in Appendix D). The letter will provide a toll-free call-in number and a link and instructions for how to access the survey. One or two reminder postcards with a URL for the web survey will be sent to all nonrespondents approximately two weeks after the advance letter mailing.


b) Reliability of Data Collection


The beneficiary questionnaire was built on questionnaires developed for other studies, including the CAHPS Hospital Survey and the CAHPS Health Plan Survey (Adult Medicaid Questionnaire), both of which were reviewed and approved by OMB. Although the two CAHPS surveys served as the original framework for the questionnaire, PDP Customer Service measures were reflected in several questions. Question topics regarding customer satisfaction from the J.D. Power and Associates 2009 National Health Insurance Plan Study were also incorporated. The questions were designed to ensure that they would be easily understood by respondents. Revisions were made to the draft questionnaire based on the results of the pretest, feedback from CMS stakeholders, and public comments received in response to the publication of the 60-day Federal Register Notice during the 2010 OMB PRA process.


The use of a programmable survey will help to ensure the consistency of the data. The web-based survey instrument controls question branching (reducing item nonresponse due to interviewer error), modifies wording (providing memory aids and probes and personalizing questions), and constructs complex sequences that are not possible to produce or are less accurate in hard-copy surveys. The probes, verifications, and consistency checks are built into the system and standardize the procedures. These procedures ensure the reliability of the data collection methods and the data collected through those methods. Issues regarding the uniformity of completed surveys through the web-based mode of data collection are detailed in Supporting Statement A (Section B.3. Use of Information Technology).


4. Tests of Procedures or Methods


Pilot Test: After receipt of OMB approval, we will conduct a pilot test with approximately 500 beneficiaries in April 2013. The sample will be selected randomly following the proposed sampling plan for the actual survey. The purpose of the pilot is to test the usability of the web survey, refine the data collection process, and produce preliminary measure statistics; essentially, it is a dry run of all activities for the full-scale data collection. With respect to the data collection process, testing will include:

  • Sending a pre-notification letter to sampled beneficiaries;

  • Loading sample information into the survey website; and

  • Reviewing the data collected to make sure the questions are performing as intended under real field conditions.


Findings from the pilot test will be used to refine the data collection process to ensure seamless implementation of the main survey. Both quantitative and qualitative analyses will be conducted with the pilot test data. These analyses will focus on one main objective: to note any necessary changes to logistics and operations.


The answers from the pilot will not be added to the survey results from the actual data collection. At the end of the pilot test, we will submit a sample report reflecting the information collected from the pilot test. This sample report will assist CMS in refining the reporting requirements.


5. Individuals Consulted on Statistical Methods


The following persons outside of CMS contributed to, reviewed, and/or approved the design, instrumentation and sampling plan:


| Name | Affiliation | Telephone Number |
| Gongmei Yu | IMPAQ International | 443-539-9769 |
| Oswaldo Urdapilleta | IMPAQ International | (202) 696-1003 |



1 Olson, K., Smyth, J.D., & Wood, H.M. (2012). Does giving people their preferred survey mode actually increase survey participation rates? An experimental examination. Public Opinion Quarterly, 76(4), 611–635.


