
Health Insurance Marketplace Consumer Experience Surveys: Enrollee Satisfaction Survey and Marketplace Survey Data Collection (CMS-10488)

OMB: 0938-1221


Health Insurance Marketplace Consumer Experience Surveys: Qualified Health Plan Enrollee Experience Survey

Supporting Statement—Part B
Collections of Information Employing Statistical Methods

January 23, 2021

Centers for Medicare & Medicaid Services


1. Potential Respondent Universe and Sampling Methods

This supporting statement includes information in support of the Qualified Health Plan Enrollee Experience Survey (“QHP Enrollee Survey”). CMS developed the QHP Enrollee Survey, a subset of whose measures is included in the Quality Rating System (QRS). CMS designed the QRS to provide comparable and useful information to consumers about the quality of health care services and enrollee experience with QHPs offered through the Marketplaces. Public reporting of the QRS was originally planned for the 2017 open enrollment period. In April 2016, CMS announced that public reporting of quality rating information by the Federally-facilitated Marketplaces (FFMs), including FFMs where the State performs plan management functions and State-based Marketplaces on the Federal Platform (SBM-FPs), will begin during the 2018 open enrollment period, with a limited pilot in place during the 2017 open enrollment period. During the pilot, CMS will display QRS star ratings in select States whose consumers use HealthCare.gov during the 2017 open enrollment period. The States currently selected are Michigan, Ohio, Oregon, Pennsylvania, Virginia, and Wisconsin. CMS selected these States because they have ample participation of QHP issuers on their respective Marketplaces and relative variation in QRS star ratings based on 2015 beta test results. FFMs not in the pilot will not display star ratings during the 2017 open enrollment period; they will display star ratings during the national implementation beginning with the 2018 open enrollment period. State-based Marketplaces (SBMs) whose consumers do not use HealthCare.gov may display QHP quality information during the open enrollment period for the 2017 plan year or follow the revised timeframe.

CMS is conducting an additional year of focused consumer testing of the display of QRS star ratings to maximize the clarity and consistency of the information provided and to assess how the QHP quality rating information is displayed on HealthCare.gov. Although the public reporting of QRS results will now begin during the 2018 open enrollment period, there are currently no changes to the QRS methodology, which is designed to encourage the delivery of high quality health care services and improve health outcomes of QHP enrollees over time.

1.1 Sampling Units

As outlined in 45 CFR § 156.1125(b), Qualified Health Plan (QHP) issuers are required to conduct the QHP Enrollee Survey at the level specified by HHS for all QHPs that had more than 500 enrollees as of the previous year. CMS has established the sampling/reporting unit as the product type (i.e., Exclusive Provider Organization [EPO], Health Maintenance Organization [HMO], Preferred Provider Organization [PPO], Point of Service [POS]) offered by a QHP issuer through the Marketplace in a particular state. For example, XYZ issuer’s HMOs offered through the Marketplace in Florida would be considered a single sampling unit. Depending on the way a QHP issuer packages its plan offerings, the sampling unit might include anywhere from a single QHP to many QHPs spanning all categories of coverage (i.e., bronze, silver, gold, platinum, catastrophic). QHP issuers will create a sample frame for each product type they offer through the Health Insurance Marketplace (Marketplace) within a particular state. Child-only QHPs as well as Standalone Dental Plans (SADPs) are excluded from the QHP Enrollee Survey at this time.

For the 2017 survey, CMS estimates that no more than 350 reporting units will be required to field the QHP Enrollee Survey. This is lower than previous estimates based on two factors: (1) the actual numbers of reporting units required to administer the 2015 and 2016 QHP Enrollee Surveys were lower (298 reporting units in 2015 and 311 reporting units in 2016) and (2) a number of QHP issuers have announced their intention to reduce the number of health plans that they offer through the Health Insurance Marketplace in 2017.

1.2 Sample Frame & Respondent Universe

The sample frame for each sample unit is generated by the QHP issuer and then provided to the issuer’s CMS-approved survey vendor, who is responsible for drawing a simple random sample. Eligible sample units for the 2017 QHP Enrollee Survey are those that meet both of the following criteria:

  1. The sample unit was offered through a Marketplace in 2017 and

  2. The sample unit had more than 500 enrollees, regardless of age, on both July 1, 2016 and January 1, 2017.

Issuers must collect and report survey data for all eligible sample units to comply with the regulation.

Eligible sample units are composed of eligible QHPs. Individual QHPs must also meet eligibility criteria to be included in a sample unit. An eligible QHP must meet all three of the following criteria:

  1. An eligible QHP has a 14-digit Standard Component ID number (SCID) assigned in the Health Insurance Oversight System (HIOS) database and

  2. An eligible QHP is offered on an Individual Marketplace and/or a SHOP Marketplace and

  3. An eligible QHP provides family and/or adult-only medical coverage, regardless of whether or not it also covers dental care or vision care.

By this definition, an eligible QHP is synonymous with an eligible SCID in the HIOS system. QHPs with HIOS 2-digit variant suffixes other than ‘00’ of eligible 14-digit SCIDs are eligible to be included in the sample unit. Once the eligible SCIDs are identified, issuers group them by product type (i.e., HMO, PPO, POS, or EPO) within each state. Each state-issuer-product type combination is a potential sample unit. Eligible sample units are those potential sample units that meet the two criteria listed above (i.e., offered through a Marketplace in 2017 and more than 500 enrollees on both July 1, 2016 and January 1, 2017).
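To make the grouping concrete, the sketch below assembles eligible sample units from illustrative plan records. The record layout and field names (scid, enroll_jul, enroll_jan) are assumptions for illustration only, not the HIOS schema.

    from collections import defaultdict

    # Illustrative QHP records; field names and values are assumptions
    # for this sketch, not the HIOS schema.
    qhps = [
        {"scid": "12345FL0010001", "state": "FL", "issuer": "XYZ",
         "product_type": "HMO", "enroll_jul": 400, "enroll_jan": 450},
        {"scid": "12345FL0010002", "state": "FL", "issuer": "XYZ",
         "product_type": "HMO", "enroll_jul": 300, "enroll_jan": 320},
        {"scid": "67890FL0020001", "state": "FL", "issuer": "ABC",
         "product_type": "PPO", "enroll_jul": 200, "enroll_jan": 150},
    ]

    # Group eligible SCIDs into potential sample units by
    # state-issuer-product type.
    units = defaultdict(list)
    for plan in qhps:
        units[(plan["state"], plan["issuer"], plan["product_type"])].append(plan)

    # A sample unit is eligible if its combined enrollment exceeds 500 on
    # both snapshot dates (July 1, 2016 and January 1, 2017).
    eligible = {
        key: plans for key, plans in units.items()
        if sum(p["enroll_jul"] for p in plans) > 500
        and sum(p["enroll_jan"] for p in plans) > 500
    }
    print(sorted(eligible))  # [('FL', 'XYZ', 'HMO')]

Here XYZ’s two Florida HMO plans combine into one eligible sample unit, while ABC’s PPO falls below the enrollment threshold.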

The sample frame for the survey includes all enrollees of each eligible sample unit who meet the following three criteria:

  1. Eligible sample frame members must be 18 years or older on December 31, 2016, and

  2. Eligible sample frame members must have been enrolled in the eligible QHP from July 1, 2016 through December 31, 2016 with no more than one 31-day break in enrollment during those 6 months, and

  3. Eligible sample frame members must have coverage for primary health care through the eligible QHP.

Issuers are to exclude individuals who discontinue their coverage through the QHP for plan year 2017 and those who are deceased as of January 1, 2017. The sample frame is audited by an NCQA-Licensed HEDIS®1 Compliance Organization (NCQA Certified HEDIS® Compliance Auditor) to ensure that the sample frame conforms to all established specifications. A random sample is drawn from each audited eligible sample frame by the issuer’s survey vendor, and inferences are made to the sample frame.


1.3 Sample Size

For the 2017 QHP Enrollee Survey, CMS proposes to continue using a sample size of 1,300 enrollees per reporting unit and to allow QHP issuers to select a larger sample for reporting units (i.e., oversampling) upon approval from the Project Team. In the 2016 administration of the QHP Enrollee Survey, reporting units that had sufficient enrollment to produce the full sample of 1,300 enrollees received an average of 277 responses. An ongoing challenge for the survey is the poor quality of the contact information provided on the sample frame, which results in higher than anticipated noncontact rates. CMS and its contractors have implemented a number of changes to the survey administration procedures to attempt to rectify this problem. Additionally, in its technical assistance to QHP issuers, CMS will emphasize the importance of providing up-to-date contact information for all enrollees in order to avoid future increases in the sample size for the survey.

Oversampling in increments of 5 percent, up to 30 percent, is permitted for the QHP Enrollee Survey to assist issuers and vendors who anticipate achieving a lower number of completes per reporting unit. The primary implication is that, if response rates are low, some measures may not have denominators large enough to permit QRS scoring. If issuers do not obtain scores for particular measures, then they will not be able to evaluate how well they are performing on those measures. Issuers who are willing to incur the additional costs of collecting data from a larger sample may do so if they wish. Oversampling has precedent among other CMS CAHPS® surveys, such as the MA & PDP CAHPS® surveys, which permit oversampling at the contract level upon request to CMS.
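As an illustration of these rules — a base sample of 1,300 drawn as a simple random sample, with optional oversampling in 5 percent increments up to 30 percent — a minimal sketch follows; the function names are ours, not part of any CMS specification:

    import random

    BASE_SAMPLE_SIZE = 1300

    def target_sample_size(oversample_pct: int = 0) -> int:
        """Base sample of 1,300, plus optional oversampling in 5% increments up to 30%."""
        if oversample_pct % 5 != 0 or not 0 <= oversample_pct <= 30:
            raise ValueError("Oversampling must be in 5% increments, up to 30%.")
        return round(BASE_SAMPLE_SIZE * (1 + oversample_pct / 100))

    def draw_sample(frame: list, oversample_pct: int = 0) -> list:
        """Simple random sample from an audited frame (whole frame if it is smaller)."""
        n = min(target_sample_size(oversample_pct), len(frame))
        return random.sample(frame, n)

    # Example: a 10% oversample yields 1,430 sampled enrollees.
    frame = [f"enrollee_{i}" for i in range(50_000)]
    print(len(draw_sample(frame, oversample_pct=10)))  # 1430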

2. Information Collection Procedures

The QHP Enrollee Survey will be conducted by HHS-approved survey vendors who meet minimum business requirements. A similar system is currently used for other CMS surveys, including Medicare CAHPS®, Hospital CAHPS® (H-CAHPS), Home Health CAHPS® (HH-CAHPS), the CAHPS® Survey for Accountable Care Organizations, and the Health Outcomes Survey. Under this model, all issuers that are required to conduct the QHP Enrollee Survey must contract with an HHS-approved survey vendor to collect the data and submit it to CMS on their behalf (45 CFR § 156.1125(a)). CMS is responsible for approving and training vendors, providing technical assistance to vendors, and overseeing vendors to ensure that they are following the data collection protocols.

The data collection protocol for the 2017 QHP Enrollee Survey will closely follow the protocol for 2016, a mixed-mode methodology that combines web, mail, and telephone surveys. First, all sampled enrollees will receive a pre-notification letter informing them that they have been sampled for the survey and providing them with information about the survey and how the data collected will be used. The pre-notification letter also provides information on completing the survey online, including the website URL and the sample member’s user ID and password, which are unique to each sample member. Three days after the pre-notification letter, individuals will receive a mail questionnaire, followed 14 days later by a reminder letter that includes the web survey information again. Four weeks after the first questionnaire is mailed, nonrespondents will receive a second questionnaire. Finally, three weeks after the second questionnaire is mailed, survey vendors will initiate telephone follow-up calls, making no more than six attempts on varying days of the week and at differing times of the day over a minimum of two weeks.

Exhibit B-1. Data Collection Protocol for 2017 QHP Enrollee Survey

January 2017–February 2017: Survey vendors sample enrollees according to sampling protocols.

Day 0: Mail prenotification letter to sampled enrollees.

  • Include the URL and login credentials that offer the option to complete the survey by Internet.

Day 1: Customer support phone center opens (toll-free phone number required).

Day 3: Mail first questionnaire with survey cover letter to nonrespondents 3 calendar days after the prenotification letter is mailed.

Day 17: Mail reminder letter to nonrespondents 14 calendar days after the first questionnaire is mailed. If the 14th calendar day after the first questionnaire mailing date falls on a weekend, survey vendors mail the reminder letter the preceding Friday.

  • Include the URL and login credentials that offer the option to complete the survey by Internet.

Day 31: Mail second questionnaire with survey cover letter to nonrespondents 4 weeks (28 calendar days) after the first questionnaire is mailed.

Days 52–70: Initiate telephone follow-up contact for nonrespondents 3 weeks (21 calendar days) after the second questionnaire is mailed.

  • Make no more than 6 call attempts.

  • Call attempts must occur over a minimum of 2 different calendar weeks.

  • Call attempts must be scheduled at different times of the day on different days of the week.

Day 71: End data collection activities.
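The schedule above can be expressed as offsets from the Day 0 prenotification mailing. The sketch below is illustrative — the Day 0 date is an assumption — and encodes the preceding-Friday rule for reminder letters that would otherwise be mailed on a weekend.

    from datetime import date, timedelta

    def reminder_date(day0: date) -> date:
        """Reminder letter goes out on Day 17 (14 days after the Day 3
        questionnaire); if that falls on a weekend, move it to the
        preceding Friday."""
        target = day0 + timedelta(days=17)
        while target.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
            target -= timedelta(days=1)
        return target

    def protocol_schedule(day0: date) -> dict:
        return {
            "prenotification letter": day0,
            "phone center opens": day0 + timedelta(days=1),
            "first questionnaire": day0 + timedelta(days=3),
            "reminder letter": reminder_date(day0),
            "second questionnaire": day0 + timedelta(days=31),
            "phone follow-up begins": day0 + timedelta(days=52),
            "data collection ends": day0 + timedelta(days=71),
        }

    # Example with an assumed Day 0 of Monday, February 20, 2017.
    for task, when in protocol_schedule(date(2017, 2, 20)).items():
        print(f"{task}: {when:%a %Y-%m-%d}")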



All survey estimates generated from the QHP Enrollee Survey are weighted to adjust for the unequal probability of selection across sample units. Weights are generated for each case by multiplying the inverse of the enrollee’s probability of selection by the inverse of the enrollee’s probability of response.
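Expressed compactly — a sketch of standard design weighting, with \pi_i denoting enrollee i's probability of selection and \hat{\varphi}_i the enrollee's estimated probability of response — each respondent's weight is:

    w_i = \frac{1}{\pi_i \, \hat{\varphi}_i}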

Additionally, case-mix adjustment is used when comparing sample units to the overall national benchmark scores. Case-mix adjustment accounts for the demographics of a particular sample unit. Case-mix variables include education, age, self-reported health status, mental health rating, receiving help completing the survey, having a chronic condition, the language in which the survey is completed, and survey mode. Additionally, the four-item composite assessing experience with cost (Q54, Q55, Q56, Q57) will be reported in the quality improvement reports with the list of adjusters above and also with metal level as an additional case-mix adjuster. This composite will not be publicly reported through the Quality Rating System. CMS will continue to monitor the appropriateness of metal level as a case-mix adjuster.
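Case-mix adjustment of this kind is typically implemented as a regression-based correction; the following is a sketch of that general form, not the exact QRS specification. Each response is adjusted by the estimated effects of the case-mix variables, centered at their national means:

    \tilde{y}_{ij} = y_{ij} - \sum_{k} \hat{\beta}_k \left( x_{ijk} - \bar{x}_k \right)

where y_{ij} is the response of enrollee j in sample unit i, x_{ijk} is that enrollee's value on case-mix variable k, \bar{x}_k is the national mean of variable k, and \hat{\beta}_k is the coefficient on variable k from a pooled regression of the outcome on the case-mix variables. The unit's adjusted score is then the weighted mean of the adjusted responses \tilde{y}_{ij}.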

3. Methods to Maximize Response Rates and Address Non-Response Bias

Every effort will be made to maximize the response rate while retaining the voluntary nature of the effort. The mixed-mode methodology of the QHP Enrollee Survey provides sampled individuals with multiple methods for completing the survey across a 71-day period. This methodology has been developed using best practices within the survey research field, including the Tailored Design Method.2

3.1 Evaluating Non-Response Bias

The 2014 Psychometric Test of the QHP Enrollee Survey achieved a 37.2 percent response rate, the 2015 Beta Test achieved a response rate of 30.9 percent, and the 2016 survey achieved a response rate of 30.0 percent. Response rates for the QHP Enrollee Survey are calculated using the Response Rate 3 (RR3) formula established by the American Association for Public Opinion Research (AAPOR). A response is counted as a completed interview if the respondent completes 50 percent of the questions that are applicable to all respondents, excluding the questions included in the “About You” section of the survey.
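For reference, AAPOR defines Response Rate 3 as:

    \text{RR3} = \frac{I}{(I + P) + (R + NC + O) + e\,(UH + UO)}

where I is the number of complete interviews, P partial interviews, R refusals and break-offs, NC non-contacts, O other nonrespondents, UH and UO cases of unknown eligibility, and e the estimated proportion of unknown-eligibility cases that are in fact eligible.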

Given that CMS expects that the response rate will be below 80 percent, CMS plans to conduct a nonresponse bias analysis to determine whether nonrespondents systematically differ from respondents based on their experience with their health plan. In the nonresponse bias analysis of the 2016 QHP Enrollee Survey, CMS found that older enrollees (those over the age of 55) continued to be significantly more likely to respond compared to younger enrollees. This pattern was observed in the 2014 Psychometric Test as well as the 2015 Beta Test. Additionally, in the 2016 survey administration there was a 2 percentage point increase in enrollees refusing to complete the survey compared with the 2015 survey administration. CMS will closely monitor the refusal rate in the 2017 QHP Enrollee Survey to determine whether this higher level of refusals continues and what actions may need to be taken to decrease refusals. More detailed findings from the nonresponse bias evaluation are included in Appendix A of this document. CMS intends to summarize this information in a consumer-friendly way once national public reporting begins during the 2018 open enrollment period.

Thus far, the response rates discussed have been at the unit level, where respondents either completed or did not complete the survey. There is also item-level nonresponse, where a respondent answers some, but not all, of the questions they are eligible to answer in the survey. Although highly unlikely, if the item response rate is less than 70 percent for any survey question, CMS will conduct an item nonresponse analysis similar to that discussed above for unit nonresponse, as required by Guideline 3.2.10 of the Office of Management and Budget’s Standards and Guidelines for Statistical Surveys.
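A minimal sketch of that item-level check, using illustrative column names rather than the actual survey layout:

    import pandas as pd

    # Illustrative respondent-level data: NaN means the item was skipped.
    # Column names are assumptions for this sketch, not the survey's layout.
    responses = pd.DataFrame({
        "q1_rating_of_plan": [9, 8, None, 10, 7],
        "q2_got_care_quickly": [1, None, None, 1, 0],
    })

    # Item response rate among respondents eligible to answer each question
    # (eligibility screening is omitted here for brevity).
    item_rates = responses.notna().mean()

    # Flag items below the 70 percent threshold in OMB Guideline 3.2.10
    # for a follow-up item nonresponse bias analysis.
    flagged = item_rates[item_rates < 0.70]
    print(flagged)  # q2_got_care_quickly    0.6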

All reporting websites under CMS’ control will provide Marketplace consumers with the overall response rate and the minimum and maximum response rates obtained by reporting units nationwide. This information will also include a statement of findings from the nonresponse bias analysis and CMS’ assessment of the potential implications of those findings for consumers’ use of the response rates in choosing a QHP. Before posting results publicly, CMS will report back to OMB on how it intends to communicate these concepts to consumers within the context of the Quality Rating System (QRS).



4. Tests of Procedures

The QHP Enrollee Survey uses the CAHPS® Health Plan 5.0 Survey, which was developed and is maintained by the Agency for Healthcare Research and Quality (AHRQ), as its core. Additional items included in the survey are drawn from other CAHPS® surveys and supplemental item sets. These items have undergone extensive testing and practical use by AHRQ, CMS, NCQA, and other users since they were first developed nearly 20 years ago. In 2014, CMS conducted a Psychometric Test to evaluate the psychometric properties of the QHP Enrollee Survey in the Marketplace population. This test resulted in numerous questions being dropped from, or revised in, the questionnaire for the 2015 Beta Test. More specifically, in 2014, CMS conducted the Psychometric Test of the QHP Enrollee Survey with 30 sampling units, defined by state, issuer, and product type (i.e., HMO, PPO, POS, or EPO). The sample unit definition matches the reporting unit definition CMS developed for the Quality Rating System (QRS). Because the Psychometric Test took place in the second half of 2014, the results were not available in time to influence the content of the questionnaire for the 2015 Beta Test, which had to be finalized for distribution to the Beta Test survey vendors by November 2014. As a result, CMS, in consultation with OMB, reduced the Beta Test questionnaire to include only the survey items that were part of the CAHPS® Health Plan 5.0 core items and the items that were needed for the QRS. Thus, the Beta Test questionnaire contained 31 fewer items than the Psychometric Test questionnaire.

The 2015 Beta Test was mainly intended to test the survey vendor system but also provided additional information about the questionnaire items and data collection procedures. As a result of the Psychometric and Beta Tests, CMS identified changes to the questionnaire items, the data collection procedures, and the sampling specifications, for which CMS obtained approval for the 2016 implementation of the survey. When the results of the Psychometric Test became available in early 2015, CMS determined that nine of the questions that had been removed for the Beta Test were vital for understanding enrollee experiences with their QHPs. These questions were approved to be returned to the questionnaire for the 2016 National Implementation. The restored items address enrollees’ experiences with out-of-pocket costs for covered services, health insurance literacy, health insurance coverage during the previous year, and whether respondents would recommend their QHPs to their friends and family. For the 2017 implementation and beyond, CMS has added six disability status items to address the requirements of the section 4302 data collection standards of the ACA and removed the question assessing health insurance coverage during the previous year. The disability status items were tested in the 2014 Psychometric Test and have been extensively tested by the U.S. Census Bureau for administration in the American Community Survey.

In consultation with the project’s Technical Expert Panel (TEP), CMS has continued to review the survey to assess the impact of questionnaire length and the possibility of reducing response burden. As part of this analysis, CMS plans to examine 2015 and 2016 data on partial completes to better understand where in the survey respondents broke off, and to determine whether there is a particularly problematic question or a length of time at which respondent fatigue sets in. CMS can also model the propensity for breakoffs to investigate whether those cases have unique characteristics compared to respondents who completed the survey, and can stratify the analysis by survey mode to see whether results differ for telephone or web respondents. Breakoffs are not a common occurrence: less than 1 percent of the total sample in the 2016 QHP Enrollee Survey was partially completed. CMS previously analyzed the 2014 Psychometric Test data, which was the longest version of the survey at 107 questions, and did not find evidence of any particularly problematic questions or time points in the survey that produced a large number of breakoffs.
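A minimal sketch of that breakoff analysis, using illustrative field names (the actual data layout may differ):

    import pandas as pd

    # Illustrative partial completes: the last item each respondent answered
    # before breaking off. Field names are assumptions for this sketch.
    partials = pd.DataFrame({
        "mode": ["web", "web", "phone", "web", "mail"],
        "last_item_answered": [12, 12, 40, 12, 25],
    })

    # A spike at a single item number would point to a problematic question;
    # breakoffs spread evenly across items suggest fatigue instead.
    print(partials["last_item_answered"].value_counts())

    # Stratify by mode to see whether web and telephone breakoffs differ.
    print(partials.groupby("mode")["last_item_answered"].median())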

In terms of lessons learned from 2016 that could inform survey revisions, analyses of the 2016 data do not suggest that particular questions need to be dropped. However, there are questions or composites with low item screen-in rates (such as the QRS Access to Care measure) that CMS may explore changing, and CMS will address whether potential changes affect the QRS methodology and public reporting. CMS will continue to examine methods for reducing the current QHP Enrollee Survey questionnaire length and will provide opportunities for public comment before any such changes are implemented.

5. Statistical Consultants

This sampling and statistical plan was prepared and reviewed by staff of CMS and of the American Institutes for Research (AIR). The primary statistical design was provided by Chris Evensen, MS, of AIR at (919) 918-2310; Lee Hargraves, PhD, of AIR at (781) 373-7031; Michael P. Cohen, PhD, of AIR at (202) 403-6453; Steven Garfinkel, PhD, of AIR at (919) 918-2306; and HarmoniJoie Noel, PhD, of AIR at (202) 403-5779.

Appendix A: Detailed Findings from Nonresponse Bias Analysis of the 2016 QHP Enrollee Survey

Unit nonresponse, typically referred to simply as nonresponse, occurs when a sampled individual fails to complete the survey. Nonresponse alone is not problematic, but if the topic being measured in the survey is directly related to the reason for nonresponse, it can result in biased survey estimates. For example, if individuals who had poor experiences with their QHPs were less likely to complete the QHP Enrollee Survey, then the survey results may overestimate the percentage of QHP enrollees who had a positive experience. The mean response rate across all reporting units that participated in the 2016 QHP Enrollee Survey was 28.8 percent, with a minimum of 10.6 percent and a maximum of 49.4 percent. This variation is consistent with the variation in response rates across reporting units that CMS saw in the 2015 QHP Enrollee Survey.

Given the potential detrimental impact that nonresponse bias can have on survey estimates, the Office of Management and Budget (OMB) requires that all federal surveys that achieve a response rate below 80 percent perform a nonresponse bias analysis (Office of Management and Budget, 2006). This nonresponse bias analysis utilized information that QHP issuers included on the sample frame about all individuals, such as age, sex, and Census division.

First, the research team examined cross-tabulated frequencies of the frame variables between respondents and nonrespondents to determine if significant differences existed. The results of this analysis are shown in Exhibit B2.

Exhibit B2. Comparison of Selected Demographics between Sample Frame and Survey Respondents

Demographic                                            Sample Frame (wtd %)   Respondents (wtd %)   Difference

Sex
  Male                                                         45.4                   41.5              -3.9
  Female                                                       54.7                   58.5              +3.9

Age
  18-24                                                         9.4                    4.5              -4.9
  25-34                                                        17.7                   11.0              -6.7
  35-44                                                        16.9                   11.9              -5.0
  45-54                                                        24.0                   23.7              -0.3
  55-64                                                        31.0                   47.9             +16.9
  65 or older                                                   1.1                    1.1                 0

Metal Level
  Catastrophic                                                  0.5                    0.3              -0.2
  Bronze                                                       17.6                   17.2              -0.4
  Silver                                                       68.9                   69.7              +0.8
  Gold                                                          8.9                    8.9                 0
  Platinum                                                      3.6                    3.4              -0.2
  Missing                                                       0.4                    0.4                 0

Census Division
  East North Central (IL, IN, MI, OH, WI)                      11.2                   12.0              +0.8
  East South Central (AL, KY, MS, TN)                           4.0                    4.5              +0.5
  Middle Atlantic (NJ, NY, PA)                                  9.1                    8.5              -0.6
  Mountain (AZ, CO, ID, MT, NV, NM, UT, WY)                     4.4                    5.3              +0.9
  New England (CT, ME, MA, NH, RI, VT)                          6.5                    7.6              +1.1
  Pacific (AK, CA, HI, OR, WA)                                 18.3                   18.8              +0.5
  South Atlantic (DE, DC, FL, GA, MD, NC, SC, VA, WV)          27.7                   26.4              -1.3
  West North Central (IA, KS, MN, MO, NE, ND, SD)               4.3                    5.9              +1.6
  West South Central (AR, LA, OK, TX)                          14.6                   11.1              -3.5

Note: Percentages are weighted; Difference = Respondents minus Sample Frame.

In the 2016 QHP Enrollee Survey, the variable with the largest differences between the sample frame and respondents was enrollee age. Such differences are not uncommon in surveys. As was the case in previous administrations, younger enrollees (those under the age of 45) were less likely to respond, which resulted in overrepresentation of enrollees between the ages of 55 and 64, who were more likely to respond to the survey.

Given that the bivariate analysis indicated some significant differences in response by consumer characteristics, AIR used multivariable logistic regression to determine which consumer characteristics were associated with returning the survey and to estimate the direction and size of their effects. The model estimated the propensity to respond (the dependent variable) based on these characteristics: a value of 1 for the dependent variable indicated that the sampled consumer was a respondent and a value of 0 indicated a nonrespondent.

Exhibit B3 shows the odds ratio (OR) for each variable’s effect on the propensity to respond, along with the 95% confidence interval associated with each estimate. Estimates above 1.0 indicate that the variable was associated with an increased propensity to respond in comparison to the reference group, while estimates below 1.0 indicate that the variable was associated with a lower propensity to respond in comparison to the reference group. For example, male enrollees were slightly less likely to return the survey compared to female enrollees (OR = 0.829). This analysis also shows that older enrollees were more likely to respond; in most surveys, older people and women are more likely to respond.
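A minimal sketch of this kind of response-propensity model — illustrative data and variable names, not the actual sample frame — using the same reference groups as Exhibit B3:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Illustrative frame: one row per sampled enrollee; responded = 1 if the
    # enrollee returned the survey. Variable names are assumptions.
    rng = np.random.default_rng(0)
    n = 5000
    frame = pd.DataFrame({
        "responded": rng.integers(0, 2, n),
        "sex": rng.choice(["Female", "Male"], n),
        "age_group": rng.choice(["18-24", "25-34", "35-44", "45-54", "55-64"], n),
        "metal_level": rng.choice(["Bronze", "Silver", "Gold"], n),
    })

    # Logistic regression of response propensity on frame characteristics,
    # with the same reference groups used in Exhibit B3.
    model = smf.logit(
        "responded ~ C(sex, Treatment(reference='Female'))"
        " + C(age_group, Treatment(reference='45-54'))"
        " + C(metal_level, Treatment(reference='Silver'))",
        data=frame,
    ).fit(disp=False)

    # Exponentiated coefficients are odds ratios; values above 1.0 mean a
    # higher propensity to respond than the reference group.
    print(np.exp(model.params))
    print(np.exp(model.conf_int()))  # 95% CIs on the odds-ratio scale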

Exhibit B3. Odds Ratios from Variables included in Logistic Regression Modeling Survey Response

Demographic                                                  Point Estimate   Lower 95% CI   Upper 95% CI

Sex (Ref: Female)
  Male*                                                           0.829          0.825          0.832

Age (Ref: 45-54)
  18-24*                                                          0.502          0.497          0.507
  25-34*                                                          0.583          0.579          0.588
  35-44*                                                          0.675          0.670          0.680
  55-64*                                                          1.709          1.699          1.718
  65 or older*                                                    1.200          1.175          1.225

Metal Level (Ref: Silver)
  Catastrophic*                                                   0.749          0.720          0.778
  Bronze*                                                         0.896          0.891          0.902
  Gold*                                                           0.910          0.903          0.917
  Platinum*                                                       0.931          0.920          0.942
  Missing*                                                        1.234          1.191          1.277

Census Division (Ref: West South Central [AR, LA, OK, TX])
  East North Central (IL, IN, MI, OH, WI)*                        0.917          0.909          0.926
  East South Central (AL, KY, MS, TN)*                            1.037          1.024          1.050
  Middle Atlantic (NJ, NY, PA)*                                   0.689          0.682          0.696
  Mountain (AZ, CO, ID, MT, NV, NM, UT, WY)*                      1.031          1.019          1.043
  New England (CT, ME, MA, NH, RI, VT)*                           0.890          0.880          0.899
  Pacific (AK, CA, HI, OR, WA)*                                   0.785          0.779          0.792
  South Atlantic (DE, DC, FL, GA, MD, NC, SC, VA, WV)*            0.811          0.805          0.818
  West North Central (IA, KS, MN, MO, NE, ND, SD)*                1.198          1.184          1.212

Nonworking Telephone Number (Ref: Working Telephone Number)
  Nonworking or Unavailable Telephone Number*                     0.132          0.131          0.133

Invalid Address (Ref: Valid Address)
  Invalid Address*                                                0.256          0.251          0.262

* = Statistically significant at p < .05

As shown in Exhibit B3, even after controlling for a variety of demographic characteristics, many of these variables remained statistically significantly associated with an individual’s propensity to respond to the survey.



1 Healthcare Effectiveness Data and Information Set (HEDIS®) is a registered trademark of the National Committee for Quality Assurance (NCQA).

2 Dillman, D., Smyth, J., & Christian, L. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). Wiley.
