Memo


Generic Clearance for Internet Nonprobability Panel Pretesting


OMB: 0607-0978



The Census Bureau plans to conduct new research under the Generic Clearance for Internet Nonprobability Panel Pretesting (OMB number 0607-0978). The Census Bureau is exploring the strategic re-use of administrative data from federal, state, and commercial providers. The use of administrative records data has proven to make operations more efficient, improve data quality, reduce data collection and processing costs, and reduce participant burden. Through record linkage and statistical matching, we enhance Census Bureau operations and extend demographic and socioeconomic research capabilities. Through conjoint studies, the Census Bureau is interested in learning about the public's views toward the statistical use of administrative records and the factors that influence opinion. Additionally, the Census Bureau wants to investigate which characteristics of survey invitations lead to increased survey response. We will use the conjoint studies to further these research efforts.


Conjoint studies provide quantitative measures called "utility scores" that quantify the value of features (levels of attributes). These scores are statistically derived from the choices that respondents make when asked to trade off features (often including price) against each other. Price is defined as what the respondent must pay to get valued features and is not necessarily monetary. In this case, respondents are asked to trade time against privacy (subjectively measured).


Under contract with the Census Bureau, QSA Research has constructed two conjoint studies to investigate these characteristics. These data will be collected with respondents from the Census Bureau’s Opt-In Panel.

Overview of Conjoint and Choice-Based Conjoint (CBC)


Conjoint measurement simulates real-world decision-making by asking respondents to make trade-offs between features (attribute levels) in order to get the most valued combination. In these exercises, features are measured “conjointly,” or in the context of other features. Respondents will be presented with a series of choices between “packages” or combinations of features. Each “package” is part of a balanced experimental design.


Conjoint measurement was developed during the 1960s and 1970s, when survey interviews were often carried out in person. Although very effective in achieving research goals, the original conjoint methodology required respondents to rank-order combinations of attribute levels and was thus often very burdensome to respondents. It was, however, an improvement over simple trade-off analysis, in which respondents had to consider every possible combination of attribute levels.


The original conjoint was based on a single orthogonal array, which is configured so that respondents rate combinations that, as a whole, show each attribute in the context of all other attributes at least once.


However, the ranking task is usually not practicable when administered online, partly because there may not be enough "real estate" on the screen to show all of the options in a way that is easily readable, particularly if the respondent is using a smartphone or smaller tablet. In addition, the intrinsically difficult task of ranking has been largely abandoned in favor of rating, which forfeits one of the key advantages of conjoint: asking the respondent to make the kind of choices he or she makes in real decision-making situations.


Choice-Based Conjoint (CBC), as used in these two exercises, presents the respondent with a series of 12 separate choices (each shown on its own screen) between two alternative full-profile arrays (packages) of attributes, a format that is both user-friendly and faithful to real-world decision-making.


The CBC software that we will use combines both measurement and analysis into a single software platform.1

Two Conjoint Exercises

Conjoint Study #1


The primary objective of this exercise is to understand public views about the statistical use of administrative data as a substitute for survey data. The design of the study (below) shows the attributes and attribute levels, the raw material that will be used to create the survey questions.



Attribute | Level 1 | Level 2 | Level 3
1. Time saved by using outside data | 10 Minutes | 30 Minutes | One hour
2. What outside data we use | Number of Household Members | Dates of Birth for all Household Members | Income for all Household Members
3. Outside data source | Federal Government Agencies | State Agencies like the Dept. of Motor Vehicles | Commercial Firms



The survey questions are randomly generated by a computer program to create an experimental design, so that overall each attribute level is measured in the context of every other attribute level an equal number of times. With CBC the array or package of attributes shown to each respondent varies. In this case, there are ten separate arrays. This allows us to gain more reliable measures without undue respondent burden.
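As a rough illustration of how such randomized choice tasks can be generated (attribute names and level labels here are paraphrased from the design table; the study's actual software, Sawtooth's CBC, uses its own balanced-overlap algorithm rather than this simple sketch):

```python
import random

# Hypothetical attribute/level definitions mirroring the Study #1 design.
ATTRIBUTES = {
    "time_saved": ["10 minutes", "30 minutes", "One hour"],
    "data_used": ["Household count", "Dates of birth", "Income"],
    "data_source": ["Federal agencies", "State agencies", "Commercial firms"],
}

def make_version(n_tasks=12, seed=0):
    """Generate one questionnaire version of n_tasks choice tasks,
    each pairing two full-profile packages that differ in at least one level."""
    rng = random.Random(seed)
    tasks = []
    for _ in range(n_tasks):
        a = {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}
        b = {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}
        while b == a:  # never show two identical packages in one task
            b = {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}
        tasks.append((a, b))
    return tasks

# Ten separate arrays (versions), as in the study design.
versions = [make_version(seed=v) for v in range(10)]
```

Randomizing across versions while keeping each version internally varied is what lets level frequencies balance out in aggregate without any one respondent facing an excessive number of tasks.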



For study #1, each email address in the sample will receive a maximum of three notification emails:

  • an initial email on a Monday,

  • a reminder email on the following Thursday (if they have not yet clicked on the link to the survey), and

  • a final reminder email on the following Monday with the survey closing the following Friday.


Copies of these emails are included in Attachment 2.


Conjoint Study #2

The objective of Conjoint Exercise #2 is to investigate the probable impact of different characteristics of survey notification on survey response. The design of the study is shown below.



Attribute | Level 1 | Level 2 | Level 3
1. Time saved by using outside data | 10 Minutes | 30 Minutes | One hour
2. What outside data we use | Number of Household Members | Dates of Birth for all Household Members | Income for all Household Members
3. Outside data source | Federal Government Agencies | State Agencies like the Dept. of Motor Vehicles | Commercial Firms


Conjoint Exercise #2 will utilize the same procedures as Conjoint Exercise #1. Ten different arrays will be randomly generated, and each respondent will make 12 choices between two bundles of attribute levels.


Attachment 3 describes the questions and set-up for the conjoint experiment #2.


For study #2, each email address in the sample will receive the same content, number, and schedule of emails as in study #1.

Staff from the Center for Survey Measurement (CSM) will sample 6,000 email addresses from our opt-in panel for each study, for a total sample of 12,000. Consistent with recent studies, we expect a 10% response rate, for a goal of 600 completes in each study; according to experts in this method, this is the number needed for the conjoint analysis. CSM staff will send the emails through GovDelivery. QSA will host the survey through Sawtooth Software. QSA Research will analyze the non-identifiable data and provide analysis back to the Census Bureau.

Analysis of Conjoint Exercises #1 and #2


Conjoint analysis uses the pattern of choices that respondents make to calculate utility scores (measures of the value of each feature or attribute level for each respondent) and overall attribute importance scores for each respondent. The method also allows the researcher to evaluate any combination of attribute levels, regardless of whether it was ever presented to respondents, as well as the effect of changing the level of any attribute.


Utility scores are partial logistic regression slope-function coefficients. Utilities are scaled to sum to zero within each attribute. Hierarchical Bayesian analysis is utilized to calculate the utility values at the individual level, which was not possible for the predecessor method, Discrete Choice Analysis. Having individual-level utility scores makes it possible to analyze the data in the same way one might analyze percentages or means.
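The zero-centering convention can be illustrated with a small sketch (the raw coefficient values below are hypothetical, not estimates from these studies):

```python
# Hypothetical raw part-worth estimates for one attribute's three levels.
raw = {"10 minutes": 1.2, "30 minutes": 0.4, "One hour": -0.1}

# Zero-center within the attribute so the utilities sum to 0,
# making levels directly comparable within (but not across) attributes.
mean = sum(raw.values()) / len(raw)
utilities = {level: round(u - mean, 3) for level, u in raw.items()}
print(utilities)  # {'10 minutes': 0.7, '30 minutes': -0.1, 'One hour': -0.6}
```

With this scaling, a positive utility means a level is preferred relative to the attribute's average, and a negative utility means it is dispreferred.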


Conjoint analysis also produces a measure of the overall importance of each attribute. These importance scores are calculated from the range of utility scores across the levels of each attribute and are scaled so that the importances of all attributes in the exercise sum to 100% for each respondent.


Since CBC Analysis generates both utility scores for attribute levels and overall attribute importance scores at the individual level, we can analyze them in a number of ways. For this study, we plan to analyze utilities and attribute importance by different demographic variables, including:


  1. Gender

  2. Age

  3. Hispanic or Latino origin

  4. Race

  5. School or college attendance within the last 3 months

  6. Whether the respondent worked for pay at a job (or business) last week and where they worked

  7. Occupation


We will also provide a simulation, which evaluates different combinations of attributes and shows the effects of varying one attribute. For example, if we took the combination of attribute levels with the highest overall (summed) score, we could then assess the impact of changing one attribute level at a time. The results of the simulation are expressed in terms of "Share of Preference," that is, the percentage of respondents who would prefer a given combination of attribute levels. This provides very clear and intuitive results: for example, one might lose 15% of Share of Preference by offering level 2 rather than level 1 of a particular attribute.
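A minimal sketch of a logit-style Share of Preference calculation follows (utilities and product definitions are hypothetical; Sawtooth's simulator offers several share models, of which this exponentiate-and-normalize form is one common variant):

```python
import math

def share_of_preference(utilities, products):
    """Sum each product's level utilities, exponentiate, and normalize
    so the shares across competing products total 100%."""
    totals = {name: sum(utilities[attr][lvl] for attr, lvl in levels.items())
              for name, levels in products.items()}
    denom = sum(math.exp(t) for t in totals.values())
    return {name: 100 * math.exp(t) / denom for name, t in totals.items()}

utilities = {  # hypothetical per-attribute utilities
    "time_saved": {"10 min": 0.7, "1 hour": -0.6},
    "data_source": {"federal": 0.5, "commercial": -0.5},
}
shares = share_of_preference(utilities, {
    "A": {"time_saved": "10 min", "data_source": "federal"},
    "B": {"time_saved": "1 hour", "data_source": "commercial"},
})
```

Swapping one level in package A and re-running the function shows directly how many share points that single change costs or gains, which is the "varying one attribute at a time" analysis described above.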



We estimate that users will spend approximately 5 minutes reading emails (if every person sampled reads the emails, it will be 5 minutes times 12,000 emails, totaling 1000 hours) and each respondent will spend approximately 10 minutes on average completing the survey (10 minutes times 1200 respondents totals 200 hours). Thus, the total estimated respondent burden for 1200 completes for this study is approximately 1200 hours.
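The burden arithmetic above can be checked directly (figures are taken from the memo itself):

```python
# Respondent-burden estimate from the memo, restated as a quick check.
emails_sampled = 12_000   # 6,000 sampled addresses per study, two studies
email_minutes = 5         # estimated reading time per sampled address
completes = 1_200         # 600 completes per study at a 10% response rate
survey_minutes = 10       # average survey completion time

email_hours = emails_sampled * email_minutes / 60    # 1000.0 hours
survey_hours = completes * survey_minutes / 60       # 200.0 hours
total_hours = email_hours + survey_hours             # 1200.0 hours
```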


The contact person for questions regarding data collection and statistical aspects of the design of this research is listed below:

Jennifer Childs

Center for Survey Measurement

U.S. Census Bureau

Washington, D.C. 20233

(304) 728-4932

[email protected]


1 We will use Sawtooth Software’s Choice-Based Conjoint program for these studies. Founded by Rich Johnson in 1983, Sawtooth is a pioneer in software for Conjoint analysis and a specialist in the technique. According to the American Marketing Association, Sawtooth software is the fourth most-used software among marketing researchers (after SPSS, Excel, and SAS, which offer many other types of programs).

