Study/Recruitment Plan


Generic Clearance for Questionnaire Pretesting Research

OMB: 0607-0725



May 2020

Cognitive Testing for the 2022 ACS Content Test: Recruitment and Study Plans

Final Plans

Prepared for

U.S. Census Bureau

Center for Behavioral Science Methods

Suitland, MD

Prepared by

RTI International

3040 E. Cornwallis Road

Research Triangle Park, NC 27709

RTI Project Number 0217376.001


_________________________________
RTI International is a registered trademark and a trade name of Research Triangle Institute.


Figures

1‑1. Advertisement Template for In-Person Interview, Print, and Online Post (Specific)

1‑2. Example Recruitment Tracking Sheet

1‑3. Example Recruitment Record Summary

2‑1. Minimum Number of Interviews per Language, Mode, Question Group, and Question Version by Round

2‑2. Test Schedule



Tables

1‑1. Specific Recruitment Strategies for Hard-to-Reach Participants

2‑1. Round One: Minimum Participants per Language, Mode, Question Group, and Question Version

2‑2. Round Two: Minimum Participants per Language, Mode, Question Group, and Question Version

2‑3. Round One: Minimum Participants per Sub-characteristic (English Interviews)

2‑4. Round Two: Minimum Participants per Sub-characteristic (English Interviews)

2‑5. Minimum Round Three English Interviews by Mode and GQ Type

2‑6. Round Three Minimum Participants in Each Group by Version and Hispanic Origin

2‑7. Research Questions by Question Topic

2‑8. Minimum Participants per Sub-characteristic by Round

2‑9. Minimum Round Three English Interviews by Mode and GQ Type

2‑10. Round One Minimum Participants in Each Group by Version and Hispanic Origin

2‑11. Specific Recruitment Strategies for Hard-to-Reach Participants



  1. Recruitment Plan

This recruitment plan is a living document describing the participant characteristics and recruitment procedures agreed upon by the Census Bureau and RTI International at the kickoff meeting.

1.1 Recruitment Strategy

To ensure efficient and successful recruitment of the targeted populations, the RTI/Research Support Services (RSS) team will use a combination of community-based and online recruitment, which has been effective over the past two decades of recruitment for cognitive testing at RTI. These approaches will be tailored to meet specific study needs.

The RTI/RSS team will use a dual approach consisting of online advertising and community-based strategies to meet recruitment targets successfully and on time while efficiently managing resources.

Online Recruitment. RTI/RSS will first place advertisements on www.craigslist.com, which has proven effective for recruiting in-person interview participants. We will also identify popular local and national online destinations frequented by affinity groups pertaining to the targeted participant characteristics. These online destinations may include listservs, forums, blogs, Facebook groups, and Reddit. For example, we will post targeted advertisements to relevant Facebook groups to recruit parents of homeschooled children, retired respondents, or individuals with learning disabilities. Online advertising will primarily be used for recruiting English-speaking interview participants. The RTI/RSS team will also seek to identify Spanish-oriented online destinations, like Latinx immigrant support groups on Facebook, and use online advertising as a supplemental strategy to recruit monolingual Spanish speakers.

Community-Based Recruitment. The RTI/RSS team will also conduct community-based recruiting to find participants. Community-based recruiting may be particularly effective for finding participants who meet specific criteria that are less likely to be found through online recruitment methods, such as living in a housing unit with solar panels, receiving Supplemental Nutrition Assistance Program (SNAP) benefits, or being enrolled in the Children’s Health Insurance Program (CHIP). The RTI/RSS team will advertise in community publications and post flyers at a variety of public locations, including community centers, libraries, ethnic grocery stores, local social service offices, college dormitories, and local churches. The RTI/RSS team will also work directly with staff at community-based organizations like adult literacy groups or churches that primarily serve Spanish speakers. Additional strategies include:

  • Posting flyers near electric vehicle charging stations or working with electric vehicle clubs to identify electric vehicle owners;

  • Working with septic system companies or local health departments to identify households with septic systems;

  • Sending letters to residents in specific apartment complexes to identify households composed of multiple families;

  • Snowball sampling, where participants would be asked to provide an email address or phone number for one or more persons they know who have the same sub-characteristics;

  • In-person intercept screening surveys; and

  • Radio ads.

Our dual approach has several benefits. Initial online advertisements allow us to build a large pool of prospective participants and prioritize those who meet multiple sub-characteristics. Furthermore, studies have found that participants recruited via online methods differ demographically from participants recruited via traditional methods.1,2,3,4 Using both approaches will help us obtain a more demographically diverse sample, which has been shown to identify additional errors.5 For example, when RTI conducted cognitive testing related to substance use dependence on the National Survey on Drug Use and Health (NSDUH), we recruited people who use substances by placing online advertisements on Craigslist and by working directly with substance use treatment centers. RTI found that participants recruited from substance use treatment centers were more familiar with the concepts of substance use and withdrawal and had a different interpretation of some questions than participants recruited from Craigslist, many of whom had never received treatment for their substance use. Recruiting participants from both sources was critical for fully evaluating potential sources of error in the questions.

Table 1‑1 presents recruitment criteria that may be particularly challenging and suggestions of tailored recruitment sites to maximize the reach to participants with rare or operationally hard-to-reach characteristics.

Table 1‑1. Specific Recruitment Strategies for Hard-to-Reach Participants

Topic

Tailored Recruitment Strategy

Households with multiple families

  • Post advertisements and in-person recruitment at apartment complexes known for accommodating multiple families.

  • Post flyers or intercept at local department of social services, WIC offices, food banks, and/or stores that accept SNAP.

Households with foster children

  • Post advertisement to online forums and social media sites frequented by affinity groups of foster parents.

  • Solicit assistance from local affinity groups and advocacy organizations of foster children.

Individuals in housing units with septic systems only

  • Solicit recruitment assistance from local septic tank companies (e.g., share study information with customers, learn which communities use septic systems and could be targeted with flyers/ads).

  • Work with local health departments to identify homes with septic systems.

Individuals in households in which someone received SNAP benefits in the prior year

  • Post flyers or intercept at local department of social services, WIC offices, food banks, and/or stores that accept SNAP.

Parents/legal guardians of homeschooled children who have not attended regular school

  • Post advertisement to online forums and social media sites frequented by affinity groups of homeschooling parents.

  • Solicit assistance from online programs commonly used by homeschoolers.

  • Solicit assistance from local affinity groups and advocacy organizations of homeschooling parents.

Individuals enrolled in Medicaid

Individuals with household members enrolled in Children’s Health Insurance Program (CHIP)

  • Post flyers or intercept at local department of social services or WIC offices.

Parents/legal guardians of children (including adult children) with disabilities

Individuals with disabilities, especially learning disability

  • Post advertisement to online forums and social media sites frequented by affinity groups of individuals with learning disabilities and parents of children with disabilities.

  • Solicit assistance from local affinity groups and advocacy organizations of individuals with learning disabilities and parents of children with disabilities.

Individuals who earned retirement income

  • Post advertisements to online forums and social media sites frequented by affinity groups of retirees.

  • Solicit assistance from local affinity groups and advocacy organizations of retirees.



1.2 Site Selection

For in-person cognitive testing with English speakers, the RTI/RSS team will recruit participants from the following locations based on the location of cognitive interviewers: Chicago, IL; Washington, DC; Research Triangle Park, NC; Saint Louis, MO; San Francisco, CA; and Portland, OR. The RTI/RSS team will conduct research before Round One recruitment to identify locations that are most likely to facilitate recruiting participants with the desired sub-characteristics; additional locations may be included for cognitive interviewing based on the findings of this research. This research and identification of possible additional interviewing locations is important because of potential geographic differences or limitations in specific sub-characteristics of interest, like solar panel use. For each group/topic, participants will be recruited across multiple locations. This approach will both maximize our ability to recruit participants who meet the sub-characteristics efficiently and increase the potential for each set of locations to identify regional differences in how people answer questions.

The bilingual cognitive interviewers are located in Chicago, IL, and Research Triangle Park, NC. For in-person interviews conducted with monolingual Spanish speakers in Round Two, the RTI/RSS team plans to recruit additional participants from metropolitan areas with large Hispanic populations like Miami, New York City, Los Angeles, Houston, and Phoenix. We will carefully select the cities in each round to ensure targets for Hispanic origin are met and to ensure diversity among the other characteristics of interest (e.g., solar panel use). For testing the Puerto Rico Community Survey (PRCS) in Round Three, bilingual interviewers will travel to Puerto Rico.

In all locations, the RTI/RSS team will provide suitable interviewing space that maintains privacy from non-authorized people and ensures the interview cannot be overheard. In locations with no RTI or RSS offices or when the office is not conveniently located for participants, interviews will be conducted in a private room in a public location, like a library. The RTI/RSS team may also rent cognitive interviewing facilities in locations like Puerto Rico.

1.3 Over-Recruitment Goals

Because of our experience in recruiting populations for qualitative studies (including cognitive interviews, focus groups, and usability testing), we know that we will have to recruit and screen more volunteers than the targeted number of participants. In particular, this will be the case as we attempt to recruit participants with hard-to-reach characteristics or participants who possess multiple characteristics. We may have to screen as many as 50% more participants than the desired number for some recruitment groups.

The RTI/RSS team will continue to recruit and screen participants until targeted characteristics have been met or until a sufficient reserve of eligible participants (i.e., five more than the number needed) has been screened. This allows us to quickly replace no-shows or cancellations without causing project delays.

1.4 Recruitment Training Plan

The RTI/RSS team will develop a recruitment training plan to ensure that the recruitment team has a complete understanding of the recruitment strategy, the eligibility screening process, the procedures for recording and reporting recruitment data, and the process for assigning cases to interviewers. To improve success in soliciting assistance from local affinity groups, advocacy organizations, or local offices of social services, the recruitment team will learn communication strategies for community-based recruitment. The recruitment team will also be trained to understand and follow the protocol for handling personally identifiable information (PII) as well as the information privacy and security procedures for recruitment. For example, per Title 13 requirements, respondent name, address, phone number, email address, demographic data, and other PII of all household members can only be recorded on paper or audiotape when conducting community-based recruitment. The PII from all respondents and all household members will be stored in the computer system hosted within RTI’s secured FIPS-Moderate network. RSS team members will be issued virtual desktop interface logins to access the secure network. In addition, the recruitment team will be instructed that respondent PII can only be accessed for scheduling interviews and that such information will not be stored or distributed with corresponding cognitive interview data.

Because the recruitment goals and approaches differ substantially by language, some aspects of recruiter training may be conducted separately for those recruiting monolingual Spanish speakers. Refresher trainings will be conducted as needed before additional rounds of testing, emphasizing changes in recruitment criteria and approach. Copies of all recruitment training materials will be provided to the Census Bureau for review.

1.5 Screening Process

Interested individuals will be screened for eligibility using a scripted series of questions. The RTI/RSS team will work with the Census Bureau to develop draft and final recruitment screening questionnaires that use terminology and wording that differ slightly from the American Community Survey (ACS) to avoid priming participants before pretesting.

The recruitment team leader will manage the recruitment process and coordinate assignments for cognitive interviewers based on the results of eligibility determination from the web-based screener, phone screening, or reports from community-based recruiting efforts.

1.5.1 Web Screening

The RTI/RSS team will develop a web-based eligibility screening instrument on Voxco, a survey data collection platform hosted within RTI’s FIPS-Moderate network. The web link of the screening instrument will be presented in online advertisements and physical flyers for recruitment.

The data from the web screener will be downloaded to a recruitment tracking sheet (also located within RTI’s FIPS-Moderate network) so that the lead recruiter can review eligibility criteria and assign cases to interviewers.

In the event that the web screening tool is not approved by the Office of Management and Budget (OMB) or by information technology (IT) security, all participants will be screened by phone.

1.5.2 Phone Screening and In-Person Screening

For certain characteristics where eligible individuals are less likely to use the internet (e.g., monolingual Spanish speakers, people over 65), advertisements will include a phone number instead of a web link. Responses to the phone screenings will be input immediately into the recruitment tracker (located within RTI’s FIPS-Moderate network), ensuring protection of PII and allowing phone recruiters to immediately determine eligibility and need using the tracking sheet’s algorithms.

In addition, in-person screening will be conducted for participants recruited by a recruiter in the field. For in-person recruitment, the required quota for targeted participant characteristics will be communicated and assigned to the recruitment staff before they travel to recruitment sites to conduct community-based recruitment.

The recruitment staff will schedule interview appointments with eligible individuals upon consent and report results to the team leader for tracking overall recruitment progress.


1.6 Recruitment Advertisement Text

Figure 1‑1. Advertisement Template for In-Person Interview, Print, and Online Post (Specific)

Paid Research Study

We are interviewing adults (age 18 and over) to provide input on a household community survey. We are recruiting participants who meet one or more of the following criteria:

  • Own or rent a home with solar panels

  • Primarily heat your home using butane or propane

  • Are the parent of a child (of any age) with disabilities

The interview takes about 60 minutes. It will be conducted at our office in [LOCATION]. Study participants will receive $40.

To see if you are eligible for the study, please complete a short questionnaire at www.XXXXXXXX.com

This study is being conducted by RTI International, a not-for-profit research organization.

The RTI/RSS team proposes a draft advertisement template for print and online posting, as shown in Figure 1‑1. For Call Order 1, RTI/RSS will first use a generic advertisement that does not include (or suggest) specific recruitment criteria. This will allow us to cast a wide net and meet a variety of recruitment characteristics, and it discourages interested individuals from misrepresenting themselves during screening to meet the perceived criteria. To meet targeted recruitment goals, RTI will then prepare advertisements that specifically mention certain criteria as needed. For printed flyers, the RTI/RSS team may employ a graphic designer to make the advertisements more visually appealing to increase the number of screened individuals.

1.7 Recruitment Records

A sophisticated recruitment tracking tool is critical for successfully monitoring data collection across large teams to ensure that targets are met quickly and efficiently while protecting PII. An example recruitment tracking sheet for use on this project, modeled on a recent study of similar complexity, is provided in Figures 1‑2 and 1‑3.

In one tab of the tracking sheet, a recruiter will paste in data from the web screener. The tracker will be hosted in RTI’s Enhanced Security Network, ensuring that PII obtained from the web screener is protected. The second tab in the tracker (Figure 1-2) will read in the raw data from the web screener and automatically determine which recruitment categories a participant qualifies for using preprogrammed formulas. A value of “X” in a cell indicates that a participant is eligible for that specific criterion. This allows the recruiter to easily see the categories for which a participant is eligible and prioritize participants who qualify for multiple categories or particularly difficult categories. To select a case, the recruiter changes the status value from “Eligible” to “Selected” and then indicates the Group (1, 2, or 3) and the Version (1 or 2), if needed. When a case is selected for a particular group, the Xs turn into 1s for the chosen group and into 0s for the groups not selected (because a participant can only be used for one group). The recruiter will then assign the case to an interviewer based on location.

Figure 1‑2. Example Recruitment Tracking Sheet


All interviewers will have access to the tracker to see their case assignments and update interview status. The interviewer will update the status to “Scheduled” when an interview is scheduled and “Completed” when it is completed. The tracker will also include additional categories like “Non-contact,” or “No-show.” Allowing interviewers to update status in real time ensures that the recruiter always has up-to-date information.

The case selections are then read into a third tab (Figure 1‑3), a Recruitment Record Summary, which provides a real-time comparison of recruitment status with goals. The RTI/RSS team can use this summary sheet to provide weekly (or more frequent) updates to the Census Bureau.

Figure 1‑3. Example Recruitment Record Summary


The RTI/RSS team will submit a draft of the summary recruitment record to the Census Bureau to ensure the report is formatted in a manner that is easy to follow. We will use separate tracking sheets for managing English and Spanish interviews.
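The tracker's category-assignment and selection behavior described above can be sketched in code. This is a hypothetical illustration of the logic only; the actual tracker uses spreadsheet formulas, and all field names and criteria below are invented for the example.

```python
# Hypothetical sketch of the tracking sheet's logic: an "X" marks each
# recruitment criterion a screened volunteer qualifies for, and selecting a
# case for a group converts the Xs to 1 (chosen group) / 0 (other groups),
# since a participant may only be used for one group.

def eligibility_flags(screener):
    """Map raw web-screener answers to an 'X' per qualifying criterion."""
    flags = {}
    if screener.get("solar_panels") == "yes":
        flags["solar"] = "X"
    if screener.get("heating_fuel") in ("butane", "propane"):
        flags["bottled_gas_heat"] = "X"
    if screener.get("child_with_disability") == "yes":
        flags["parent_of_child_with_disability"] = "X"
    return flags

def select_for_group(record, group, version=None):
    """Mark a case Selected and lock its group flags to the chosen group."""
    record["status"] = "Selected"
    record["version"] = version
    for g in (1, 2, 3):
        record[f"group_{g}"] = 1 if g == group else 0
    return record

case = {"status": "Eligible"}
case.update(eligibility_flags({"solar_panels": "yes", "heating_fuel": "oil"}))
select_for_group(case, group=1, version=2)
print(case["status"], case["group_1"], case["group_2"])  # Selected 1 0
```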

1.8 Schedule and Coordination

Recruiters will be responsible for managing recruitment and selecting participants to be interviewed. They will work collaboratively with the Census Bureau, which will have input on which participants are scheduled. The recruiters will assign cases to interviewers based on location and availability. When an interview is scheduled, interviewers will send participants an email (if their email address is known) confirming the details of the study. Interviewers will remind participants 3 days in advance and will call or text (if permission is granted) the day before the interview as a final reminder.

The RTI/RSS team will coordinate with the Census Bureau to develop an interview schedule that provides the facility name, address, phone number, points of contact, and other pertinent information to facilitate observation. The RTI/RSS team will develop a strategy to schedule sets of interviews over a short time to allow Census Bureau staff to observe these interviews in person. These sets of interviews will be scheduled at least 15 days in advance. The schedule will be updated as frequently as needed to reflect interviewing plans.

1.9 Training of Recruiters

Although active recruitment will not begin until OMB clearance is received, RTI/RSS will begin recruitment preparations in advance to facilitate immediate recruitment post-OMB approval. Training the recruiters is a key component of these preparations. The RTI/RSS team will develop a training agenda that ensures consistent understanding of the recruitment plan (e.g., recruitment methods, screening process, scheduling procedures). The training will also cover Title 13 and IT security provisions outlined by the Census Bureau.

1.10 Recruitment Methods Report

At the end of each round, the RTI/RSS team will prepare a recruitment methods summary report that documents the recruitment procedures and any challenges encountered. At a minimum, the report will include: (1) all recruitment advertisements; (2) the recruitment sources used (e.g., Craigslist), along with qualitative and quantitative assessments of how they performed (including the total number of screened, scheduled, completed, and no-show interviews by source); (3) local organizations contacted; (4) any email messages or phone scripts used to contact local organizations; (5) a description of experiences with interviewing facilities; and (6) differences in methods by location.
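The per-source counts for element (2) can be tallied directly from tracker rows. A minimal sketch, assuming illustrative source and status values rather than the project's actual tracker fields:

```python
from collections import Counter

# Illustrative tracker rows; real rows would come from the recruitment
# tracking sheet, and these sources/statuses are invented for the example.
rows = [
    {"source": "Craigslist", "status": "Completed"},
    {"source": "Craigslist", "status": "No-show"},
    {"source": "Facebook", "status": "Scheduled"},
    {"source": "Flyer", "status": "Completed"},
]

# Total screened per source (every tracked row was screened) and a
# source-by-status breakdown for the methods report.
screened = Counter(r["source"] for r in rows)
by_source_status = Counter((r["source"], r["status"]) for r in rows)

print(screened["Craigslist"])                       # 2
print(by_source_status[("Craigslist", "No-show")])  # 1
```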

1.11 Recruitment Challenges and Risk Mitigation Plan

The most critical recruitment challenges will be recruiting hard-to-reach participants, such as those with characteristics that are relatively rare in the population or those who may be reluctant to participate in federal studies (e.g., non-English speakers who are undocumented immigrants). As part of the risk mitigation plan, the RTI/RSS team will support the Census Bureau in preparing the OMB package. The main objective is to include in the OMB package a comprehensive recruitment strategy that covers every method that may be needed.

This strategy will enable the recruitment team to reorient its recruiting effort when a given strategy or recruitment site does not yield desirable outcomes during the approved timeline for participant recruitment and data collection. For example, we believe that we can recruit enough participants via online methods such as Craigslist based on past experience, but we will also enlist community-based recruitment efforts to facilitate recruiting certain populations.

Another challenge is the high rate of no-shows and cancellations often associated with qualitative research. We have found that repeated reminders, including text message reminders (if participants give permission), are particularly effective for preventing no-shows. The RTI/RSS team will continue advertising and screening until the number of eligible participants screened in a category is five more than the number needed, to allow for back-ups in the event of cancellations.

Another challenge is bots replying to advertisements on social media. Including a CAPTCHA in the web screener can help verify that a human is entering data. Other procedures include monitoring screener responses daily to identify unusual patterns and responses that are often suggestive of bots. In addition, humans may lie on the screener to try to qualify for the paid study. The RTI/RSS team will design the questions so that the “needed” answer is not obvious. For example, instead of asking participants if they receive SNAP benefits, the question might ask the respondent to select from a list of financial assistance they receive, including SNAP; Special Supplemental Nutrition Program for Women, Infants, and Children (WIC); Temporary Assistance for Needy Families (TANF); unemployment; Supplemental Security Income (SSI); or other sources of assistance.
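The indirect question design and basic bot screening described above might look like the following sketch. The thresholds, field names, and flagging rules are illustrative assumptions, not the actual screener logic:

```python
# The SNAP criterion is buried in a longer assistance checklist so the
# "needed" answer is not obvious, and submissions that look automated are
# flagged for daily manual review. Thresholds and field names are invented.

ASSISTANCE_OPTIONS = ["SNAP", "WIC", "TANF", "Unemployment", "SSI", "Other"]

def qualifies_snap(selected):
    """Eligibility check that never reveals SNAP is the target answer."""
    return "SNAP" in selected

def flag_suspicious(submission, seen_emails, min_seconds=30):
    """Return reasons a screener submission may be a bot or duplicate."""
    reasons = []
    if submission["seconds_to_complete"] < min_seconds:
        reasons.append("completed implausibly fast")
    if submission["email"] in seen_emails:
        reasons.append("duplicate contact info")
    return reasons

sub = {"email": "a@example.com", "seconds_to_complete": 12,
       "assistance": ["SNAP", "Other"]}
print(qualifies_snap(sub["assistance"]))     # True
print(flag_suspicious(sub, {"a@example.com"}))
```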

The draft screening questionnaire to be used in recruiting and selecting participants will be included with the interview protocols. After review and approval by the Census Bureau, the screening specifications will be added to the recruitment plan.



2. Study Plan

2.1 Test Objectives

The ACS is considering modifications to the content of the following questionnaire topics for people living in housing units: household roster, home heating fuel, solar power (new question), electric vehicles (new question), septic systems (new question), year built, SNAP, homeowner association and condominium fees, educational attainment, health insurance coverage, disability, commuting mode, and income and weeks worked.

The objective of this study is to pretest these new and revised questions to ensure that respondents can easily understand and answer them and that the questions are measuring the intended constructs. The results of the cognitive interviews will be used to determine the best version of each question to be evaluated in a split-sample field test.

Three rounds of interviews will be conducted. Within each round, interviews will be distributed across the following domains:

  • Language (English or Spanish),

  • Mode (paper, Computer Assisted Instrument [CAI] with showcards, or CAI without showcards), and

  • Test question version (Version 1 or Version 2).

More specifically, this task will involve

  • Pretesting question topics in the context of the ACS housing unit survey in two rounds:

  • In Round One, the self-administered questions shall be tested in English only and the interviewer-administered questions will be tested in English and stateside Spanish.

  • In Round Two, the questions will be tested in English and stateside Spanish in both self-administered and interviewer-administered modes.

  • Pretesting question topics in the context of the ACS Group Quarters (GQs) survey and the PRCS in one round:

  • In Round Three, pretest the questions in the context of the PRCS in Puerto Rico. The self-administered and interviewer-administered questions will be tested in Puerto Rican Spanish only.

  • In Round Three, pretest the questions in the context of the ACS GQs. The self-administered and interviewer-administered questions will be tested in English, stateside only.

Two versions of the questions will be tested in Round One and a single version in Rounds Two and Three. To keep the interview length reasonable for each participant, the questions will be organized into three groups, and participants will be assigned to receive only one group of questions. All three groups will be tested in all the Spanish interviews and in Rounds One and Two of the English interviews. In Round Three, the English interviews will test the “GQ Group” of questions. The question topics will be organized into groups as follows:

  • Group 1

  • Household Roster

  • Septic Systems

  • Home Heating Fuel

  • Solar Power

  • SNAP

  • Group 2

  • Educational Attainment

  • Health Insurance Coverage

  • Disability

  • Group 3

  • Electric Vehicles

  • Condominium or Homeowners Association (HOA) Fee (Round Two only)

  • Commuting Mode

  • Income and Weeks Worked

  • GQ Group

  • Educational Attainment

  • Health Insurance Coverage

  • Disability

  • Commuting Mode

  • Income and Weeks Worked

The condominium fee topic in Group 3 will only be tested in Round Two because it has already undergone an initial round of testing.

Figure 2‑1 shows the minimum number of participants projected per language, mode, question group, and questionnaire version by round. Table 2-1 shows the minimum number of participants by mode, question group, and question version for Round One English interviews, and Table 2-2 shows the same information for Round Two English interviews. If needed, RTI/RSS will increase the number of unique participants within an interviewing round to meet testing goals.

Figure 2‑1. Minimum Number of Participants per Language, Mode, Question Group, and Question Version by Round

Language  Mode                   Question Group  Version  Round 1  Round 2  Round 3  TOTAL
English   Paper                  Group 1         1        16       14       -        30
English   Paper                  Group 2         1        8        14       -        22
English   Paper                  Group 3         1        8        14       -        22
English   Paper                  GQ Group        1        -        -        16       16
English   Paper                  Group 1         2        16       -        -        16
English   Paper                  Group 2         2        8        -        -        8
English   Paper                  Group 3         2        8        -        -        8
English   Paper                  GQ Group        2        -        -        -        0
English   CAI with showcards     Group 1         1        12       15       -        27
English   CAI with showcards     Group 2         1        5        8        -        13
English   CAI with showcards     Group 3         1        5        8        -        13
English   CAI with showcards     GQ Group        1        -        -        8        8
English   CAI with showcards     Group 1         2        12       -        -        12
English   CAI with showcards     Group 2         2        5        -        -        5
English   CAI with showcards     Group 3         2        5        -        -        5
English   CAI with showcards     GQ Group        2        -        -        -        0
English   CAI without showcards  Group 1         1        -        -        -        0
English   CAI without showcards  Group 2         1        5        7        -        12
English   CAI without showcards  Group 3         1        5        7        -        12
English   CAI without showcards  GQ Group        1        -        -        8        8
English   CAI without showcards  Group 1         2        -        -        -        0
English   CAI without showcards  Group 2         2        5        -        -        5
English   CAI without showcards  Group 3         2        5        -        -        5
English   CAI without showcards  GQ Group        2        -        -        -        0
TOTAL: English                                            128      87       32       247
Spanish   Paper                  Group 1         1        -        10       10       20
Spanish   Paper                  Group 2         1        -        10       10       20
Spanish   Paper                  Group 3         1        -        10       10       20
Spanish   Paper                  Group 1         2        -        -        -        0
Spanish   Paper                  Group 2         2        -        -        -        0
Spanish   Paper                  Group 3         2        -        -        -        0
Spanish   CAI with showcards     Group 1         1        10       10       10       30
Spanish   CAI with showcards     Group 2         1        5        5        5        15
Spanish   CAI with showcards     Group 3         1        5        5        5        15
Spanish   CAI with showcards     Group 1         2        10       -        -        10
Spanish   CAI with showcards     Group 2         2        5        -        -        5
Spanish   CAI with showcards     Group 3         2        5        -        -        5
Spanish   CAI without showcards  Group 1         1        -        -        -        0
Spanish   CAI without showcards  Group 2         1        5        5        5        15
Spanish   CAI without showcards  Group 3         1        5        5        5        15
Spanish   CAI without showcards  Group 1         2        -        -        -        0
Spanish   CAI without showcards  Group 2         2        5        -        -        5
Spanish   CAI without showcards  Group 3         2        5        -        -        5
TOTAL: Spanish                                            60       60       60       180
GRAND TOTAL                                               188      147      92       427
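The cell minimums above sum to the round and grand totals. As an illustrative check only (hypothetical Python, not part of the study materials), the counts transcribed from the figure reproduce the margins:

```python
# Minimum interview counts transcribed from Figure 2-1.
# Each tuple: (Round 1, Round 2, Round 3) for one language/mode/group/version cell.
english = [
    (16, 14, 0), (8, 14, 0), (8, 14, 0), (0, 0, 16),   # Paper, Version 1
    (16, 0, 0), (8, 0, 0), (8, 0, 0), (0, 0, 0),       # Paper, Version 2
    (12, 15, 0), (5, 8, 0), (5, 8, 0), (0, 0, 8),      # CAI with showcards, V1
    (12, 0, 0), (5, 0, 0), (5, 0, 0), (0, 0, 0),       # CAI with showcards, V2
    (0, 0, 0), (5, 7, 0), (5, 7, 0), (0, 0, 8),        # CAI without showcards, V1
    (0, 0, 0), (5, 0, 0), (5, 0, 0), (0, 0, 0),        # CAI without showcards, V2
]
spanish = [
    (0, 10, 10), (0, 10, 10), (0, 10, 10),             # Paper, V1
    (0, 0, 0), (0, 0, 0), (0, 0, 0),                   # Paper, V2
    (10, 10, 10), (5, 5, 5), (5, 5, 5),                # CAI with showcards, V1
    (10, 0, 0), (5, 0, 0), (5, 0, 0),                  # CAI with showcards, V2
    (0, 0, 0), (5, 5, 5), (5, 5, 5),                   # CAI without showcards, V1
    (0, 0, 0), (5, 0, 0), (5, 0, 0),                   # CAI without showcards, V2
]

def round_totals(rows):
    """Sum the per-cell minimums column-wise, one total per round."""
    return tuple(sum(r[i] for r in rows) for i in range(3))

eng = round_totals(english)                      # (128, 87, 32)
spa = round_totals(spanish)                      # (60, 60, 60)
grand = tuple(e + s for e, s in zip(eng, spa))   # (188, 147, 92); sums to 427
```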

Table 2‑1. Round One: Minimum Number of Participants per Mode, Question Group, and Question Version – English Interviews


Group                      Paper Mode        CAI Mode          TOTAL
                           V1      V2        V1      V2
Group 1                    16      16        12      12        56
Group 2                    8       8         -       -         16
Group 2 with showcards     -       -         5       5         10
Group 2 without showcards  -       -         5       5         10
Group 3                    8       8         -       -         16
Group 3 with showcards     -       -         5       5         10
Group 3 without showcards  -       -         5       5         10
TOTAL                      32      32        32      32        128




Table 2‑2. Round Two: Minimum Number of Participants per Mode, Question Group, and Question Version – English Interviews


Group                      Paper Mode (V1)  CAI Mode (V1)  TOTAL
Group 1                    14               15             29
Group 2                    14               -              14
Group 2 with showcards     -                8              8
Group 2 without showcards  -                7              7
Group 3                    14               -              14
Group 3 with showcards     -                8              8
Group 3 without showcards  -                7              7
TOTAL                      42               45             87



Round One English. For Round One, the number of participants with each of the distinct sub-characteristics listed in Table 6 of the Call Order will range from four to eight. One exception is the Household Roster because of the large number of sub-characteristics (nine). To keep the costs for testing the Household Roster consistent with other questions, RTI/RSS recommends interviewing at least 28 people across the Household Roster sub-characteristics combined.

To achieve balance, the RTI/RSS team will assign half of the participants to Version 1 and half to Version 2, as shown in Table 2-3. Because a given participant may qualify for more than one criterion within a group, the participants in each sub-characteristic may not split exactly evenly between the two versions. Participants will be assigned to either the interviewer-administered or self-administered mode based on the mode in which they would most likely respond to the ACS, given their demographic characteristics.
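As an illustrative sketch only (hypothetical Python, not part of the study materials), an alternating assignment keeps the overall version split balanced even though overlapping sub-characteristics can leave any single sub-characteristic unevenly split:

```python
from itertools import cycle

def assign_versions(participant_ids):
    """Hypothetical sketch: alternate participants between the two
    questionnaire versions so the overall split stays within one of
    exactly half-and-half."""
    versions = cycle(["Version 1", "Version 2"])
    return {pid: version for pid, version in zip(participant_ids, versions)}

# Overall counts differ by at most one; because a participant may meet
# several sub-characteristics, each sub-characteristic can still end up
# unevenly split between versions.
assignments = assign_versions(["P01", "P02", "P03", "P04", "P05"])
```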

Some participants will meet more than one sub-characteristic within a group. Based on input from the Census Bureau, Table 2-3 shows that approximately 84 participants will be needed to achieve the recruitment targets for each of Groups 1 and 2, and approximately 76 participants will be needed to achieve the targets for Group 3. If needed, RTI/RSS will increase the number of unique participants within an interviewing round to meet testing goals.


Table 2‑3. Round One: Minimum Number of Participants per Sub-characteristic – English Interviews *

Characteristic / Sub-characteristic                            Total  Paper V1  Paper V2  CAI V1  CAI V2
GROUP 1                                                           84        21        21      21      21
Household Roster (all sub-characteristics combined)               28         7         7       7       7
  ▪ Multiple families cohabitating
  ▪ Related subfamilies
  ▪ Unrelated subfamilies/individuals
  ▪ Subfamilies with children, especially age 0 to 4
  ▪ Children in custody arrangements
  ▪ Foster children
  ▪ No one related to each other
  ▪ Active duty military (lower priority)
  ▪ Children who live away at college (lower priority)
Septic Systems
  ▪ Septic systems only                                            8         2         2       2       2
  ▪ Public sewer                                                   4         1         1       1       1
Home Heating Fuel
  ▪ Natural gas most                                               8         2         2       2       2
  ▪ Butane/propane most                                            8         2         2       2       2
  ▪ Other fuel most                                                4         1         1       1       1
Solar Power
  ▪ Solar panels                                                   8         2         2       2       2
  ▪ Without solar panels                                           4         1         1       1       1
SNAP
  ▪ Prior year SNAP benefits                                       8         2         2       2       2
  ▪ No prior year SNAP benefits                                    4         1         1       1       1
GROUP 2                                                           84        21        21      21      21
Educational Attainment **
  ▪ Parents of homeschooled children                               4         1         1       1       1
  ▪ Age 25+ with less than high school diploma or General
    Education Development (GED)                                    8         2         2       2       2
  ▪ Parents of children age 3 to 5                                 8         2         2       2       2
Health Insurance Coverage
  ▪ Age 65+                                                        8         2         2       2       2
  ▪ Enrolled in Medicaid                                           8         2         2       2       2
  ▪ Enrolled in CHIP                                               8         2         2       2       2
  ▪ Enrolled in state or federal marketplace                       8         2         2       2       2
Disability
  ▪ Parents of children with disabilities                          8         2         2       2       2
  ▪ Individuals with disabilities, especially learning
    disabilities                                                   8         2         2       2       2
  ▪ Non-native English speakers                                    8         2         2       2       2
  ▪ Age 50+                                                        8         2         2       2       2
GROUP 3                                                           76        19        19      19      19
Electric Vehicles
  ▪ Purchased a vehicle in the last 10 years                       4         1         1       1       1
  ▪ Own or lease a plug-in electric vehicle                        8         2         2       2       2
  ▪ Own or lease non–plug-in electric or hybrid vehicle            8         2         2       2       2
Condominium Fee
  ▪ Homeowners with Homeowners Association (HOA) fee               0         0         0       0       0
  ▪ Homeowners that live in a condominium                          0         0         0       0       0
  ▪ Homeowners that are part of voluntary neighborhood
    associations                                                   0         0         0       0       0
  ▪ Renters with HOA or condo fee                                  0         0         0       0       0
Commuting Mode **
  ▪ Use ride-share (e.g., Lyft/Uber) to get to work                8         2         2       2       2
  ▪ Alternate multi-passenger transportation (e.g., carpool,
    vanpool, slug line)                                            8         2         2       2       2
Income
  ▪ Irregular workers in the prior year                            8         2         2       2       2
  ▪ Regular workers in the prior year                              8         2         2       2       2
  ▪ Did not work in the prior year                                 8         2         2       2       2
  ▪ Earned retirement or rental income, or commission/bonus/
    tips in prior year                                             8         2         2       2       2
  ▪ Received SNAP or public assistance benefits in prior year      8         2         2       2       2
TOTALS                                                           244        61        61      61      61

* CAI interviews with and without showcards are combined in this table.

** These questions use a show card for personal visit interviews. Therefore, there are two variations of the question in the interviewer-administered mode that need to be tested.

Round Two English. Because only one version of the questions will be tested in Round Two (across both modes), comparisons by version are not needed. Based on input from the Census Bureau, Table 2-4 shows that approximately 54 participants will be needed to achieve the recruitment targets for Group 1, approximately 44 participants for Group 2, and approximately 56 participants for Group 3. If needed, RTI/RSS will increase the number of unique participants within an interviewing round to meet testing goals.


Table 2‑4. Round Two: Minimum Number of Participants per Sub-characteristic – English Interviews *

Characteristic / Sub-characteristic                            Total  Paper Mode (V1)  CAI Mode (V1)
GROUP 1                                                           54               27             27
Household Roster (all sub-characteristics combined)               18                9              9
  ▪ Multiple families cohabitating
  ▪ Related subfamilies
  ▪ Unrelated subfamilies/individuals
  ▪ Subfamilies with children, especially age 0 to 4
  ▪ Children in custody arrangements
  ▪ Foster children
  ▪ No one related to each other
  ▪ Active duty military (lower priority)
  ▪ Children who live away at college (lower priority)
Septic Systems
  ▪ Septic systems only                                            4                2              2
  ▪ Public sewer                                                   4                2              2
Home Heating Fuel
  ▪ Natural gas most                                               4                2              2
  ▪ Butane/propane most                                            4                2              2
  ▪ Other fuel most                                                4                2              2
Solar Power
  ▪ Solar panels                                                   4                2              2
  ▪ Without solar panels                                           4                2              2
SNAP
  ▪ Prior year SNAP benefits                                       4                2              2
  ▪ No prior year SNAP benefits                                    4                2              2
GROUP 2                                                           44               22             22
Educational Attainment **
  ▪ Parents of homeschooled children                               4                2              2
  ▪ Age 25+ with less than high school diploma or General
    Education Development (GED)                                    4                2              2
  ▪ Parents of children age 3 to 5                                 4                2              2
Health Insurance Coverage
  ▪ Age 65+                                                        4                2              2
  ▪ Enrolled in Medicaid                                           4                2              2
  ▪ Enrolled in CHIP                                               4                2              2
  ▪ Enrolled in state or federal marketplace                       4                2              2
Disability
  ▪ Parents of children with disabilities                          4                2              2
  ▪ Individuals with disabilities, especially learning
    disabilities                                                   4                2              2
  ▪ Non-native English speakers                                    4                2              2
  ▪ Age 50+                                                        4                2              2
GROUP 3                                                           56               28             28
Electric Vehicles
  ▪ Purchased a vehicle in the last 10 years                       4                2              2
  ▪ Own or lease a plug-in electric vehicle                        4                2              2
  ▪ Own or lease non–plug-in electric or hybrid vehicle            4                2              2
Condominium Fee
  ▪ Homeowners with Homeowners Association (HOA) fee               4                2              2
  ▪ Homeowners that live in a condominium                          4                2              2
  ▪ Homeowners that are part of voluntary neighborhood
    associations                                                   4                2              2
  ▪ Renters with HOA or condo fee                                  4                2              2
Commuting Mode **
  ▪ Use ride-share (e.g., Lyft/Uber) to get to work                4                2              2
  ▪ Alternate multi-passenger transportation (e.g., carpool,
    vanpool, slug line)                                            4                2              2
Income
  ▪ Irregular workers in the prior year                            4                2              2
  ▪ Regular workers in the prior year                              4                2              2
  ▪ Did not work in the prior year                                 4                2              2
  ▪ Earned retirement or rental income, or commission/bonus/
    tips in prior year                                             4                2              2
  ▪ Received SNAP or public assistance benefits in prior year      4                2              2
TOTALS                                                           154               77             77

* CAI interviews with and without showcards are combined in this table.

** These questions use a show card for personal visit interviews. Therefore, there are two variations of the question in the interviewer-administered mode that need to be tested.

We anticipate that some questions will perform better than others in Round One, or that participants with certain sub-characteristics will have fewer problems than other participants. As a result, we will work with the Census Bureau to distribute the number of participants across sub-characteristics based on the greatest need. For some sub-characteristics, no interviews may be conducted if the questions perform well in Round One; for others, more than four interviews may be conducted.

Round Three English. Round Three cognitive testing will be conducted with participants living in GQs. As such, individual sub-characteristics are less of a concern; instead, recruiting a variety of institutional (nursing homes, jails) and non-institutional (college and university dormitories, homeless shelters, group homes) GQs is needed to identify potential problems respondents may have understanding and completing the questions. The RTI/RSS team proposes conducting 32 cognitive interviews for Round Three, as shown in Table 2-5: eight participants in the paper mode and four in each of the CAI modes (with and without showcards) per GQ type. RTI will work with the Census Bureau to determine the most critical sub-characteristics to include in each group based on the prior rounds of testing. This section will be updated with more detail before Round Three.

Table 2‑5. Minimum Round Three English Interviews by Mode and GQ Type

Mode                   Institutional  Non-institutional  Total
Paper                  8              8                  16
CAI with showcards     4              4                  8
CAI without showcards  4              4                  8
Total                  16             16                 32
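The Round Three plan is simple arithmetic. As an illustrative sketch only (hypothetical Python, not part of the study materials), the per-cell allocation reproduces the table margins:

```python
# Round Three GQ interviews: minimum per mode, for each GQ type.
per_type = {"Paper": 8, "CAI with showcards": 4, "CAI without showcards": 4}
gq_types = ("Institutional", "Non-institutional")

# One cell per (mode, GQ type) combination.
plan = {(mode, gq): n for mode, n in per_type.items() for gq in gq_types}

mode_totals = {mode: n * len(gq_types) for mode, n in per_type.items()}
type_totals = {gq: sum(per_type.values()) for gq in gq_types}
grand_total = sum(plan.values())  # 16 per GQ type, 32 overall
```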



Spanish (Rounds 1 and 2). For Round One, RTI/RSS recommends cognitively testing each group of questions with 20 monolingual Spanish speakers age 18 or older, which allows each version of the questionnaire to be tested with 10 Spanish speakers in each group. Table 2-6 shows the proposed minimum number of participants in each group by version and Hispanic origin. No specific targets will be set for any of the other sub-characteristics, because doing so could make it difficult to adequately test the questions in Spanish. However, RTI will prioritize eligible screening respondents who meet various sub-characteristics to achieve a more diverse set of participants.

Table 2‑6. Round One Minimum Participants in Each Group by Version and Hispanic Origin


Hispanic Origin                      Group 1      Group 2      Group 3
                                     V1    V2    V1    V2    V1    V2    Total
Mexican                               3     3     3     3     3     3      18
Central American                      3     3     3     3     3     3      18
South American                        2     2     2     2     2     2      12
Puerto Rican, Cuban, or Dominican     2     2     2     2     2     2      12
Total per Group                      10    10    10    10    10    10      60



For Round Two, RTI/RSS also recommends conducting 20 interviews with Spanish speakers in each of the three question groups. The primary difference is that three questionnaire modes (paper, CAI with showcards, and CAI without showcards) will be tested instead of two versions of the questions. No specific targets will be set for any of the other sub-characteristics. However, RTI/RSS will strive to recruit a diverse set of participants.

Spanish (Round Three). For Round Three, which will be conducted in Puerto Rico, the RTI/RSS team recommends conducting 20 interviews in each of the three question groups (60 interviews total). No targets will be set by Hispanic origin because all participants will be recruited from Puerto Rico, but RTI/RSS will work with the Census Bureau to determine appropriate sub-characteristic targets for each group based on the results of Rounds One and Two.

    1. Test Assumptions

The research questions for each new or revised question are identified in Table 2‑7. For all topics, in addition to the questions listed in the table, we are also interested in differences in comprehension and reporting by subgroup, which version of the question works better overall and by subgroup type, and how well the Spanish translations work for monolingual Spanish speakers of different Hispanic origins.


Table 2‑7. Research Questions by Question Topic


Research Questions

Group 1


Household Roster

  • Which version helps respondents understand better that they should include unrelated people on the roster?

  • Do respondents notice and comprehend all the text about who to include or exclude outside of the main question stems? Are they utilizing this text when responding?

  • Which version of the paper form yielded the more accurate roster?

  • (For paper) Was one format preferred by respondents?

  • (For Computer-Assisted Interviewing [CAI] V1) What do the terms “short visit,” “short time,” and “overnight stay” mean to respondents?

  • (For CAI V2) What does the term “short time” mean to respondents?

  • Were respondents hesitant or unsure about including anyone that lives or stays with them?

    • If so, what are the relationships and/or living situations of those people?

    • What was the reason they were hesitant or unsure to include them?

    • Were they unsure how long someone had to live there to be included?

  • Do respondents have privacy concerns about reporting certain household members?

    • If so, what are the relationships and/or living situations of those people?

    • What wording increased or decreased their concerns?

  • (For CAI only) Did the respondents express annoyance or confusion about why we were asking the series of coverage questions?

  • If the respondent does not have a complex living situation now (they didn’t add or delete people via the coverage questions), have they ever had one (lived with someone else or had someone else live with them)? How would they have answered the questions about that situation?

Septic Systems (New—after H7)

  • Do people know the difference between a septic system or cesspool and a connection to a public sewer?

  • Do people have more than one type of disposal system? If so, how do they decide to answer?

  • What type of system do those who choose “Other type of sewage disposal” (V1) or “No, use other type of system” (V2) have?

Home Heating Fuel (H13)

  • Do respondents understand what “natural gas” (response category 1) is as opposed to “gas” (response category 2)?

  • Do the words “butane” or “propane” help respondents find the correct heating fuel category?

  • (V1) What is the impact of removing “bottled or tank” from the second response option?

  • Are respondents only reporting the fuel they use the “MOST”?



Solar Power (New—after H13)

  • How do people understand the terms, “solar panel” (V1) and “solar power” (V2)? Do they view them as meaning the same thing or something different? How do they decide if they have solar power/panels?

  • Do people understand the term “photovoltaic”?

SNAP (H15)

  • Which placement of the SNAP question was better for respondents (the current location in Version 1, or the end of the housing section in Version 2)?

  • Is one better than the other at getting respondents to understand the reference period and how it differs from other questions?

  • Does the placement of the question at the end of the housing section (version 2) result in it getting overlooked?

Group 2


Educational Attainment* (P11)

  • Do respondents understand this question and the overall meaning?

  • Use of the term “grade” (V1) rather than “level” (V2) of school in the base question—Is one easier to understand?

  • How do people currently enrolled in school answer this question? Does the instruction help them answer?

  • Use of “less than grade 1” (V1) versus expanded three categories “less than 1 year of school completed,” “nursery school or preschool,” and “kindergarten” (V2)—Does one version better help people understand who should select these response options? Specifically, does a broader category (V1) help people respond? How do those with no schooling, schooling in levels below first grade, and others respond? This difference in wording between versions also results in differences in the main headings that the specific categories fall under. Do the heading differences affect respondents’ understanding of the categories?

  • Unlike the current version, both test versions remove the heading and subheading “No schooling completed.” Do those with no schooling know how to respond?

  • Do individuals with homeschooled children understand the question and response categories and know where their child should be classified?


Health Insurance Coverage (P16)

  • How are respondents reporting Medicaid and direct-purchase plans?

  • Are the additional instructions (“Do NOT include plans that cover only one type of insurance, such as dental, drug, or vision plans”) clear to respondents and are respondents including single-service health insurance plans in their responses?

  • How do people with Medicare Advantage plans classify their health insurance coverage?

  • How do people with Marketplace coverage classify their coverage?

  • How do respondents with household members who have coverage through the state Children’s Health Insurance Program (CHIP) classify the child’s coverage?

  • For respondents who check more than one option, are they double reporting single coverage, or do they have more than one type?

  • What types of insurance or health plans do respondents associate with “current or former employer, union, or professional association”?

Disability (P18, P19, P20)

  • Modified response categories in both versions to be more detailed than current “Yes/No” response categories: How do respondents distinguish between categories when selecting a category?

  • Introductory text in Computer Assisted Instrument (CAI) before Q18: Does having introductory text, “The next questions ask about difficulties [NAME] may have doing certain activities.” help transition from the previous set of questions to the disability section?

  • Q19a: Do respondents think there is a difference between “stairs” (V1) and “steps” (V2)? Does one term help respondents understand the question better than the other?

  • Q19b: Do adults think this question only applies to people with dementia? Do respondents focus on the term “remembering,” on the term “concentrating,” or both? How do people with a learning disability (or people answering for someone who has one) answer this question? Do respondents associate particular health conditions with difficulty remembering or concentrating?

  • Q19c: Do respondents understand the text “washing all over” (V2) as being a broader (i.e., more inclusive) description of bathing? Do respondents find “washing all over” to be odd or too personal? Do respondents find “bathing” (V1) to be odd or too personal?

  • Q19d: What types of disabilities/conditions are being identified for children when the response indicates difficulty with communication? Does the question make sense for younger children who are not old enough to talk? Do respondents understand that this question is not referring to mastering English as a second language (particularly those who do not speak English very well)?

  • Q20: In Version 2, the question preamble, “Because of a physical, mental, or emotional condition…” is included in the beginning of the question. Does this preamble result in a better understanding that this question is asking about disability limitations regarding doing errands? What types of problems/issues do respondents mention when discussing “difficulty doing errands alone such as visiting a doctor’s office or shopping”?



  • Do respondents age 50+ interpret the difficulties in this series of questions as part of the normal aging process? If so, do respondents age 50+ underreport?

Group 3


Electric Vehicles (New—after H12)

  • Are respondents reporting hybrid vehicles that do not require connecting to an electrical source for charging?

  • Are respondents going to consider every household member that might own an eligible vehicle?

  • Are respondents reporting vehicles that they are currently leasing?

  • (V1) What other type of electric vehicles do people have when they respond “Yes” to 14b?

Condominium Fee (H16)

  • Which version of the question works best for homeowners versus people in condominiums?

  • Are participants more easily able to report the monthly or yearly amount?

Commuting Mode* (P32)

  • For all respondents, is the meaning of the “taxi or ride-hailing services” category clear? If not, what descriptive words would have made the meaning clearer? Are there descriptive words missing?

  • Among those who chose ride-hailing as their primary means of transportation to work, what is their second most common mode of work travel?

  • Among those who did not choose ride-hailing as their primary means of transportation to work, do they ever use ride-hailing services to travel to or from work? How often? In what context?

  • Do respondents have any travel modes they use to get to work that are not represented in the ACS travel mode question? What are they?

Income and Weeks Worked (P43, P44)

  • Do the respondents report income for the appropriate reference period (prior year)?

  • Are the respondents reporting income accurately, especially keeping in mind the following changes being made to question or instructional wording:

    • Total Income Amount—Adding “include all sources”

  • Does the respondent report “all sources” or do they leave out some?

    • Self-Employment Income—Adding “including work paid for in cash”

  • Does the respondent report all self-employment income (including side jobs that they may not report as income for tax purposes)?

    • Net Rental Income—splitting up category as its own question (paper)

  • Does splitting up the categories make it easier for the respondent to recall the amounts and report accurately?

  • Does having Net Rental Income as its own category (on Paper) make respondents who are reading quickly misreport their monthly rent to a landlord (instead of rental income)?

  • If a respondent indicates that they did not make any rental income, we want to learn more.

  • How would respondents who did make rental income answer if they had lost money or broke even on a rental property?



  • Public Assistance Income—new wording and instructions

  • Does the new wording (and additional instructions) help the respondent to report the amounts and type of income that we intend to be reported with this question, or do they include or exclude certain types?

  • Do the changes to the Weeks Worked series (in addition to the year change) obtain the appropriate information for that year?

  • Does question 39 set up the universe for the Weeks Worked questions properly? Or, conversely, do respondents get confused by the additional question (39b) and skip patterns to 39?

  • Adding “for at least one day” to 41b is supposed to let the respondent know that we consider a week being worked even if they just worked one day in that week. Does the respondent seem to understand this concept?

* This question uses a show card for personal visit interviews. Therefore, there are two variations of the question per version in the interviewer-administered mode that need to be tested.


    1. Test Schedule (updated May 18, 2020)

Figure 2-2. Project Schedule

Line

Task Name

Duration

Start

Finish

1

Contract award

1 day

1/30/2020

1/30/2020

2

Kick-Off

27 days

1/31/2020

3/9/2020

3

Draft kick-off agenda and materials

10 days

1/31/2020

2/13/2020

4

Final kick-off agenda and materials

3 days

2/14/2020

2/18/2020

5

Kick-Off Meeting

1 day

2/19/2020

2/19/2020

6

Draft Kick-off Meeting minutes

5 days

2/20/2020

2/26/2020

7

Census feedback on Kick-off Meeting minutes

5 days

2/27/2020

3/4/2020

8

Revised Kick-off Meeting minutes

3 days

3/5/2020

3/9/2020

9

Recruitment and Study Plans

38 days

2/20/2020

4/13/2020

10

Census feedback on draft Recruitment & Study Plans

7 days

2/20/2020

2/28/2020

11

Revised Recruitment & Study Plans

6 days

3/2/2020

3/9/2020

12

Census feedback on revised Recruitment & Study Plans

10 days

3/10/2020

3/23/2020

13

Second Revised Recruitment & Study Plans

5 days

3/24/2020

3/30/2020

14

Census feedback on second revised Recruitment &

Study Plans

5 days

3/31/2020

4/6/2020

15

Final Recruitment & Study Plans

5 days

4/7/2020

4/13/2020

16

Project Plan

299 days

2/20/2020

4/13/2021

17

Draft Project Plan

5 days

2/20/2020

2/26/2020

18

Census feedback on Draft Project Plan

5 days

2/27/2020

3/4/2020

19

Revised Project Plan

5 days

3/5/2020

3/11/2020

20

Census Bureau feedback on the revised Project Plan

4 days

3/12/2020

3/17/2020

21

Second revised Project Plan to address Census Bureau

feedback

4 days

3/18/2020

3/23/2020

22

Census Bureau feedback on the second revised Project

Plan

5 days

3/24/2020

3/30/2020

23

Revised Project Plan to address changes to Recruitment

& Study Plans

5 days

3/31/2020

4/6/2020

24

Census Bureau approval of revised Project Plan

5 days

4/7/2020

4/13/2020

25

Periodic updates to Project Plan to reflect changes

261 days

4/14/2020

4/13/2021

26

Biweekly Status Meetings

486 days

3/16/2020

1/24/2022

27

Monthly Status Reports (5th working day of each month)

509 days

2/28/2020

2/9/2022

28

February 2020

7 days

2/28/2020

3/9/2020

29

March 2020

5 days

4/1/2020

4/7/2020

30

April 2020

5 days

5/1/2020

5/7/2020

31

May 2020

5 days

6/1/2020

6/5/2020

32

June 2020

5 days

6/30/2020

7/6/2020

33

July 2020

5 days

8/3/2020

8/7/2020

34

August 2020

5 days

9/2/2020

9/8/2020

35

September 2020

5 days

10/1/2020

10/7/2020

36

October 2020

5 days

11/2/2020

11/6/2020

37

November 2020

5 days

12/1/2020

12/7/2020

38

December 2020

5 days

1/4/2021

1/8/2021

39

January 2021

5 days

2/3/2020

2/7/2020

40

February 2021

5 days

3/2/2020

3/6/2020

41

March2021

5 days

4/1/2020

4/7/2020

Figure 2-2. Project Schedule (continued)

Line

Task Name

Duration

Start

Finish

42

April 2021

5 days

5/1/2020

5/7/2020

43

May 2021

5 days

6/1/2020

6/5/2020

44

June 2021

5 days

7/1/2020

7/7/2020

45

July 2021

5 days

8/3/2020

8/7/2020

46

August 2021

5 days

9/2/2020

9/8/2020

47

September 2021

5 days

10/1/2020

10/7/2020

48

October 2021

5 days

11/2/2020

11/6/2020

49

November 2021

5 days

12/1/2020

12/7/2020

50

December 2021

5 days

1/4/2021

1/8/2021

51

January 2022

5 days

2/1/2021

2/5/2021

52

Round 1

149 days

3/12/2020

10/6/2020

53

Protocols, IRB Approval, & OMB Clearance

48 days

3/12/2020

5/18/2020

54

Draft screening & interview protocols and materials

18 days

3/12/2020

4/6/2020

55

Census feedback on draft protocols and materials

10 days

4/7/2020

4/20/2020

56

Revised introductory script, protocol with probes, and

email/text appointment messages

5 days

4/21/2020

4/27/2020

57

Revised screening instrument and recruitment

advertisements

8 days

4/21/2020

4/30/2020

58

Census review of revised introductory script, protocol

with probes, and email/text appt. messages

5 days

4/28/2020

5/4/2020

59

Further revisions to introductory script, protocol with

probes, and email/text appt. messages

2 days

5/5/2020

5/6/2020

60

Census review of revised screening instrument and

recruitment advertisements

5 days

5/1/2020

5/7/2020

61

Further revisions to screening instrument and

recruitment advertisements

2 days

5/8/2020

5/11/2020

62

Census review of revised introductory script, protocol

with probes, screening instrument and recruitment

advertisements

4 days

5/12/2020

5/15/2020

63

Final revisions to introductory script, protocol with

probes, screening instrument and recruitment

advertisements

2 days

5/18/2020

5/19/2020

64

Census approval of final protocols and materials

2 days

5/20/2020

5/21/2020

65

Spanish Translation of Protocols and Materials

25 days

5/22/2020

6/25/2020

66

Draft translations of materials

10 days

5/22/2020

6/4/2020

67

Census feedback on translations

10 days

6/5/2020

6/18/2020

68

Final translations of materials

5 days

6/19/2020

6/25/2020

69

Research on Locations for Rare Characteristics

27 days

4/10/2020

5/18/2020

70

Develop research plan for best locations

7 days

4/10/2020

4/20/2020

71

Census review of draft locations research plan

2 days

4/21/2020

4/22/2020

72

Revise locations research plan

3 days

4/23/2020

4/27/2020

73

Census approval of locations research plan

4 days

4/28/2020

5/1/2020

74

Conduct and submit research on best locations

6 days

5/4/2020

5/11/2020

75

Census review and approval of location research

6 days

5/12/2020

5/19/2020

76

Summary Template

11 days

5/7/2020

5/21/2020

77

Develop summary template

4 days

5/7/2020

5/12/2020

78

Census review and approval of template

10 days

5/13/2020

5/26/2020

79

Final summary template

3 days

5/27/2020

5/29/2020

Figure 2-2. Project Schedule (continued)

Line | Task Name | Duration | Start | Finish
80 | RTI IRB approval | 10 days | 5/22/2020 | 6/4/2020
81 | Estimated OMB clearance | 27 days | 5/22/2020 | 6/29/2020
82 | Recruiter and Interviewer trainings | 1 day | 6/23/2020 | 6/23/2020
83 | Recruit Participants and Conduct Interviews | 11 wks | 6/30/2020 | 9/14/2020
84 | Completed case summaries | 7 days | 9/15/2020 | 9/23/2020
85 | Recruitment methods summary report | 10 days | 9/15/2020 | 9/28/2020
86 | Round 1 briefing report | 10 days | 9/15/2020 | 9/28/2020
87 | Attend and present R1 Results | 1 day | 10/6/2020 | 10/6/2020
88 | Submit Round 1 consent, recordings & vouchers | 5 days | 9/29/2020 | 10/5/2020
89 | Round 2 | 171 days | 10/7/2020 | 6/2/2021
90 | Receive revised question wording and updated recruitment requirements | 15 days | 10/7/2020 | 10/27/2020
91 | Protocols | 35 days | 10/28/2020 | 12/15/2020
92 | Draft screening & interview protocols and materials | 5 days | 10/28/2020 | 11/3/2020
93 | Census feedback on draft protocols and materials | 10 days | 11/4/2020 | 11/17/2020
94 | Revised screening & interviewing protocols and materials | 3 days | 11/18/2020 | 11/20/2020
95 | Census review of revised protocols and materials | 7 days | 11/23/2020 | 12/1/2020
96 | Final revisions to protocols and materials | 5 days | 12/2/2020 | 12/8/2020
97 | Census approval of final protocols and materials | 5 days | 12/9/2020 | 12/15/2020
98 | Spanish Translation of Protocols | 25 days | 12/16/2020 | 1/19/2021
99 | Draft translations of materials | 10 days | 12/16/2020 | 12/29/2020
100 | Census feedback on translations | 10 days | 12/30/2020 | 1/12/2021
101 | Final translation of materials | 5 days | 1/13/2021 | 1/19/2021
102 | Recruiter and interviewer refresher trainings | 1 day | 1/15/2021 | 1/15/2021
103 | Recruit and Conduct Interviews | 11 wks | 1/20/2021 | 4/6/2021
104 | Completed case summaries | 7 days | 4/7/2021 | 4/15/2021
105 | Recruitment methods summary report | 10 days | 4/7/2021 | 4/20/2021
106 | Briefing report | 10 days | 4/7/2021 | 4/20/2021
107 | Attend and Present Results | 1 day | 4/28/2021 | 4/28/2021
108 | Consolidated R1 and R2 Recommendations Report | 25 days | 4/29/2021 | 6/2/2021
109 | Draft consolidated report | 10 days | 4/29/2021 | 5/12/2021
110 | Census feedback on draft report | 10 days | 5/13/2021 | 5/26/2021
111 | Final consolidated report | 5 days | 5/27/2021 | 6/2/2021
112 | Submit Round 2 consent, recordings, vouchers | 5 days | 5/27/2021 | 6/2/2021
113 | Round 3 | 182 days | 4/29/2021 | 1/7/2022
114 | Receive revised question wording and updated recruitment requirements | 15 days | 4/29/2021 | 5/19/2021
115 | Protocol and OMB Clearance for PRCS and GQ | 38 days | 5/20/2021 | 7/12/2021
116 | Draft screening & interview protocols and materials | 10 days | 5/20/2021 | 6/2/2021
117 | Census feedback on draft protocols and materials | 10 days | 6/3/2021 | 6/16/2021
118 | Revised screening & interviewing protocols and materials | 5 days | 6/17/2021 | 6/23/2021
119 | Census review of revised protocols and materials | 5 days | 6/24/2021 | 6/30/2021
120 | Final revisions to protocols and materials and Census approval | 8 days | 7/1/2021 | 7/12/2021



Line | Task Name | Duration | Start | Finish
121 | Translation of PRCS Protocols and OMB Materials | 26 days | 7/13/2021 | 8/17/2021
122 | Draft translations of materials | 11 days | 7/13/2021 | 7/27/2021
123 | Census feedback on translations | 10 days | 7/28/2021 | 8/10/2021
124 | Final translation of materials | 5 days | 8/11/2021 | 8/17/2021
125 | Estimated OMB clearance | 6 wks | 7/13/2021 | 8/23/2021
126 | Recruiter and interviewer refresher training PRCS | 1 day | 8/16/2021 | 8/16/2021
127 | Recruiter and interviewer refresher training GQ | 1 day | 8/17/2021 | 8/17/2021
128 | Recruit and Conduct PRCS Interviews | 10 wks | 8/24/2021 | 11/1/2021
129 | Recruit and Conduct GQ Interviews | 10 wks | 8/24/2021 | 11/1/2021
130 | Completed case summaries | 10 days | 11/2/2021 | 11/15/2021
131 | Recruitment methods summary report | 10 days | 11/2/2021 | 11/15/2021
132 | Briefing report | 10 days | 11/2/2021 | 11/15/2021
133 | Attend virtually and present results | 1 day | 11/23/2021 | 11/23/2021
134 | Final Round 3 Recommendations Report | 20 days | 11/24/2021 | 12/21/2021
135 | Draft Round 3 recommendations report | 5 days | 11/24/2021 | 11/30/2021
136 | Census feedback on draft Round 3 report | 10 days | 12/1/2021 | 12/14/2021
137 | Final Round 3 report | 5 days | 12/15/2021 | 12/21/2021
138 | Submit Round 3 consent, recordings & vouchers | 10 days | 12/1/2021 | 12/14/2021
139 | Finalize all question wording | 10 days | 12/22/2021 | 1/4/2022
140 | Close out meeting via teleconference | 1 day | 1/7/2022 | 1/7/2022
141 | Project close out | 1 day | 1/10/2022 | 1/10/2022



    1. Target User Population

For Rounds One and Two, the target population is non-institutionalized English-speaking adults and monolingual Spanish-speaking adults (18+) who live stateside.

For Round Three GQs, the target population is English-speaking adults (18+) who live in institutionalized and non-institutionalized GQ facilities (e.g., prisons/jails, nursing homes, college dorms, military bases, homeless shelters) in the United States.

For Round Three PRCS, the target population is adults (18+) currently residing in Puerto Rico.

    1. Participant Inclusion Criteria or Characteristics and Sample Size

      1. English

For Round One, the RTI/RSS team will interview at least six participants from each of the distinct sub-characteristics shown in Table 2‑8. In Round One, approximately half of the participants will be assigned Version 1 of the questionnaire and approximately half will receive Version 2. In Round Two, only one version of the questionnaire will be tested (in both self-administered and interviewer-administered modes), so comparisons by question version are not needed; the RTI/RSS team will therefore conduct interviews with four participants per sub-characteristic, as shown in Table 2-8. In both rounds, participants will be assigned to the interview mode in which, based on their demographic data, they would be most likely to complete the ACS. The sample size for Round Two may be revised after Round One. We anticipate that some participants will meet more than one sub-characteristic within a group, so the number of unique participants will not equal the sum of the recruitment criteria.

Table 2‑8. Minimum Participants per Sub-characteristic by Round

Characteristic | Sub-characteristic | Round 1 | Round 2
Household Roster | Multiple families cohabitating; related subfamilies; unrelated subfamilies/individuals; subfamilies with children, esp. age 0 to 4; children in custody arrangements; foster children; no one related to each other; active duty military (lower priority); children who live away at college (lower priority) | 28 | 18
Septic Systems | Septic systems only | 8 | 4
Septic Systems | Public sewer | 4 | 4
Home Heating Fuel | Natural gas most | 8 | 4
Home Heating Fuel | Butane/propane most | 8 | 4
Home Heating Fuel | Other fuel most | 4 | 4
Solar Power | Solar panels | 8 | 4
Solar Power | Without solar panels | 4 | 4
SNAP | Prior year SNAP benefits | 8 | 4
SNAP | No prior year SNAP benefits | 4 | 4
Estimated Group 1 Unique Participants |  | 84 | 54
Educational attainment | Parents of homeschooled children | 4 | 4
Educational attainment | Age 25+ with less than high school diploma or GED | 8 | 4
Educational attainment | Parents of children age 3 to 5 | 8 | 4
Health Insurance Coverage | Age 65+ | 8 | 4
Health Insurance Coverage | Enrolled in Medicaid | 8 | 4
Health Insurance Coverage | Enrolled in CHIP | 8 | 4
Health Insurance Coverage | Enrolled in state or federal marketplace | 8 | 4
Disability | Parents of children with disabilities | 8 | 4
Disability | Individuals with disabilities, esp. learning disabilities | 8 | 4
Disability | Non-native English speakers | 8 | 4
Disability | Age 50+ | 8 | 4
Estimated Group 2 Unique Participants |  | 84 | 44
Electric Vehicles | Purchased a vehicle in the last 10 years | 4 | 4
Electric Vehicles | Own or lease a plug-in electric vehicle | 8 | 4
Electric Vehicles | Own or lease a non-plug-in electric or hybrid vehicle | 8 | 4
Condominium Fee | Homeowners with HOA fee | 0 | 4
Condominium Fee | Homeowners that live in a condominium | 0 | 4
Condominium Fee | Homeowners that are part of voluntary neighborhood associations | 0 | 4
Condominium Fee | Renters with HOA or condo fee | 0 | 4
Commuting Mode | Use ride-share (e.g., Lyft/Uber) to get to work | 8 | 4
Commuting Mode | Alternate multi-passenger transportation (e.g., carpool, vanpool, slug line) | 8 | 4
Income | Irregular workers in the prior year | 8 | 4
Income | Regular workers in the prior year | 8 | 4
Income | Did not work in the prior year | 8 | 4
Income | Retirement, self-employment, or rental income; commission/bonus/tips in prior year | 8 | 4
Income | Received SNAP or public assistance benefits in prior year | 8 | 4
Estimated Group 3 Unique Participants |  | 76 | 56
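Because one participant can satisfy several sub-characteristics, the group totals in Table 2‑8 are smaller than the column sums of the individual criteria. A minimal sketch of that quota bookkeeping, with entirely hypothetical participant data (none of the IDs or counts below come from the study materials):

```python
from collections import defaultdict

def tally_quotas(participants):
    """Count quota credit per sub-characteristic and the unique-participant total.

    `participants` maps a participant ID to the set of sub-characteristics
    that participant meets; one person can satisfy several criteria at once.
    """
    per_criterion = defaultdict(int)
    for traits in participants.values():
        for trait in traits:
            per_criterion[trait] += 1
    # Unique participants: each person is counted once, regardless of overlap.
    return dict(per_criterion), len(participants)

# Hypothetical example: P2 meets two criteria, so the criterion counts sum
# to 4 while only 3 unique participants were recruited.
counts, unique = tally_quotas({
    "P1": {"septic system only"},
    "P2": {"foster children", "related subfamilies"},
    "P3": {"solar panels"},
})
```

This is why the "Estimated Group Unique Participants" rows are estimates rather than arithmetic sums of the rows above them.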



Round Three cognitive testing will be conducted within the context of GQs. As such, individual sub-characteristics are less of a concern, and recruiting a variety of institutional (nursing homes, jails) and non-institutional (college and university dormitories, homeless shelters, group homes) GQs is needed to identify potential problems respondents may have understanding and completing the questions. The RTI/RSS team will conduct 32 cognitive interviews per group for Round Three as shown in Table 2‑9. This allows for eight participants in each mode by GQ type. RTI will work with the Census Bureau to determine the most critical sub-characteristics to include in each group based on prior rounds of testing.

Table 2‑9. Minimum Round Three English Interviews by Mode and GQ Type

Mode | Institutional GQ | Non-institutional GQ | Total
Self-administered | 8 | 8 | 16
Interviewer-administered | 8 | 8 | 16
Total | 16 | 16 | 32



      1. Spanish

For Round One, RTI will cognitively test each group of questions with 20 monolingual Spanish speakers who are age 18 or older. This will allow each version of the questionnaire to be tested with 10 Spanish speakers in each group. Table 2‑10 shows the minimum number of participants in each group by version and Hispanic origin. No specific targets will be set for any of the other sub-characteristics; however, RTI will strive to interview as diverse a set of participants as possible with respect to those sub-characteristics.

Table 2‑10. Round One Minimum Participants in Each Group by Version and Hispanic Origin

Hispanic Origin | Group 1 V1 | Group 1 V2 | Group 2 V1 | Group 2 V2 | Group 3 V1 | Group 3 V2 | Total
Mexican | 3 | 3 | 3 | 3 | 3 | 3 | 18
Central American | 3 | 3 | 3 | 3 | 3 | 3 | 18
South American | 2 | 2 | 2 | 2 | 2 | 2 | 12
Puerto Rican, Cuban, or Dominican | 2 | 2 | 2 | 2 | 2 | 2 | 12
Total per Group | 10 | 10 | 10 | 10 | 10 | 10 | 60



For Round Two, RTI will also conduct 20 interviews with Spanish speakers in each of the three question groups. The primary difference is that two questionnaire modes (interviewer-administered versus self-administered) will be tested instead of two versions of the questions.

For Round Three, which will be conducted in Puerto Rico, the RTI/RSS team will conduct 20 interviews in each of the three question groups (60 interviews total). No targets will be set by Hispanic origin because all participants will be recruited in Puerto Rico, but RTI will work with the Census Bureau to determine appropriate sub-characteristic targets for each group based on the results of Rounds One and Two.

    1. Participant Recruitment Methods

To ensure efficient and successful recruitment of the targeted populations, the RTI/RSS team will use a combination of online and community-based recruitment methods that we have found effective over the past two decades. Specifically, we will use online advertising and targeted community-based advertising and interception as the main recruitment strategies to solicit participation based on the participant characteristics detailed in the Call Order.

Online Advertisements. RTI will place advertisements on www.craigslist.com, which has proven effective for recruiting participants for in-person interviews. We will also identify popular local and national online destinations frequented by affinity groups pertaining to the targeted participant characteristics. These online destinations may include listservs, forums, blogs, Facebook groups, and Reddit. For example, we will post targeted advertisements to relevant Facebook groups to recruit parents of homeschooled children, retired respondents, or individuals with learning disabilities. Online advertising will primarily be used to recruit English-speaking interview participants. The RTI/RSS team will also seek to identify Spanish-oriented online destinations, such as Latinx immigrant support groups on Facebook, and use online advertising as a supplemental strategy to recruit monolingual Spanish speakers.

Community-Based Strategies. The RTI/RSS team will also conduct community-based recruiting to find participants who meet any of the criteria, including those not reached via online methods. The RTI/RSS team will advertise in community publications and post flyers at a variety of public locations, including community centers, libraries, ethnic grocery stores, local offices of social services, college dormitories, and local churches. The RTI/RSS team will also work directly with staff at community-based organizations, such as adult literacy groups or churches that primarily serve Spanish speakers. In-person recruiting will be conducted as needed at targeted research sites. Community-based recruitment strategies are designed primarily to recruit monolingual Spanish speakers and hard-to-reach English speakers.

In Table 2‑11, the RTI/RSS team identifies recruitment criteria that may be particularly challenging and suggests tailored recruitment strategies to maximize reach to participants with rare or operationally hard-to-reach characteristics.

Table 2‑11. Specific Recruitment Strategies for Hard-to-Reach Participants

Topic

Tailored Recruitment Strategy

Households with multiple families

  • Post advertisements and in-person recruitment at apartment complexes known for accommodating multiple families.

  • Post flyers or intercept at local department of social services, WIC offices, food banks, and/or stores that accept SNAP.

Households with foster children

  • Post advertisement to online forums and social media sites frequented by affinity groups of foster parents.

  • Solicit assistance from local affinity groups and advocacy organizations of foster children.

Individuals in housing units with septic systems only

  • Solicit recruitment assistance from local septic tank companies (e.g., share study information with customers, learn which communities use septic systems and could be targeted with flyers/ads).

  • Work with local health departments to identify homes with septic systems.

Individuals in households in which someone received SNAP benefits in the prior year

  • Post flyers or intercept at local department of social services or WIC offices.





Parents/legal guardians of homeschooled children who have not attended regular school

  • Post advertisement to online forums and social media sites frequented by affinity groups of homeschooling parents.

  • Solicit assistance from online programs commonly used by homeschoolers.

  • Solicit assistance from local affinity groups and advocacy organizations of homeschooling parents.

Individuals enrolled in Medicaid; individuals with household members enrolled in CHIP

  • Post flyers or intercept at local department of social services or WIC offices.

Parents/legal guardians of children with disabilities; individuals with disabilities, especially learning disabilities

  • Post advertisements to online forums and social media sites frequented by affinity groups of individuals with learning disabilities and parents of children with disabilities.

  • Solicit assistance from local affinity groups and advocacy organizations of individuals with learning disabilities and parents of children with disabilities.

Individuals who earned retirement income, self-employment income, net rental income, or commission/bonus/tips in the prior year

  • Post advertisement to online forums and social media sites frequented by affinity groups of retirees.

  • Solicit assistance from local affinity groups and advocacy organizations of retirees.



    1. Study Design in Statistical Terms, If Applicable

Not applicable.

    1. Performance Measures

The model that RTI uses for cognitive interviewing addresses four primary features: question comprehension, retrieval of relevant information from memory, decision processes, and response processes.6 Cognitive interviewing is used to explore a person’s decision-making processes in each of these areas.7 The following points summarize how cognitive interviewing can be used to explore these four aspects of responding to questions:

  • Question comprehension studies both question intent (What does the respondent believe the question to be asking?) and the specific meanings of terms in the question.

  • Retrieval from memory examines the ability to recall information and the strategies used to retrieve that information (i.e., estimation strategies or counting of individual events).

  • Decision processes examine the respondent’s motivation to thoughtfully provide an accurate response, as well as issues related to desirable responding (or social desirability—the desire to respond in such a way as to make oneself look better, either through intentional deception or unconscious self-deception).

  • Response processes evaluate the ability of the participant to match their estimation (e.g., perception, behavior) with the response options available.

    1. Data Collection Methods

The cognitive interviews will be conducted in person using section-by-section probing; the probes will be mainly retrospective probes and administered at the end of each section.

Audio and Video Recording. Audio recordings are critical for ensuring that participants' think-aloud comments and responses to verbal probing are captured. Although interviewers will be encouraged to take electronic notes in real time, it can be difficult to fully capture everything that is said. Interviewers often rely on audio recordings to fill in any gaps that were missed in the notes. Cognitive interviewers will audio record all interviews where participants agree. If the participant does not agree, the interviewer will take detailed notes during the cognitive interview. Audio recordings will be delivered to the Census Bureau after each round of interviews.

Informed Consent. Cognitive interviewers will obtain written informed consent from all participants using the consent forms provided by the Census Bureau (pending approval from RTI’s Institutional Review Board). All signed consent forms will be submitted to the Census Bureau at the end of each round of interviews.

Payment. Participants will receive $40 in cash. The RTI/RSS team will obtain a payment voucher or payment record at the time of payment to participants. Payment vouchers signed by participants will be translated to Spanish.

Case Summaries. The RTI/RSS team will prepare a case summary template for Census Bureau approval at least 10 days before cognitive testing begins. The template will capture responses to all ACS questions asked, even those not probed on, because this information often provides needed context when analyzing the data. The template will include space for responses to scripted probes and separate space for additional qualitative data such as spontaneous probes and their responses, think-aloud comments, non-verbal cues, and other interviewer observations. For Spanish interviews, summaries will be written in English, but key concepts or thoughts will be provided in Spanish with English back translations to aid understanding.

Language Research. In preparing the Spanish materials, an important concern is to produce a translated version that works equally well for people speaking different national varieties or dialects of Spanish. We will follow a two-pronged approach. We will translate protocols using a committee approach, which complies with the recommendations provided in the Census Bureau Guidelines for Survey Translation.8 Following this approach, the translations produced by the team will be suitable for the wider U.S. Hispanic population. For respondent materials other than protocols, we will have one translator perform a direct or solo translation, followed by independent review and editing. We may propose special tailoring of the Spanish materials so that they are most suitable for immigrants with low education, who make up many of the monolingual Spanish speakers in the United States.

Interviewer Training. Cognitive interviewing training is necessary to obtain high-quality, consistent data across interviews that fully address the research goals. All cognitive interviewers will receive a 6-hour project-specific virtual training before Round One that will cover (1) study background and research objectives; (2) review of the ACS questions and what they are intended to measure; (3) review of the scripted probes and the research questions they are intended to address; (4) informed consent, confidentiality, and Title 13 requirements; (5) logistics; and (6) writing effective case summaries. We will provide copies of all interviewer training materials to the Census Bureau. Bilingual interviewers will receive an additional 2-hour training that focuses on unique issues related to conducting interviews in Spanish. Before Rounds Two and Three, all interviewers will receive a 4-hour refresher training.

Following the trainings each round, interviewers will be required to complete a paired mock interview. Paired mocks will be observed by the Project Director, Senior Survey Methodologist, Lead Researcher, or Senior Language Methodologist (bilingual interviewers only) to verify that interviewers are following all study-specific procedures before they begin interviewing.

Development of Interview Protocols. Before developing the cognitive interview protocols, the RTI/RSS team will work with the Census Bureau to ensure a full understanding of the intent of all questions being tested (for example, who should be included in the household roster and who should be excluded) and the particular goals or concerns to be addressed through cognitive testing. For example, a goal for the household roster may be to assess whether respondents incorrectly exclude household members not related to them. Understanding the testing goals is critical for developing probes that elicit the necessary information and level of detail.

RTI will develop protocols for the three groups of questions. Multiple versions of the protocol for a given question group may be needed to address differences in question wording by version or mode of administration. However, most probes will be similar across the different protocols within a question group to allow for comparisons across version and mode. The protocols will be revised after each round based on the findings from the prior round.

For Round One, RTI recommends showing participants the alternate versions of the questions during the respondent debriefing. Participants can be probed on whether the alternate question would have changed their answer and if so, why. This allows the Census Bureau to assess both versions of the questions with more participants.

The interview protocols will be developed in English and then translated into Spanish. The protocols will document the administration details, consent forms, and materials required for cognitive interviewing, including a list of scripted cognitive interview probes to focus on how respondents settled on a response and on their understanding of specific terms and phrases.

Remote interviewing. While the ideal method for this project is in-person interviews, we recognize we may need to rely on remote interviewing for some or all interviews due to restrictions related to the coronavirus pandemic. RTI will consistently monitor county-level COVID-19 trend data and local guidance throughout the data collection period to determine whether it is safe to send interviewers to conduct in-person interviews in a particular geography. If not, only remote interviews will be offered for that geography. Where interviewers are approved to do in-person interviews, respondents will still be given the option to complete the interview remotely.
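The mode-offering rule described above can be summarized as a simple decision function. The sketch below is illustrative only; the input names and the notion of a single pass/fail safety review are assumptions, not criteria stated in the plan:

```python
def allowed_modes(trend_rising, local_guidance_permits, interviewers_approved):
    """Decide which interview modes to offer in a given geography.

    The three boolean inputs are hypothetical stand-ins for the county-level
    COVID-19 trend review, local guidance, and RTI's approval decision.
    """
    if trend_rising or not local_guidance_permits or not interviewers_approved:
        # Not safe to send interviewers: offer remote interviews only.
        return ["remote"]
    # In-person approved, but respondents still get the remote option.
    return ["in-person", "remote"]
```

The key design point is that "remote" is always in the returned list: approval for in-person work adds an option rather than removing one.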


Remote interviews may be conducted audio-only via telephone, or audio/video via Skype for Business videoconference. Remote interviews will generally follow the same protocols as in-person interviews to the maximum extent possible. However, there will be some procedural adjustments necessary for remote interviews.

  • Consent: Respondents can sign the consent form in person for in-person interviews. For remote interviews, respondents can sign their own printout, which is then securely emailed or mailed back to RTI, or sign via a Qualtrics form that allows for signature.

  • Incentives: RTI will provide incentives in a way that allows appropriate monitoring and management of successful delivery, including cash in person or by mail.

    1. Data Analysis Strategy

The RTI/RSS team will use a systematic process to analyze the cognitive interview data and address the research questions. Survey methodologists with training and experience in questionnaire design will conduct the analysis. The process begins with the analysts reading all case summaries to familiarize themselves with the data. This step also helps prevent any one case, particularly a case the analyst conducted or observed, from becoming over-emphasized in the findings. The case summaries will be imported into an Excel spreadsheet, allowing the analysts to easily sort and filter the data to compare responses to questions and probes across all participants.9 The next step is to synthesize and reduce the data to meaningful categories and themes (e.g., misinterpreted term X, underreported household count, or overreported household count). Some of these themes will be identified in advance based on the testing goals; others will emerge during analysis. Findings are then further analyzed by subgroup to determine whether different types of people understand and answer the questions differently. Overall conclusions will then be drawn based on the individual findings. For questions that perform poorly, analysts will provide recommendations for improving the questions.
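The synthesize-and-reduce step above amounts to tallying coded findings by theme and subgroup while keeping each finding traceable to its case. A hedged sketch of that tabulation in Python (the column layout, theme labels, and case IDs are hypothetical stand-ins for the Excel spreadsheet described in the plan):

```python
from collections import Counter

# Hypothetical coded findings: (case_id, subgroup, theme) rows standing in
# for the spreadsheet of case-summary data.
rows = [
    ("P001_A", "Spanish", "misinterpreted term X"),
    ("P002_A", "English", "underreported household count"),
    ("P003_A", "English", "misinterpreted term X"),
]

# Tally each theme overall, then within each subgroup, mirroring the
# "reduce to themes, then compare by subgroup" analysis steps.
overall = Counter(theme for _, _, theme in rows)
by_subgroup = Counter((subgroup, theme) for _, subgroup, theme in rows)

# Transparency: keep case IDs attached to each theme so every finding can
# be traced back to the original interview.
cases_by_theme = {}
for case_id, _, theme in rows:
    cases_by_theme.setdefault(theme, []).append(case_id)
```

Keeping the case-ID list alongside each theme is what lets a written finding cite the specific interviews behind it.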

To ensure transparency—that study findings can be traced back to the original data collected—analysts will list the participants’ unique case ID when documenting findings. For example, “Three participants (10001DC_A, 110021NC_A, 130106_A) excluded non-relatives who had been living in their household for at least 2 months because they did not pay rent.”

    1. Documentation and Study Findings

Briefing Reports. Within 2 weeks of completing each round of pretesting, the RTI/RSS team will prepare a draft briefing report. Because of the short turnaround time, the RTI/RSS team will produce informal reports in table or bullet format. These reports will present the problems that have arisen and provide recommendations with supporting justifications. Comments or feedback on translation changes related to English source question issues will also be discussed, if any emerge.

Final Reports. After Rounds Two and Three, the RTI/RSS team will produce a draft report describing the cognitive testing findings, including direct quotes from participants when relevant. After the Census Bureau reviews and comments on the draft final reports, we will reflect the comments in the formal final reports, which will include the following sections: executive summary, introduction, methods, findings, discussion, and all relevant study materials (e.g., interview protocols). Following the Census Bureau's approval of the final reports, they will be edited for 508 compliance.

Disclosures. All data collected from screening and interview participants for this study are protected by Title 13. RTI and RSS project staff will follow the procedures for safeguarding data as described in Sections 2.A.7 and 2.B.3.

Oral Presentations. After each round, the RTI/RSS team will present the cognitive testing findings to Census Bureau staff and stakeholders. We recognize that different teams work on different questions and will organize the presentation so team members can easily attend the parts of the presentation most critical to their work. For Rounds One and Two, the presentations will be in person at the Census Bureau Headquarters. For Round Three, RTI will use videoconference technology (e.g., Zoom) to host the oral presentation.

1 Murphy, J. J., Mayclin, D. N., Richards, A. K., & Roe, D. J. (2015, December 1-3). A multi-method approach to survey pretesting. Paper presented at: 2015 Federal Committee on Statistical Methodology Research Conference, Washington, DC.

2 Head, B. F., Dean, E., Flanigan, T., Swicegood, J., & Keating, M. D. (2015). Advertising for cognitive interviews. Social Science Computer Review, 34(3), 360–377. https://doi.org/10.1177/0894439315578240

3 Sage, A. (2014). The Facebook platform and the future of social research. In C. Hill, E. Dean, & J. Murphy (Eds.), Social Media, Sociality, and Survey Research. Hoboken, NJ: Wiley.

4 Antoun, C., Zhang, C., Conrad, F. G., & Schober, M. F. (2015). Comparisons of online recruitment strategies for convenience samples. Field Methods, 28(3), 231–246. https://doi.org/10.1177/1525822X15603149

5 Blair, J., & Conrad, F. (2011). Sample size for cognitive interview pretesting. Public Opinion Quarterly, 75(4), 636–658. https://doi.org/10.1093/poq/nfr035

6 Tourangeau, R. (1984). Cognitive science and survey methods. Retrieved from https://www.nap.edu/catalog/930/cognitive-aspects-of-survey-methodology-building-a-bridge-between-disciplines

7 Willis, G., DeMaio, T., & Harris-Kojetin, B. (1999). Is the bandwagon headed to the methodological promised land? Evaluating the validity of cognitive interviewing techniques. In M. Sirken, D. Herrmann, S. Schechter, N. Schwarz, J. Tanur, & R. Tourangeau (Eds.), Cognition and survey research (pp. 133–153). New York: Wiley.

8 U.S. Census Bureau. (2004). Census Bureau guideline: Language translation of data collection instruments and supporting materials. Washington DC: U.S. Bureau of the Census.

9 For some questions, it is possible that a participant could report on the behavior of another household member because this information is relevant to the question. In these cases, RTI/RSS will not count these responses as participant answers in the final results. Where appropriate, these responses could be used in discussing how people understand the question, but these would be clearly noted as responses about others in the household.
