Business & Industry Webpage-Testing Plan


Generic Clearance for Questionnaire Pretesting Research

OMB: 0607-0725


The Census Bureau plans to conduct research under the generic clearance for questionnaire pretesting research (OMB number 0607-0725). We will conduct usability interviews to identify the better of two candidate versions of the Economic area’s Business and Industry page on the census.gov website.


We will be testing two different low-fidelity prototypes, or versions, of the Business and Industry second-level page. (The first-level page is the Census home page, www.census.gov, which we are not testing during this study.) It is possible that neither prototype will be considered the “best” version, but rather that sections of each prototype will work well. If so, a new prototype based on the results of the study would be developed and tested under a separate submission for OMB clearance.


Between August and November 2008, we will interview 20 external participants from the Washington, DC metropolitan region. For this round of testing, we will recruit participants who are novices with Census data but who have at least one year of Internet experience and use the Internet at least three times a week to search for information. Participants will be recruited from the Usability Lab database, which is composed of people from the metropolitan DC area who volunteered to participate after reading a Craigslist posting or an ad in a local daily newspaper. Participants will come to the Usability Lab at the Census Bureau for the study and will be compensated $40.00 for their time.


The study will be a “within-subjects design” in which all participants will see both prototypes. Participants will first complete an initial questionnaire about their Internet experience and demographic characteristics. Each participant will then receive one set of tasks for the first prototype they work with and another set of tasks for the second prototype. Both the assignment of task sets to prototypes and the order in which the prototypes are presented will be randomized.


Participants will be asked to think aloud while they are working on the tasks, and they will be prompted to do so whenever they fall silent. They will also provide feedback about the prototypes during a debriefing at the conclusion of the session. Finally, participants will be asked to complete a paper-and-pencil questionnaire designed to measure their satisfaction with the prototypes. Subjective satisfaction ratings will be collected for such design elements as the layout of the page, ease of finding information, and use of Census jargon. A copy of the initial questionnaire, the satisfaction questionnaire, and the task sets are all enclosed.


Because this is a within-subjects design, we need very similar tasks that test the same thing across prototypes. Thus our task set includes blocks of questions for each prototype (see the a and b versions of each task in the Tasks For Low-Fidelity Prototype Study enclosure). With two prototypes, we have two sets of questions that ask users to find roughly the same information (with nearly the same ideal path), but with different exact answers. For instance, one version asks about a reporter finding the number of bakeries in Michigan, while the other changes this to a student writing a report about the number of florists in Ohio. Crossing the two prototypes with the two question blocks yields four possible “conditions,” and each participant will be randomly assigned to one of those four conditions, as sketched below.
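As an illustration, the following Python sketch shows how this 2 x 2 random assignment could be carried out. The prototype and block labels, and the function name, are hypothetical and not part of the study materials.

    import random

    # The four conditions cross which prototype is seen first with which
    # question block accompanies each prototype (labels are hypothetical).
    CONDITIONS = [
        [("Prototype 1", "Block A"), ("Prototype 2", "Block B")],
        [("Prototype 1", "Block B"), ("Prototype 2", "Block A")],
        [("Prototype 2", "Block A"), ("Prototype 1", "Block B")],
        [("Prototype 2", "Block B"), ("Prototype 1", "Block A")],
    ]

    def assign_condition():
        """Randomly assign a participant to one of the four conditions."""
        first, second = random.choice(CONDITIONS)
        return first, second

    # Example: draw an assignment for one participant.
    first, second = assign_condition()
    print(f"First: {first[1]} tasks on {first[0]}; then {second[1]} on {second[0]}")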


A major strength of a within-subjects design such as this one (besides the fact that it effectively doubles the number of participants in the study) is that it reduces error variance compared with a between-subjects design. In the analogous between-subjects design (e.g., prototype 1 vs. prototype 2), individual differences between participants could affect the dependent variables across the two groups. A within-subjects design corrects for this potential problem because the same participants appear in both groups. A weakness of this design is the potential for carry-over effects (practice or fatigue) when a participant performs in both conditions. However, in this study we have designed the conditions so that which prototype and which block of questions is presented first alternates across participants. Thus, not every participant receives the same set of questions with the same prototype, and not every participant receives the same questions or prototype first. This crossing of initial prototype with initial question block is expected to reduce any carry-over effects that might result from the within-subjects design. The measures of respondent success are accuracy and task-completion time.
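To make the error-variance point concrete: within-subjects data can be analyzed with a paired comparison, which removes stable between-participant differences from the error term. The study plan does not specify an analysis method, so the following Python sketch, using made-up illustrative task-completion times, shows only the general idea.

    from scipy import stats

    # Hypothetical task-completion times (seconds) for the same five
    # participants on each prototype (within-subjects data).
    prototype1 = [62.0, 85.0, 47.0, 120.0, 73.0]
    prototype2 = [55.0, 78.0, 49.0, 101.0, 66.0]

    # Paired test: differences are taken within each participant, so
    # stable individual differences (fast vs. slow users) cancel out.
    t_paired, p_paired = stats.ttest_rel(prototype1, prototype2)

    # For contrast, an unpaired test treats the two columns as independent
    # groups and leaves between-participant variability in the error term.
    t_unpaired, p_unpaired = stats.ttest_ind(prototype1, prototype2)

    print(f"paired:   t = {t_paired:.2f}, p = {p_paired:.3f}")
    print(f"unpaired: t = {t_unpaired:.2f}, p = {p_unpaired:.3f}")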


We estimate that participants will spend 1 hour on average completing the study, including time spent on the demographic and satisfaction questionnaires, the tasks, and the debriefing. Thus, the total estimated respondent burden for this test is 20 participants x 1 hour = 20 hours.


The contact person for questions regarding data collection and statistical aspects of the design of this research is listed below:


Erica Olmsted-Hawala

Center for Survey Methods Research

U.S. Census Bureau

Washington, D.C. 20233

(301) 763-4893

[email protected]

