Volume I
2016 National Household Education Surveys Program (NHES)
Usability Testing
OMB# 1850-0803 v.141
August 2015
Justification
The National Household Education Surveys Program (NHES) is a data collection program of the National Center for Education Statistics (NCES) aimed at providing descriptive data on the educational activities of the U.S. population, with an emphasis on topics that are appropriate for household surveys rather than institutional surveys. These topics have covered a wide range of issues, including early childhood care and education, children's readiness for school, parents' perceptions of school safety and discipline, before- and after-school activities of school-age children, participation in adult and career education, parents' involvement in their children's education, school choice, homeschooling, and civic involvement. NHES uses a two-stage design in which sampled households complete a screener questionnaire to enumerate household members and their key characteristics. Within-household sampling from the screener data determines which household member receives which topical survey. NHES typically fields 2 to 3 topical surveys at a time, although the number has varied across administrations. Surveys are administered in English and in Spanish.
Beginning in 1991, NHES was administered roughly every other year as a landline random-digit-dial (RDD) survey. During a period of declining response rates across RDD surveys, NCES decided to conduct a series of field tests to determine whether a change to self-administered mailed questionnaires would improve response rates. After a 5-year hiatus in data collection for this developmental work, NCES conducted the first full-scale mail-out administration with NHES:2012, which included the Early Childhood Program Participation (ECPP) and the Parent and Family Involvement in Education (PFI) surveys. In 2016, NHES will field the PFI and ECPP surveys along with the first administration of the Adult Training and Education Survey (ATES). This will be a two-stage mail study. In the first stage, households will be screened to determine whether they contain eligible members. If eligible members are present, within-household sampling will be performed. Finally, topical surveys will be administered to the selected household members.
The PFI, previously conducted in 1996, 2003, 2007, and 2012, surveys families of children and youth enrolled in kindergarten through 12th grade or homeschooled for these grades, with an age limit of 20 years. It addresses specific ways that families are involved in their children's school; school practices to involve and support families; involvement with children's homework; and involvement in education activities outside of school. Parents of homeschoolers are asked about their reasons for choosing homeschooling and the resources they used in homeschooling. Information about child, parent, and household characteristics is also collected. To minimize response burden and potential respondent confusion, separate enrolled and homeschooled versions of the PFI questionnaire were created for self-administration. This submission includes both the PFI-Enrolled and PFI-Homeschooled instruments.
The ECPP, previously conducted in 1991, 1995, 2001, 2005, and 2012, surveys families of children ages 6 or younger who are not yet enrolled in kindergarten and provides estimates of children’s participation in care by relatives and non-relatives in private homes and in center-based daycare or preschool programs (including Head Start and Early Head Start). Additional topics addressed in ECPP interviews have included family learning activities; out-of-pocket expenses for nonparental care; continuity of care; factors related to parental selection of care; parents’ perceptions of care quality; child health and disability; and child, parent, and household characteristics.
The ATES surveys adults ages 16 to 65 who are out of high school and provides new measures of adults' educational and occupational credentials. It identifies adults who have educational certificates, including the subject field of the certificate, its perceived labor market value, and its role in preparing for occupational credentialing. It also counts adults who have an occupational certification or license, including the number of such credentials, the type of work they cover, their perceived labor market value, and the role of education in preparing for these occupational credentials. To provide a comprehensive picture of adult education and training, the survey also includes brief sections on adult participation in work experience programs (such as apprenticeships) and college classes.
NHES:2016 Web Experiment
NCES is planning an experiment as part of NHES:2016 to evaluate response rates for a subsample of respondents asked to complete the screener and topical instruments over the internet. The web instruments are being developed from the paper-and-pencil versions approved in August 2015 (OMB# 1850-0768 v.11). Unlike in the paper-and-pencil versions, however, skip patterns in the web instruments will be invisible to the respondent. The experiment will also test "on the fly" sampling between the screener and topical stages of data collection. The web interface will permit immediate sampling of a household member for a topical survey; if the screener respondent is the sampled adult or an adult knowledgeable about the sampled child, the web instrument will allow him or her to continue immediately to the topical survey. At any stage during the web experiment, respondents will be able to call the Census Bureau to request a paper-and-pencil version of the survey. In addition, beginning with the third follow-up mailing for both the screener and the topical surveys, sampled web respondents will automatically receive a paper-and-pencil version of the survey and will continue with paper-and-pencil follow-up thereafter.
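To make the routing concrete, the sketch below illustrates one way the "on the fly" sampling step could work. It is only a schematic: the function and field names are invented for this example, eligibility is simplified to the survey descriptions above, and the equal-probability draw stands in for whatever sampling probabilities the production instrument actually applies.

```python
import random

def sample_topical_case(household):
    """Pick one eligible member and the topical survey he or she is routed to."""
    eligible = []
    for member in household:
        age, status = member["age"], member["status"]
        if age <= 6 and status == "not yet in kindergarten":
            eligible.append((member, "ECPP"))
        elif age <= 20 and status in ("enrolled K-12", "homeschooled K-12"):
            eligible.append((member, "PFI"))
        elif 16 <= age <= 65 and status == "out of high school":
            eligible.append((member, "ATES"))
    if not eligible:
        return None  # household screens out as ineligible
    return random.choice(eligible)  # equal-probability draw, for illustration only

# Example: a household with one adult and one preschool-age child.
household = [
    {"name": "Adult 1", "age": 34, "status": "out of high school"},
    {"name": "Child 1", "age": 4, "status": "not yet in kindergarten"},
]
case = sample_topical_case(household)
if case is not None:
    member, survey = case
    print(f"Sampled {member['name']} for the {survey} topical survey")
```

Because sampling happens immediately, a respondent like Adult 1 above could be routed straight from the screener into his or her own topical survey without a second mailing.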
The experiment is designed to measure 1) overall web screener and web topical response rates; 2) demographic characteristics of web respondents versus paper-and-pencil respondents; 3) the number and type of respondents who try to answer the survey on a mobile device, such as a smartphone or tablet; 4) the number of breakoffs at the screener and topical stages; and 5) the number of screener respondents with a sampled child in their household who indicate that they are not knowledgeable about the care and education of the sampled child.
This request is to conduct usability interviews to refine the functionality of the survey platform for the 2016 web sample data collection. Usability testing has been used for the School Climate Survey, for the National Teacher and Principal Survey, and for other NCES surveys in past years.
The objective of the proposed testing is threefold. First, testing will identify design inconsistencies and usability problem areas within the application (e.g., navigation errors; presentation errors, such as failure to locate information on screens; and control-usage problems, such as improper button usage). Second, testing will exercise the application under controlled test conditions with representative users. Data such as timing calculations and item-missing data will be used to assess whether usability goals for an effective, efficient, and well-received user interface have been achieved. Finally, testing will establish baseline user-performance and user-satisfaction levels for future usability evaluations of the application.
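As a concrete illustration of the second objective, the sketch below shows one simple way a session's completion time and item-missing rate could be summarized. The function, field names, and example values are invented for this illustration and are not drawn from the NHES instruments.

```python
from datetime import datetime

def summarize_session(responses, total_items, started, finished):
    """Return completion time in minutes and the item-missing rate for one case."""
    minutes = (finished - started).total_seconds() / 60.0
    answered = sum(1 for value in responses.values() if value is not None)
    missing_rate = 1.0 - answered / total_items
    return minutes, missing_rate

# Example session: three items, one left unanswered, 18 minutes elapsed.
responses = {"q1": "yes", "q2": None, "q3": "2"}
minutes, missing = summarize_session(
    responses, total_items=3,
    started=datetime(2016, 1, 15, 10, 0),
    finished=datetime(2016, 1, 15, 10, 18),
)
print(f"{minutes:.0f} minutes elapsed, {missing:.0%} of items missing")
```

Summaries of this kind, compared against target thresholds, are one way the efficiency and accuracy goals described above could be checked across rounds.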
The interviews should result in an application that is easy for respondents to understand, and therefore less burdensome, while also yielding accurate information. The primary deliverable from this study will be the revised, final online application. A report highlighting key findings will also be prepared.
Design
Usability testing will explore the interaction that respondents have with the prototype of the NHES web survey application. This phase of testing will take place after the development team has created working interaction elements for users to test and the application has been reviewed. The usability testing will evaluate users' performance in completing the online NHES questionnaires in terms of efficiency, accuracy, and satisfaction. Efficiency will be measured by survey completion time and by interviewer observations of behaviors such as delayed item response time and respondent frustration, if any. Accuracy will be assessed based on observed participant behaviors, such as the ability to log in and out of the survey. Satisfaction will be assessed through subjective ratings on a modified Questionnaire for User Interaction Satisfaction (QUIS). The QUIS is a tool developed by a multi-disciplinary team of researchers in the Human-Computer Interaction Lab (HCIL) at the University of Maryland at College Park to assess users' subjective satisfaction with specific aspects of the human-computer interface. The QUIS team addressed the reliability and validity problems found in other satisfaction measures, creating a measure that is highly reliable across many types of interfaces. The QUIS was also used in usability testing for the National Teacher and Principal Survey.
The actual tasks that participants will be asked to perform will be determined by the specific interaction elements or features created for the platform (e.g., toggling the questionnaire language from English to Spanish; re-accessing the survey after taking a break). The data collection functions will be tested by different respondent groups across survey types. Users' success or difficulty in completing assigned tasks will be analyzed to determine which information or control elements are missing or insufficient for successful completion of anticipated user tasks. After completing each set of tasks, respondents will be asked to answer some questions on the ease of using the features or navigating the platform, and to offer any comments they have about the task they just completed. All comments will be recorded and analyzed to help guide the development of the NHES web survey tool.
Interviewers may also note observations of behavior during the usability testing as supplementary information. Behavioral observations may include nonverbal indicators of affect, suggesting emotional states such as frustration or engagement, as well as interactions with the task, such as ineffectual or repeated actions suggestive of misunderstanding or usability issues. Interviewers will be trained to note when the user is having difficulty with a question's content versus its presentation or navigation. Interviewers will be instructed to record behavior when the respondent navigates to selected pages or when a certain event occurs (e.g., a respondent receives logic violation text; a respondent is shown a screen with language indicating a household is ineligible). Because we do not plan to have a second observer present during interviews or testing, behavioral observations beyond these direct instructions will be made only if nonverbal indicators of affect are clearly demonstrated or noted by the interviewers.
Interviews are expected to last about 1 hour and will be conducted by trained interviewers. This submission includes the protocol that will be used to conduct the interviews and screenshots of selected screens from the questionnaires to be tested. The data collection platform and interview protocol are expected to evolve during testing. The research will be iterative, in that the survey instrument format and the protocol tasks and probes may change during the testing period in response to problems identified during the interviews. There will be three rounds of 25 interviews, with each round consisting of 5 interviews per instrument. A week-long break between rounds will allow any changes to the instrument or the protocols to be implemented. Interviews will be audio-recorded, and NCES staff may observe up to 10 interviews through GoToMeeting.
To adequately test the web survey tools on different web platforms, we will ask participants to use the device on which they would most likely complete the survey (e.g., laptop, desktop, phone, tablet). If a participant does not have, or is not willing to use, his or her own device, we will provide a laptop on site. Participants who ask to use their desktop will be asked to conduct the testing remotely by sharing their desktop screen through JoinMe and using a conference call phone line. Twelve interviews will be conducted remotely.
To adequately test the web survey tools, it is necessary to distribute the usability testing interviews across respondents who represent the primary experiences of the target population. We propose to conduct 75 interviews across several groups of respondents:
Adults ages 18 to 65 who are part of the workforce (e.g., not retired, not full-time students);
Parent/guardian of a child 5 or younger not enrolled in grades K-12;
Parent/guardian of a child ages 5-18, enrolled in grades K-12;
Parent/guardian of a child ages 5-18, homeschooled in grade equivalent to K-12;
Ineligible households (households with only adults over age 65).
Table 1 describes the distribution of the interview type and respondent characteristics.
Recruiting and Paying Respondents
To ensure that we are able to recruit participants from all desired populations, and to thank them for their time and for completing the interview, each respondent will be offered $40, as has been done in previous rounds of NHES cognitive interviews. Participants will be recruited by AIR using multiple sources, including company databases, social media/Craigslist, and personal and professional contacts. An example recruitment e-mail is included in Attachment 1. People who have participated in usability testing, cognitive studies, or focus groups in the past 6 months, as well as employees of the firms conducting the research, will be excluded from participating. The questions used to screen respondents for participation are included in Attachment 2. The usability interview protocols and the selected NHES web instrument items are included in Attachment 3. Interviews will take place in AIR's offices in the DC metro area (39 interviews) and in San Mateo, California (36 interviews). Of the 75 interviews, 12 will be conducted remotely and will vary across location, language, and survey depending on availability.
Assurance of Confidentiality
Participation is voluntary, and respondents will read a confidentiality statement and sign a consent form before interviews are conducted. The confidentiality statement and consent form are provided in Attachment 1. No personally identifiable information will be maintained after the usability testing interview analyses are completed. Primary interview data will be destroyed on or before June 30, 2016. Data recordings will be stored on AIR’s secure data servers.
The interviews will be audio-recorded. Participants will be assigned a unique identifier (ID), created solely for data file management and used to keep all participant materials together. The participant ID will not be linked to the participant's name in any way; the only identification included in the audio files will be the participant ID. The recorded files will be secured for the duration of the study, with access limited to key AIR project staff, and will be destroyed after the final report is submitted.
Estimate of Hour Burden
We expect the usability interviews to last approximately 1 hour. Screening potential participants will require 3 minutes per screening, and we anticipate it will take 12 screenings per eligible participant, or an estimated 900 screenings to yield 75 participants. This works out to 45 hours of screener burden (900 × 3 minutes) plus 75 hours of interview burden, for an estimated total of 120 hours of respondent burden for this study.
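For completeness, the short sketch below reproduces the burden arithmetic behind Table 2; the inputs are the estimates above, and nothing beyond them is assumed.

```python
# Burden arithmetic from the estimates above.
participants = 75
screenings = participants * 12         # 900 screening contacts
screener_hours = screenings * 3 / 60   # 3 minutes each -> 45.0 hours
interview_hours = participants * 1.0   # 1 hour each -> 75.0 hours
total_hours = screener_hours + interview_hours
print(screenings, screener_hours, interview_hours, total_hours)
# 900 45.0 75.0 120.0
```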
Table 2. Estimated response burden for NHES 2016 web usability tests
| Respondents | Number of Respondents | Number of Responses | Burden Hours per Respondent | Total Burden Hours |
|---|---|---|---|---|
| Recruitment Screener | 900 | 900 | 0.05 | 45 |
| Usability Interviews | 75 | 75 | 1.0 | 75 |
| Total | 900 | 975 | - | 120 |
Estimate of Cost Burden
There is no direct cost to respondents.
Project Schedule
The project schedule calls for recruitment to begin as soon as OMB approval is received. Interviewing is expected to be completed within 3 months of OMB approval. The data collection instrument will be revised after the completion of each round of interviews (3 rounds in total).
Cost to the Federal Government
The cost to the federal government for this usability testing laboratory study is approximately $90,000.