PROGRAMME
FOR THE INTERNATIONAL
ASSESSMENT OF ADULT COMPETENCIES
(PIAAC)
2010 FIELD TEST AND 2011/2012
MAIN STUDY DATA
COLLECTION
REQUEST FOR OMB CLEARANCE
Supporting Statement Part A
Prepared by:
National Center for Education Statistics
U.S. Department of Education
Washington, DC
February 16, 2010
DRAFT
Contents

Preface
A Justification
A.1 Importance of Information
A.2 Purposes and Uses of the Data
A.3 Improved Information Technology
A.4 Efforts to Identify Duplication
A.5 Minimizing Burden on Small Institutions
A.6 Frequency of Data Collection
A.7 Special Circumstances
A.8 Consultation Outside NCES
A.9 Payments or Gifts to Respondents
A.10 Assurance of Confidentiality
A.11 Sensitive Questions
A.12 Estimates of Burden
A.13 Total Annual Cost Burden
A.14 Annualized Cost to Federal Government
A.15 Program Changes or Adjustments
A.16 Plans for Tabulation and Publication
A.17 Display OMB Expiration Date
A.18 Exceptions to Certification Statement
B Collection of Information Employing Statistical Methods
B.1 Respondent Universe and Response Rates
B.1.1 Main Study Sample Sizes and Response Rates
B.1.2 Field Test Sample Sizes and Response Rates
B.2 Procedures for Collection of Information
B.2.1 Main Study Sampling Methodology
B.2.2 Field Test Sampling Methodology
B.3 Maximizing Response Rates
B.4 Tests of Procedures
B.5 Individuals Consulted on Statistical Design
Appendixes

A PIAAC Field Test Screener
B U.S. PIAAC Field Test Background Questionnaire
C U.S. PIAAC Field Test Information and Communication Technology (ICT) Module
D PIAAC Field Test Contact Letters and Brochure
E PIAAC Confidentiality Agreement and Affidavit of Non-Disclosure
Tables

1 Comparison of Recent Literacy Surveys Versus PIAAC
2 Estimates of burden for PIAAC field test
3 Estimates of burden for PIAAC main study
4 Cost for conducting the PIAAC field test and main study
5 PIAAC production schedule
6 PIAAC: Sample yield estimates for 80 PSUs and 5,000 completed cases
7 Initial sample sizes
Preface

The Programme for the International Assessment of Adult Competencies (PIAAC) is the most comprehensive international survey of adult skills ever undertaken. The survey will examine literacy in the information age and assess adult skills consistently across the 27 participating countries. It will focus on the skills deemed key to individuals' successful participation in the economy and society of the 21st century. This multi-cycle study is a collaboration among the governments of participating countries, the Organization for Economic Cooperation and Development (OECD), and a consortium of international organizations, referred to as the PIAAC Consortium, which is led by the Educational Testing Service (ETS) and includes the German Institute for International Educational Research (DIPF), the German Social Sciences Infrastructure Services’ Centre for Survey Research and Methodology (GESIS-ZUMA), the University of Maastricht, the U.S. company Westat, the International Association for the Evaluation of Educational Achievement (IEA), and the Belgian firm cApStAn.
The study will assess the following adult skills required in the information age: basic reading skills, reading literacy, numeracy, and problem solving in “technology-rich environments” (the OECD term for ‘on or with a computer’). PIAAC will also measure the ability of individuals to use computer and web applications to find, gather, and use information, and to communicate with others. The study will use a “Job Requirements Approach” to ask employed adults about the types and levels of a number of specific skills used in the workplace. These include not only the use of reading and numeracy skills on the job, but also physical skills (e.g., carrying heavy loads, manual dexterity), people skills (e.g., public speaking, negotiating, working in a team), and information technology skills (e.g., using spreadsheets, writing computer code). It will ask about the requirements of the person’s main job in terms of the intensity and frequency of the use of such skills. PIAAC also breaks new ground by being the first to use computers to administer an international assessment of this kind, though some individuals will be given a paper and pencil version of the assessment.
An important element of the value of PIAAC is its collaborative and international nature. In the United States, the U.S. Department of Education’s National Center for Education Statistics (NCES) is collaborating with the U.S. Department of Labor (DoL) on PIAAC. Staff from NCES and DoL are co-representatives of the United States on PIAAC's international governing body, and NCES has consulted extensively with DoL, particularly on the development of the job skills section of the background questionnaire. Internationally, PIAAC has been developed collaboratively by participating countries’ representatives from both Ministries or Departments of Education and Labor and by OECD staff through an extensive series of international meetings and work groups. These meetings and work groups, assisted by expert panels, researchers, and the PIAAC Consortium’s support staff, developed the framework underlying the assessment and background questionnaire, established the common standards and procedures for collecting and reporting data, and guided the development of a common, international “virtual machine” (VM) software package that will administer the assessment uniformly on laptops. All PIAAC countries must follow the common standards and procedures and use the same VM software when conducting the survey and assessment. As a result, PIAAC will be able to provide a reliable and comparable measure of skills in the adult population (ages 16-65) of participating countries. PIAAC is wholly a product of international and inter-departmental collaboration and, as such, represents compromises on the part of all participants.
NCES has contracted with Westat to work with NCES and the PIAAC Consortium on the conduct of the study. Westat’s key tasks include instrument development (a screener to enumerate and select study participants), adaptation of the international background questionnaire and assessment for the United States, instrument translation (as necessary), sample design and selection, data collection, scoring, and the production of reports detailing the results of the field test and the main study.
U.S. PIAAC field test data collection will occur between September and November 2010. The goal is to interview and assess 1,530 adults in 25 primary sampling units (PSUs) across the country. Each participant will be administered (1) an in-person background questionnaire; (2) a brief Information and Communication Technology (ICT) module to determine whether the participant can use the computer to complete the assessment; and (3) either (a) a paper and pencil version of the assessment (given to those who cannot use the computer and to a random selection of participants who can use the computer effectively) or (b) a computer-based assessment, including an orientation module.
The U.S. PIAAC main study will occur between September 2011 and March 2012. It will include a sample of 5,000 adults in 80 PSUs. The basic survey components (a screener, an in-person background questionnaire, and a computer-based or paper assessment) will remain the same. However, the instruments will be modified based on the field test experience.
In this clearance package, NCES requests OMB’s approval for: (1) recruitment and survey materials and burden time for the field test in September-November 2010; and (2) a waiver of the 60-day Federal Register notice for the September 2011-March 2012 main study data collection clearance, which will be submitted to OMB for review in March 2011, as soon as the final international and national versions of the PIAAC background questionnaire and assessment are approved by the OECD and the PIAAC Consortium. Information about the main study (including burden times, costs, and sample size) is included in this document for informational purposes only, not for OMB approval at this time. The main study collection instruments are expected to have only minor changes from those used in the field test, primarily deletions of some of the field test items.
A. Justification
A.1 Importance of Information

Over the past two decades, there has been growing interest among national governments and other stakeholders in an international assessment of adult skills to monitor how well prepared populations are for the challenges of a knowledge-based society.
In the mid-1990s, three waves of the International Adult Literacy Survey (IALS) assessed the prose, document, and quantitative literacy of adults in a total of 22 countries, and between 2002 and 2006, the Adult Literacy and Lifeskills (ALL) Survey assessed prose and document literacy, numeracy, and problem-solving in eleven countries and one state. These surveys demonstrated the feasibility of assessing internationally how well adults perform literacy, numeracy, and problem-solving tasks in real-life situations.
PIAAC builds on previous surveys and extends international adult assessment beyond the more traditional measures of literacy and numeracy. It aims to address the growing need to collect more sophisticated information that will more closely match the needs of governments to develop a high quality workforce able to solve problems and deal with complex information that is often presented electronically on computers.
PIAAC’s measurement of competencies in problem solving and of skills used in the workplace also moves the survey beyond conventional measurements of literacy. These two features are intended to help assess the extent to which adults have acquired a generic set of skills and competencies. At the same time, PIAAC looks more closely than previous surveys at the extent to which people with low literacy levels have the basic building blocks that they need to read effectively.
By directly assessing adult skills, PIAAC will enhance our understanding of the relationship of education to developing basic cognitive skills and key generic work skills. As an international cooperative venture, PIAAC provides participating countries with access to high-quality expertise in the measurement of adult skills. By sharing the costs of development and pooling resources, participating countries have access to a greater level of expertise than would otherwise be the case.
Through its involvement in the Programme for the International Assessment of Adult Competencies (PIAAC), NCES will be able to provide policy-relevant data for international comparisons of the U.S. adult population’s competencies and skills, and to help inform decision-making on the part of national, state, and local policymakers, especially those concerned with economic development and workforce training. The majority of the literacy and numeracy items proposed for the PIAAC assessment are taken directly from previous international adult literacy assessments (IALS and ALL). However, PIAAC extends beyond the previous adult assessments through the addition of the problem solving in technology-rich environments component, designed to measure the cognitive skills required in the information age.
U.S. participation in PIAAC is entirely consistent with the NCES mandate. The enabling legislation of the National Center for Education Statistics [Section 406 of the General Education Provisions Act, as amended (20 U.S.C. 1221e-1)] specifies that "The purpose of the Center [NCES] shall be to collect and analyze and disseminate statistics and other information related to education in the United States and in other nations." The Education Sciences Reform Act of 2002 (H.R. 3801, Part C, Sec. 153) also specifies that NCES
shall collect, report, analyze, and disseminate statistical data related to education in the United States and in other nations, including—(1) collecting, acquiring, compiling (where appropriate, on a State-by-State basis), and disseminating full and complete statistics (disaggregated by the population characteristics described in paragraph (3)) on the condition and progress of education, at the preschool, elementary, secondary, postsecondary, and adult levels in the United States, including data on…(D) secondary school completions, dropouts, and adult literacy and reading skills…[and] (6) acquiring and disseminating data on educational activities and student achievement…in the United States compared with foreign nations.[1]
Apart from being essential for any international perspective on adult literacy and reading skills, U.S. participation fulfills both the national and international aspects of NCES' mission.
NCES conducted several major surveys of adult competencies between 1985 and 2008.
Young Adult Literacy Assessment (YALA) – In 1985, NCES extended the reading portion of the National Assessment of Educational Progress (NAEP) to include a nationally representative sample of 3,600 young adults between the ages of 21 and 25. That study came to be known as YALA. Using a combination of reading questions and questions designed to simulate literacy activities that adults encounter in daily life, YALA surveyed the extent and nature of the literacy problem among young adults. It included a background questionnaire, which collected information on family background, respondent characteristics, educational experiences, work and community experiences, and literacy practices. It was also the first literacy study to measure three distinct areas of literacy—prose, document, and quantitative.
National Adult Literacy Survey (NALS) – NALS was the first federally sponsored study to measure the literacy skills of a nationally representative sample of U.S. adults (aged 16 and older) and to determine how these skills are distributed across major subgroups of interest. Approximately 26,000 in-person interviews and literacy assessments were administered by 400 interviewers over a 6-month period, beginning in 1992.
International Adult Literacy Survey (IALS) – IALS was a large-scale, international comparative assessment designed to identify and measure a range of skills linked to the social and economic characteristics of individuals across (or within) nations. IALS provided information on the skills and attitudes of adults aged 16-65 in 22 countries between 1994 and 1998 in a number of different areas, including prose, document, and quantitative literacy.
International Adult Literacy and Lifeskills Survey – This effort included three literacy studies: the Adult Literacy and Lifeskills Survey (ALL), the Level 1 Study,[2] and the Adult Education and Literacy Study (AEL).[3]
The ALL survey (2003) measured the literacy (prose and document) and numeracy skills of a representative sample of adults aged 16 to 65 in 11 countries. The U.S. sample included approximately 7,000 households in 60 primary sampling units. In-person interviews and literacy assessments (lasting a total of 90 minutes) were conducted with approximately 3,500 participants.
The purpose of the Level 1 Study was to examine the skills of adults with lower literacy levels. The study sample included 950 adult education students and 84 individuals from the general population. Respondents were asked to complete a background questionnaire, a set of literacy tasks, and a battery of five reading component skills. In addition, four brief language and additional cognitive measures were administered using Ordinate’s PhonePass©, an automated testing technology that measures speaking and listening skills through respondent/telephone interaction.
In the AEL survey (2002-2003), a subset of the ALL interview and assessment instruments was administered to a representative national sample of adult participants (N=6,100) in adult education programs governed by the Adult Education and Family Literacy Act (AEFLA), Title II of the Workforce Investment Act of 1998. This approach allowed a comparison of the literacy skills of adult education program participants and the general population. Assessments were conducted in Spanish and English to compare literacy outcomes in both languages for Spanish speakers. A key component of this study was a survey of 1,200 adult education programs to provide the first comprehensive information in 10 years on the characteristics of these programs.
National Assessment of Adult Literacy (NAAL) – NAAL (2003) measured the literacy skills of a nationally representative sample of U.S. adults to determine how the distribution of skills across major subgroups had changed since the 1992 National Adult Literacy Survey. The study also provided separate estimates of literacy skills for adults in six states and for inmates of federal and state prisons. Main study data collection, with more than 18,000 respondents, included the basic assessment plus a Fluency Addition.
National Assessment of Adult Literacy (NAAL) – NAAL (2008) consisted of a field test with 1,500 respondents. Innovative assessments of functional writing and vocabulary knowledge were developed and tested.
The PIAAC field test will be the next step in this series of efforts to develop adult literacy assessments, adding the new assessment domains of problem solving in technology-rich environments and reading components.
A.2 Purposes and Uses of the Data

Information gathered from the field test will be used to refine the screener, background questionnaire, assessment, and training and data collection materials and procedures before their implementation in the main study. Specifically, the purposes of the PIAAC field test are to:
Evaluate computer-assisted personal interviewing (CAPI) system applications and data processing procedures;
Test four randomly assigned sets of background questionnaire modules to determine which questions to include in the main study (to reduce respondent burden, each respondent will be administered one set of questions consisting of about a quarter of all the background questions);
Test screener and background questionnaire content, including question ordering and skip patterns;
Evaluate the reading, numeracy, and problem-solving assessment items to determine which field tested items should be selected for the main study;
Test data collection procedures, including transitions between instruments and the direct use of the automated instruments by respondents;
Evaluate the interviewer training program;
Review and analyze the field test data, including the variability and sensitivity of questions, the consistency of edit checks, and the administration time associated with each aspect of the data collection;
Test the within-household sample selection process;
Train field staff in the sampling activities;
Test the Quality Control (QC) sampling-related procedures; and
Test the flow of materials and the sample data from sample selection to the delivery of the Sample Design International File (SDIF) at the end of the data collection.
The results of the PIAAC main study will be used to:
Identify factors that are associated with adult competencies;
Extend the measurement of skills held by the working age population;
Provide a better understanding of the relationship of education to adult skills; and
Allow comparisons across countries and, as PIAAC is intended to be cyclical, over time.
Additionally, information from the PIAAC main study will be used by:
Federal policymakers and Congress to plan Federal programs aimed at improving literacy skills;
State and local officials to enhance adult education and other literacy programs;
News media to inform the public about similarities and differences between U.S. and international adult populations; and
Business and educational organizations to better understand the skills of the U.S. labor force and plan programs to address skill gaps.
A.3 Improved Information Technology

Technology is a large component of the PIAAC field test and main study. The screener and the background questionnaire will be administered using a computer-assisted personal interviewing (CAPI) system. The interviewer will read the items aloud to the respondent from the screen of a laptop computer and will record all responses on the computer. The use of a computer for these questionnaires allows skip patterns to be automated and responses to be entered directly into the database for analysis. In addition, for the screener, the computer will run a sampling algorithm to determine who, if anyone, in the household is eligible to participate in the study and to select a respondent or respondents for the background questionnaire (BQ) and the assessment.
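The screener's selection step can be pictured with a short sketch. The code below is a simplified stand-in, not the Consortium's actual algorithm (which is not specified in this document); the function name is ours, and the 6 percent two-person rate is borrowed from the sample-yield assumptions in Section A.12.

import random

def select_respondents(roster, rng=None):
    """Simplified within-household selection from a screener roster.

    roster: list of (person, age) pairs enumerated by the screener.
    Returns the person(s) sampled for the BQ and assessment; an empty
    list means no household member is aged 16-65 and the case screens out.
    """
    rng = rng or random.Random()
    eligible = [p for p in roster if 16 <= p[1] <= 65]
    if not eligible:
        return []
    selected = [rng.choice(eligible)]
    # The burden estimates assume about 6% of participating households
    # yield a second sample person (the 1.06 adjustment in Section A.12).
    rest = [p for p in eligible if p is not selected[0]]
    if rest and rng.random() < 0.06:
        selected.append(rng.choice(rest))
    return selected

print(select_respondents([("adult 1", 44), ("adult 2", 70), ("teen", 17)]))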
An Information and Communication Technology (ICT) Module will screen respondents assigned to the computer-based assessment for the skills needed to complete the assessment on the computer. The ICT Module includes a screener, a tutorial, and a “core,” which are administered sequentially. All respondents will be asked the screener questions; those who respond that they are not familiar with using a computer mouse will be offered a tutorial on using the mouse. All respondents will then be administered the ICT core, a short set of questions that determines whether a respondent is able to complete the assessment on the computer. A random selection of respondents who are able to complete the assessment on the computer will be routed to the paper and pencil assessment; the rest will be routed to the computer assessment. Respondents who decline the tutorial or do not answer enough core questions correctly will also be routed to the paper and pencil assessment.
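The routing just described reduces to a small decision function. The sketch below is illustrative only: the ICT core pass criterion and the size of the random paper-and-pencil subsample are set by the PIAAC Consortium and are not given in this document, so the threshold and fraction shown are hypothetical.

import random

def route_assessment(familiar_with_mouse, accepts_tutorial,
                     core_items_correct, pass_threshold=4,
                     paper_fraction=0.25):
    """Illustrative routing through the ICT Module (hypothetical thresholds).

    All respondents answer the ICT screener questions; those unfamiliar
    with the mouse are offered a tutorial; all are then given the ICT core.
    """
    if not familiar_with_mouse and not accepts_tutorial:
        return "paper and pencil assessment"   # declined the mouse tutorial
    if core_items_correct < pass_threshold:
        return "paper and pencil assessment"   # too few core items correct
    # Among respondents able to work on the computer, a random subset is
    # still routed to paper so the two modes can be compared.
    if random.random() < paper_fraction:
        return "paper and pencil assessment"
    return "computer-based assessment (preceded by the orientation module)"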
Most respondents in the field test and main study will complete the assessment via computer. For those who complete the assessment on paper, an automated interviewer guide will assist the interviewer in administering the assessment. This interviewer guide will contain prompts to be read aloud to the respondent.
Prior to data collection, the CAPI system will be subject to extensive usability testing of skip patterns and responses to ensure accuracy and ease of data collection.
Additionally, the Field Management System (FMS) developed for use in the 2003 ALL will be revised, as appropriate, for the PIAAC field test and main study. The FMS comprises three basic modules: the Supervisor Management System, the Interviewer Management System, and the Home Office Management System. The Supervisor Management System will be used to manage data collection and case assignments and produce productivity reports. The Interviewer Management System allows interviewers to administer the automated instruments, manage case status, transmit data, and maintain information on the cases. The Home Office Management System supports the packaging and shipping of cases to the field, shipping of booklets for scoring report production, processing of cases, receipt control, and the receipt and processing of automated data, and is integrated with back-end processes for editing and analysis.
It is estimated that approximately 97% of all responses will be submitted electronically (see the List of PIAAC Instruments.doc attached with this package submission for further details).
A.4 Efforts to Identify Duplication

None of the previous literacy assessments conducted in the United States, including ALL and NAAL, has used computer-based assessments to measure adult skills. Moreover, PIAAC will be the first study in the United States to incorporate a technology component among the skills being measured. The international nature of the study will allow comparisons of the prevalence of these skills in the United States versus other PIAAC participating countries.
A.5 Minimizing Burden on Small Institutions

The PIAAC field test and main study will collect information from the 16–65-year-old population through households only. No business organizations of any size will be contacted to participate in the PIAAC data collection.
A.6 Frequency of Data Collection

The PIAAC field test and main study are new data collection efforts; however, PIAAC has been envisaged as a recurring survey on a roughly 10-year cycle. At this point, the periodicity of the study has not been officially set.
A.7 Special Circumstances

The National Center for Education Statistics is not applying for any exceptions to the guidelines in 5 CFR 1320.
A.8 Consultation Outside NCES

In addition to the participating countries’ Education and Labor staff, the design of the PIAAC field test has involved the participation of the staff of the OECD, Educational Testing Service, the German Institute for International Educational Research, the German Social Sciences Infrastructure Services’ Centre for Survey Research and Methodology, the University of Maastricht, Westat, the IEA, and cApStAn. NCES staff were assisted by the following people outside of NCES and Westat: Jaleh Soroui, Jing Chen, and Lauren Pisani (all of the American Institutes for Research).
The 60-day notice for this clearance request submission was published in the Federal Register vol. 74, page 68597, on December 28, 2009. One public comment was received:
Public comment
Dear Docket Manager,
I came across your posting of EDICS Collection Package: (04194)1850-NEW-v.1, titled “Programme For The International Assessment Of Adult Competencies (PIAAC) 2010 Field Test And 2011/2012 Main Study Data Collection (KI)” on the US Department of Education website. The purpose of this email is to provide a comment pertaining to the expertise of this vendor in providing mobile data collection technology to enable more efficient, accurate and quick collection of that data via electronic means using Tablet PCs and Digital Pens. Such methods have been employed by institutions like The University of North Carolina-Chapel Hill for the Study of Latinos, a 16,000 subject epidemiological survey to collect more accurate data, efficiently and easily. In addition, agencies such as the USDA have utilized such technologies to improve the efficiency of their survey and inspection processes. Case studies of each of these along with the vendor’s capability statement is attached and more information is available at: www.fieldinspectiondata.com
In conclusion, this vendor would like to propose that the program officer consider a mobile electronic data collection component to this project to allow for quicker and more accurate data collection, which will result in lower costs and higher quality results. Thank you for your consideration,
Best regards,
Gautham Pandiyan
Mi-Co
4601 Creekstone Drive #130
Durham, NC 27703
p: 919-485-4819 extn:1973
f: 919-485-0621
c: 919-259-4124
NCES Response
Mr. Pandiyan,
The program officer for (04194)1850-NEW-v.1 : PIAAC Data Collection Project would like to thank you for bringing the technology available to our attention. The current data collection software was developed by the international consortium implementing the project for OECD (Organization for Economic Cooperation and Development) and all countries must use that software and concomitant hardware specifications to ensure comparability in the study. The United States, as a participating country, is required to follow the internationally agreed upon software and hardware specifications.
Sincerely,
Kashka Kubzdela
NCES, U.S. Dept. of Education
A.9 Payments or Gifts to Respondents

As part of the planned efforts to meet PIAAC response rate goals, NCES proposes giving field test and main study respondents a payment, as was done in ALL and NAAL, to compensate them for the time spent answering the background questionnaire items and taking the assessment. NCES proposes to provide such a payment as an incentive because (a) in recent years, in-person household-based surveys have seen significant declines in response rates; (b) research indicates that incentives play an important role in gaining respondent cooperation in such household surveys; and (c) PIAAC places a greater response burden on respondents than ALL or NAAL did and, hence, runs a greater risk of respondents breaking off the questionnaire or assessment before both are completed.
(a) Many in-person household-based surveys have experienced decreasing response rates in recent years. For example, the National Health Interview Survey (NHIS), a one-hour interview, experienced a response rate decline of 12 percent from 1997 to 2007. The response rate for the National Survey on Drug Use and Health (NSDUH) decreased 5 percent between 2002 and 2007, and Round 1 of the Medical Expenditure Panel Survey (MEPS), which consists of a two-hour interview, sustained a response rate decline of 5 percent from 2001 to 2007.
In addition, the National Household Education Surveys (NHES) Program, which has collected information on important educational issues through telephone surveys of households in the United States since 1991, had response rates greater than 80 percent in 1991 and 1993, but in 1995 and 1996, they fell to 73 and 70 percent, respectively; in 2001 and 2003, they declined to 68 and 62 percent, respectively; and in 2007, they declined to 53 percent.
(b) Research indicates that incentives play an important role in gaining respondent cooperation, especially in surveys that ask respondents to give several hours of their time and undertake a complex and often difficult assessment. A meta-analysis of 39 studies experimenting with incentives in telephone and in-person surveys from 1970 to 1997 (Singer, Van Hoewyk, Gebler, Raghunathan, and McGonagle, 1999) found that incentives have a significant positive effect on response rates for both types of surveys. More specifically, each dollar of incentive paid resulted in approximately a third of a percentage point difference in response rate between the no-incentive and incentive conditions. Similar results were found for studies that had a low-incentive condition and a high-incentive condition. The effects found by the authors were linear, and they therefore concluded that “within the limits of incentives and response rates occurring in these experiments, more money results in higher response rates.”
Specific to literacy studies, an experiment was conducted to ascertain the effect of monetary incentives on response rates, among other variables (Mohadjer, Berlin, Rieger, Waksberg, Rock, Yamamoto, Kirsch, and Kolstad, 1997). The study included experiments with incentives in the National Adult Literacy Survey field test and main study. In both experiments, incentives produced a significant increase in response rates, most markedly among groups with low educational attainment and minority populations, which are usually underrepresented in such studies. This effect improves the representation of these groups in the sample and therefore provides a better representation of the study’s target population.
(c) Several factors will make the respondent burden in PIAAC greater than that of ALL and NAAL. First, the PIAAC interview will last 30 minutes longer on average than the NAAL and ALL interviews. Second, all PIAAC respondents will take the Information and Communication Technology Module, which will (1) inquire about basic computer skills, (2) attempt to teach these skills, if needed, via a self-administered tutorial, and (3) test these skills immediately after. Respondents who do not possess basic computer skills might find the ICT Module taxing, frustrating, or intimidating. Third, respondents who take the computer-based Direct Assessment, which will be the majority, will first have to take an Orientation Module to learn to navigate the computer-based Direct Assessment. Even respondents who are very accustomed to computers may find the process of completing the assessment tasks on a computer complex and unfamiliar. Fourth, respondents who complete the paper Direct Assessment will complete two booklets: a literacy or numeracy booklet and a Reading Components booklet. Although not as cognitively demanding as the computer-based Direct Assessment, the Reading Components booklet will certainly add to the overall length of the interview. In summary, both the length of the survey and the cognitive effort required of the respondent by the mode of assessment administration warrant a higher respondent incentive than that offered in ALL and NAAL.
Table 1 compares the tasks and burden of two recent literacy surveys, ALL and NAAL, with those of PIAAC.
Table 1. Comparison of Recent Literacy Surveys Versus PIAAC

Survey | Year | Tasks | Mode | Burden (time) | Incentive
ALL | 2003 | Screener; Background Questionnaire; Literacy/Numeracy Assessment (paper) | In-person CAPI Screener/BQ; paper assessment | 1.5 hours | $30
NAAL | 2003 | Screener; Background Questionnaire; Literacy/Numeracy Assessment (paper) | In-person CAPI Screener/BQ; paper assessment | 1.5 hours | $30
PIAAC | 2010-2011 | Screener; Background Questionnaire, including Job Skills (JRA); ICT Module; Literacy/Numeracy/Problem Solving Assessment (computer/paper); Reading Components Assessment (paper) | In-person CAPI Screener/BQ; self-administered computer-based ICT tutorial and core; self-administered computer-based or paper assessment | 2 hours | $35 vs. $50 experiment
In 2003, OMB approved a $30 incentive for the 1.5-hour ALL interview, or $10 for each half hour of the respondent’s time. The administration time for the PIAAC field test and main study interview is estimated to average two hours. If, following ALL, $10 were paid for each half hour of the PIAAC interview, the incentive would be $40, which is $46.04 when adjusted for inflation to 2010 (field test) and $47.44 when adjusted to 2011-2012 (main study). As part of the PIAAC 2010 field test, NCES proposes to examine the impact on response rates of compensating study respondents, who answer the background questionnaire and complete the ICT module and the assessment, at the $35 versus the $50 level. Sample members will be randomly assigned to the $35 or $50 amount at the segment level (clusters within PSUs). Assignment at the housing unit (HU) level will not be employed, to avoid offering different incentive amounts to respondents in the same neighborhoods and in households with two sampled adults. Power analysis indicates that we should be able to detect a five percentage point difference in response rates between the $35 and $50 incentive levels (at alpha = 0.1).
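As a rough check on that power claim, a normal-approximation calculation is sketched below. The per-arm count (765 expected assessment completers per incentive arm, from Section A.13) and the 80 percent baseline rate are our assumptions, not figures stated in the power analysis, and the sketch ignores the design effect introduced by segment-level (clustered) assignment, so it is illustrative rather than definitive.

from math import sqrt
from statistics import NormalDist

def min_detectable_diff(n_per_group, p_base, alpha=0.10, power=0.80):
    """Smallest difference between two response rates detectable with the
    given power, using a two-sided normal approximation and assuming
    simple random assignment (no clustering)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value
    z_beta = NormalDist().inv_cdf(power)            # power quantile
    # Standard error of the difference of two independent proportions,
    # both assumed near p_base.
    se = sqrt(2 * p_base * (1 - p_base) / n_per_group)
    return (z_alpha + z_beta) * se

# Assumed: ~765 completers per arm and an 80% baseline response rate.
print(round(min_detectable_diff(765, 0.80), 3))  # ~0.051, i.e., ~5 points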
A.10 Assurance of Confidentiality

The PIAAC field test and main study will conform to all relevant federal regulations—specifically, the Privacy Act of 1974 (5 U.S.C. 552a), the Education Sciences Reform Act of 2002 (20 U.S.C. 9573), the Family Educational Rights and Privacy Act (20 U.S.C. 1232g), and the NCES Statistical Standards and Policies. The plan for maintaining confidentiality includes (a) having all personnel sign Westat and PIAAC confidentiality agreements and (b) obtaining notarized NCES nondisclosure affidavits from all personnel who will have access to individual identifiers (see Appendix E). The protocols for satisfying the confidentiality plans for the PIAAC field test have been arranged with the Institute of Education Sciences (IES) Disclosure Review Board (DRB). However, because DRB policy requires applying additional statistical disclosure control procedures to the PIAAC main study data before delivering the data to the PIAAC Consortium, NCES will work closely with the DRB and the Consortium to map out the details of the disclosure analysis plan for masking the main study data, which will occur at the end of data collection. NCES will need DRB approval of the disclosure analysis report prior to any data release outside the United States.
The physical and/or electronic transfer of PII (particularly first names and addresses) will be limited to the extent necessary to perform project requirements. This limitation covers both internal transfers (e.g., transfers of information between agents of Westat, including subcontractors and/or field workers) and external transfers (e.g., transfers between Westat and NCES, or between Westat and another government agency or private entity assisting in data collection). Note that Westat will not transfer PIAAC files of any type (whether or not they contain PII or direct identifiers) to any external entity without the express advance approval of NCES.
For PIAAC, the only transfer of PII outside of Westat facilities is the automated transmission of case reassignments and completed cases between Westat and its field interviewing staff. The transmission of this information is secure, using approved methods of encryption. Note that all field interviewer laptops are encrypted with full-disk encryption in compliance with FIPS 140-2, to preclude disclosure of PII should a laptop be lost or stolen.
In accordance with NCES Data Confidentiality and Security Requirements, Westat will transfer data to Pearson, the scoring subcontractor, for scoring in a manner that protects this information from disclosure or loss. These hard-copy data will not include any PII. Specifically, for electronic files, direct identifiers will not be included (a Westat-assigned study identifier will be used to uniquely identify cases), and these files will be encrypted according to NCES standards (128 bit or higher SSL). If these are transferred on media, such as CD or DVD, they will be encrypted in compliance with FIPS 140-2.
All PIAAC data files constructed to conduct the study will be maintained in secure network areas at Westat. These files will be subject to Westat’s regularly scheduled backup process. Backups are stored in secure facilities on site as well as off site. These data are stored and maintained in secure network and database locations where access is limited to those Westat staff who are specifically authorized access. Access is only granted once a staff member is assigned to the project and has completed the NCES Affidavit of Non-disclosure. Identifiers are maintained in files required to conduct survey operations that are physically separate from other research data and that are accessible only to sworn agency and contractor personnel. In consultation with NCES, these data files will be destroyed at the end of the project or delivered to NCES.
The plan also includes (1) training personnel regarding the meaning of confidentiality, particularly as it relates to handling requests for information and providing assurance to respondents about the protection of their responses; (2) controlling and protecting access to computer files under the control of a single database manager; (3) building safeguards into the status monitoring and receipt control systems; and (4) having a secured, operator-manned in-house computing facility.
All information identifying the individual respondents will be kept confidential, in compliance with the law (ESRA, 20 U.S.C. 9573), which states that:
(c)(2) “No person may
(i) use any individually identifiable information furnished under the provisions of this section for any purpose other than a research, statistics, or evaluation purpose under this subchapter;
(ii) make any publication whereby the data furnished by any particular person under this subchapter can be identified; or
(iii) permit anyone other than the individuals authorized by the Director to examine the individual reports.”
The laws pertaining to the collection and use of personally identifiable information are clearly communicated in correspondence with participants, per NCES requirements. A study introductory letter and brochure will be sent to households describing the voluntary nature of this survey. Study materials sent to households will describe the study and convey the extent to which respondents and their responses will be kept confidential (see supporting materials in Appendix D of the accompanying documentation). Materials will carry a statement addressing confidentiality as follows:
The National Center for Education Statistics is authorized to conduct this study under the Education Sciences Reform Act of 2002 (Public Law 107-279, Section 153). Under that law, the data provided by you may be used only for statistical purposes and may not be disclosed, or used, in identifiable form for any other purpose except as required by law (Public Law 107-279, Section 183). Individuals are never identified in any reports. All reported statistics refer to the United States as a whole or to national subgroups.
Westat will deliver data files, accompanying software, and documentation to NCES at the end of the field test and main study. Neither names nor addresses will be included on any data file.
A.11 Sensitive Questions

The screener and background questionnaire for the PIAAC field test and the main study will include questions about race/ethnicity and household income. Such questions are standard practice in survey research, and their administration will conform to all existing laws regarding sensitive information.
A.12 Estimates of Burden

For the PIAAC field test and main study, the estimated burden is two hours per respondent: the screener (5 minutes), the background questionnaire (45 minutes), the ICT Module and orientation module (10 minutes), and the assessment (60 minutes).
PIAAC participants will be informed of this burden through the following statement, which will be printed on a hand card given to them when they agree to participate in PIAAC, will appear on the first screen of the computer program (for those who receive the computer-based assessment), and will appear on the cover of the paper booklets (for those who receive the paper and pencil version of the assessment):
According to the Paperwork Reduction Act of 1995, an agency is not allowed to collect information unless it displays a valid OMB control number and no persons are required to respond to a collection of information unless such collection displays a valid OMB control number. The valid OMB control number for this information collection is 1850-XXXX. The time required to complete this information collection is estimated to average 2 hours per response, including the time to review instructions and complete the information collection. If you have any comments concerning the accuracy of the time estimate(s) or suggestions for improving this form, please write to: U.S. Department of Education, Washington, D.C. 20202-4537. If you have any comments concerning the accuracy of the time estimate(s) or suggestions for improving this research protocol, please write to: U.S. Department of Education, Washington, D.C. 20202-4537.
The expected total number of assessment respondents for the field test is approximately 1,530,[4] with a total burden time of 1,858 hours and a conservative response rate estimate of 44 percent[5] on account of the short three-month data collection period (see Table 2). Note that the purpose of this field test is not to pre-test cooperation/response rates; this has been done successfully in previous adult literacy surveys. In the three months of field testing, Westat expects to be able to incorporate most of the main study approaches to achieving a high response rate.
The sample sizes in Table 2 take into account both eligible nonresponse and ineligible units; the overall response rates are calculated using only the eligible sample units.[6] In the first row, 3,842 is the number of expected occupied households (HHs), computed as the total number of sampled dwelling units (DUs) multiplied by the occupancy rate (4,478 * .858). The number 2,497 is the number of respondents who go through the screener, computed as the number of expected occupied HHs multiplied by the screener response rate (3,842 * .65). In the second row, the number 2,250 is the number of sampled persons, computed as the product of (a) the number of completed screeners (2,497), (b) the proportion of HHs having at least one eligible person aged 16-65 (.85), and (c) an adjustment for the proportion of HHs with two sample persons selected (1.06). The 1,800 in this row is the number of sampled persons who complete the background questionnaire (BQ), the product of 2,250 and the BQ response rate (.80). In the fourth row, 1,530 is the number of expected completed assessments, the product of 1,800 (the sample size) and the assessment response rate (.85).
Table 2. Estimates of burden for PIAAC field test

Data collection instrument | Sample size | Expected response rate | Number of respondents | Number of responses | Burden per respondent (minutes) | Total burden hours
PIAAC Field Test Screener | 3,842 (households) | 65% | 2,497 (households) | 2,497 | 5 | 208
U.S. PIAAC Field Test Background Questionnaire | 2,250 | 80% | 1,800 | 1,800 | 45 | 1,350
U.S. PIAAC Field Test Information and Communication Technology (ICT) Module and Orientation Module | 1,800 | 100% | 1,800 | 1,800 | 10 | 300
U.S. PIAAC Assessment (Literacy, Numeracy, Problem-solving in a Technology-rich Environment, and/or Reading Components)[7] | 1,800 | 85% | 1,530 | 1,530 | 60 | 1,530
Total | NA | NA | NA | 6,097 | NA | 1,858[7]
NOTE: See Table 7 in Part B for details on the sample yield estimates (e.g., only 85% of households that take the FT Screener are expected to be eligible to take the FT Background Questionnaire, but 6% of those households are expected to have two eligible adults).
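The chain of multiplications behind Table 2 can be checked directly. The short Python sketch below reimplements the yield arithmetic from the preceding paragraph; the function name is ours, and the rates are the field test assumptions quoted in the text and in footnote 5.

def expected_yields(dwelling_units, occupancy_rate, screener_rr,
                    pct_eligible, two_person_adj, bq_rr, assessment_rr):
    """Sample-yield arithmetic described in Section A.12."""
    occupied = dwelling_units * occupancy_rate          # occupied households
    screened = occupied * screener_rr                   # completed screeners
    sampled = screened * pct_eligible * two_person_adj  # sample persons
    bq_complete = sampled * bq_rr                       # completed BQs
    assessed = bq_complete * assessment_rr              # completed assessments
    overall_rr = screener_rr * bq_rr * assessment_rr    # footnote 6
    return occupied, screened, sampled, bq_complete, assessed, overall_rr

# Field test (Table 2): 4,478 sampled dwelling units, 85.8% occupancy,
# and 65% / 80% / 85% screener / BQ / assessment response rates.
*counts, overall = expected_yields(4478, .858, .65, .85, 1.06, .80, .85)
print([round(c) for c in counts])   # [3842, 2497, 2250, 1800, 1530]
print(round(overall, 2))            # overall response rate: 0.44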
Table 3 presents the estimates of burden for the PIAAC Main Study. The intended total number of respondents for the main study is 5,000, with a total burden time of 6,032 hours and an expected overall response rate of about 65%. In the first row, 8,535 is the number of expected occupied HHs, computed as the total number of sampled DUs multiplied by the occupancy rate (9,947 * .858). The number 7,681 is the number of respondents that go through the screener, which is computed as the number of expected occupied HHs multiplied by the screener response rate (8,535*.9). In the second row, 6,921 is the number of sampled persons, computed as the product of (a) the number of completed screeners (7,681), (b) the proportion of HHs having at least one eligible person 16-65 (.85), and (c) an adjustment for the proportion of HHs with 2 sample persons selected (1.06). The 5,883 in this row is the number of sampled persons who completed the BQ, which is the product of 6,921 * the BQ response rate (.85). In the fourth row, 5,000 is the number of expected assessments to be completed, the product of 5,883 (the sample size) and the assessment response rate (.85).
Table 3. Estimates of burden for PIAAC main study

Data collection instrument | Sample size | Expected response rate | Number of respondents | Number of responses | Burden per respondent (minutes) | Total burden hours
PIAAC Screener | 8,535 (households) | 90% | 7,681 (households) | 7,681 | 5 | 640
U.S. PIAAC Background Questionnaire | 6,921 | 85% | 5,883 | 5,883 | 45 | 4,412
U.S. PIAAC Information and Communication Technology (ICT) Module and Orientation Module | 5,883 | 100% | 5,883 | 5,883 | 10 | 980
U.S. PIAAC Assessment (Literacy, Numeracy, Problem-solving in a Technology-rich Environment, and/or Reading Components)[8] | 5,883 | 85% | 5,000 | 5,000 | 60 | 5,000
Total | NA | NA | NA | 19,445 | NA | 6,032[8]
NOTE: See Table 6 in Part B for details on the sample yield estimates (e.g., only 85% of households that take the Screener are expected to be eligible to take the Background Questionnaire, but 6% of those households are expected to have two eligible adults).
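The same arithmetic reproduces the Table 3 figures; a minimal check, using the main study rates quoted above:

# Main study (Table 3): 9,947 dwelling units, 85.8% occupancy,
# and 90% / 85% / 85% screener / BQ / assessment response rates.
occupied = 9947 * .858            # ~8,535 occupied households
screened = occupied * .90         # ~7,681 completed screeners
sampled = screened * .85 * 1.06   # ~6,921 sample persons
bq_done = sampled * .85           # ~5,883 completed BQs
assessed = bq_done * .85          # ~5,000 completed assessments
print(round(.90 * .85 * .85, 2))  # overall response rate: 0.65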
A.13 Total Annual Cost Burden

The estimated dollar cost of incentives for field test participants is $65,025 (based on 765 assessment completers receiving $35 and the remaining 765 completers receiving $50). The total cost for main study participants who complete the assessment is $250,000.
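A quick check of these figures; note that the main study incentive amount has not been finalized, and the $50-per-completer reading of the $250,000 total is our inference, not a stated decision.

# Field test: 1,530 expected assessment completers, split evenly
# between the $35 and $50 experimental incentive arms.
field_test_incentives = 765 * 35 + 765 * 50    # $65,025
# Main study: 5,000 expected completers; $250,000 corresponds to a
# $50 payment each (an inference; the amount awaits the experiment).
main_study_incentives = 5000 * 50              # $250,000
print(field_test_incentives, main_study_incentives)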
Other than the burden associated with completing these pre-assessment activities and questionnaires (estimated above in Section A.12), the study imposes no additional cost on respondents and involves no recordkeeping requirement.
A.14 Annualized Cost to Federal Government

The total cost to the federal government for conducting the PIAAC field test is $4,106,773. The total cost for conducting the PIAAC field test and main study together is estimated to be $15,922,930, spread over a six-year period. This estimate includes all direct and indirect costs of preparing for and conducting the field test ($4,106,773) and the main study ($11,816,157). The components of these costs are presented in Table 4.
A.15 Program Changes or Adjustments

The PIAAC field test and main study are new data collection efforts.
A.16 Plans for Tabulation and Publication

NCES will produce a report covering the field test and main study design, sampling, data collection, weighting, and missing-value imputation activities. There are currently no plans to conduct statistical analyses of the field test dataset, but a full analysis of the main study data will be conducted.
Electronic versions of each publication will be made available on the NCES website. The expected data collection dates and a tentative reporting schedule are shown in Table 5 below.
A.17 Display OMB Expiration Date

The OMB expiration date will be displayed on all data collection materials.
A.18 Exceptions to Certification Statement

No exceptions to the certifications are requested.
Table 5. PIAAC production schedule

Date | Activity
December 18, 2009 | Submit to NCES/RIMS the PIAAC clearance package for the field test and a 60-day notice waiver for main study collection instruments clearance.
February-May 2010 | Prepare for field test data collection (prepare maps/dwelling unit locating materials).
March-July 2010 | Recruit field test data collectors.
April-August 2010 | Finalize data collection manuals, forms, systems, laptops, and interview/assessment materials for the field test.
September-November 2010* | Collect field test data (three months).
November 2010 | Receive Preliminary Field Test Report from Westat.
January 2011 | NCES receives field test raw data from Westat for delivery to the international consortium.
January 2011 | Receive Final Field Test Report from Westat.
January 2011 | Receive Field Test Item Analysis Report from international consortium.
February 2011** | Receive final international and national versions of PIAAC main study instruments.
March 2011 | Submit main study documents to OMB for clearance.
June-July 2011 | Finalize data collection manuals, forms, systems, laptops, and interview/assessment materials for the main study.
August 2011-March 2012 | Collect main study data.
July 2012 | NCES receives main study raw data from Westat for delivery to the international consortium.
April 2013 | Receive preliminary main study country analysis results from international consortium.
June-December 2013 | Produce main study General Audience Report, Survey Report, and Technical Report for the United States.
* The U.S. will conduct the field test later than other countries due to the 2010 Census moratorium.
** The main study schedule, from receipt of the national instrument versions through OMB submission and approval to the onset of data collection, is very compressed, as it is driven by the PIAAC Consortium’s current schedule. This schedule may be subject to change following the field test.
[1] See http://www.ed.gov/policy/rschstat/leg/PL107-279.pdf for the full Education Sciences Reform Act.
[2] The Level 1 Study is also known as the Tipping Points and Five Classes of Adult Literacy Learners study.
[3] AEL is also known as the Adult Education Program Survey (AEPS).
[4] The Consortium assessment target sample size of 1,500 was inflated by 2 percent (from 1,500 to 1,530) so that the minimum sample size of 1,300 passing the ICT Core is achieved under the assumption that 85 percent will pass the ICT Core.
[5] The expected response rates and eligibility rates are conservatively estimated in order to calculate the field test sample size that is operationally essential to have a sufficient number of cases available to work in the field, given that there will be no time to release new sample cases. The response rates are not related to the quality of the non-probability based field test sample. The overall response rate is the product of the individual response rates at each stage of data collection (screener response rate * BQ response rate * assessment response rate). See Part B, Table 7 for response rates at each stage of data collection.
[6] The overall response rate, as the product of the individual response rates for each stage of data collection, reflects the proportion of the eligible population covered by the participating persons. This is in contrast to a completion rate, which is typically computed as the total expected completed assessments divided by the total sample (which includes ineligibles).
[7] Assessments are exempt from Paperwork Reduction Act reporting and are therefore not included in the burden calculation for this collection.
[8] Assessments are exempt from Paperwork Reduction Act reporting and are therefore not included in the burden calculation for this collection.