PROGRAM FOR THE INTERNATIONAL
ASSESSMENT OF ADULT COMPETENCIES (PIAAC)
2011-2012
MAIN STUDY DATA COLLECTION





REQUEST FOR OMB CLEARANCE

OMB# 1850-0870 v.2


Supporting Statement Part A








Prepared by:


National Center for Education Statistics

U.S. Department of Education

Washington, DC







March 1, 2011


Table of Contents


Preface

A  Justification

A.1   Importance of Information
A.2   Purposes and Uses of the Data
A.3   Improved Information Technology
A.4   Efforts to Identify Duplication
A.5   Minimizing Burden on Small Institutions
A.6   Frequency of Data Collection
A.7   Special Circumstances
A.8   Consultation Outside NCES
A.9   Payments or Incentives to Respondents
A.10  Assurance of Confidentiality
A.11  Sensitive Questions
A.12  Estimates of Burden
A.13  Total Annual Cost Burden
A.14  Annualized Cost to Federal Government
A.15  Program Changes or Adjustments
A.16  Plans for Tabulation and Publication
A.17  Display OMB Expiration Date
A.18  Exceptions to Certification Statement

B  Collection of Information Employing Statistical Methods

B.1   Respondent Universe and Response Rates
B.2   Procedures for Collection of Information
B.3   Maximizing Response Rates
B.4   Tests of Procedures
B.5   Individuals Consulted on Statistical Design


Appendixes

A  PIAAC Main Study Screener
   [Unchanged since approval in May 2010 (OMB# 1850-0870 v.1)]
B  U.S. PIAAC Main Study Background Questionnaire
C  U.S. PIAAC Main Study Core Task
   [Unchanged since approval in May 2010 (OMB# 1850-0870 v.1)]
D  PIAAC Main Study Contact Letters and Brochure
   [Unchanged since approval in May 2010 (OMB# 1850-0870 v.1)]
E  PIAAC Confidentiality Agreement and Affidavit of Non-Disclosure
F  List of Amendments Made to the Field Test Background Questionnaire
G  Design and Analysis of Field Test Incentive Experiment


Tables

1  Comparison of Recent Literacy Surveys Versus PIAAC
2  Estimates of burden for PIAAC main study
3  Cost for conducting the PIAAC field test and main study
4  PIAAC main study production schedule
5  PIAAC main study: Sample yield estimates for 80 PSUs and 5,000 completed cases

Preface


The Office of Management and Budget (OMB) approved in May 2010 (OMB# 1850-0870 v.1) the Program for the International Assessment of Adult Competencies (PIAAC) 2010 Field Test and a waiver of the 60-day Federal Register notice for the clearance of the PIAAC 2011-2012 Main Study Data Collection. This submission requests OMB's approval of the final versions of the 2011-2012 Main Study non-cognitive data collection instruments. A description of the modifications made to the previously approved field test instruments is attached (see appendix F). The Supporting Statement Parts A and B are the same as those approved in May 2010, with the following changes: the field test incentive experiment results have been added to section A.9 and to Part B; the requested respondent burden in section A.12 now reflects the main study rather than the field test; and the timeline in section A.16 and the description of the sample in section B.1 have been updated to reflect minor modifications since the field test.


The U.S. PIAAC field test data collection occurred between September and November 2010, with 1,510 adults interviewed and assessed in 22 primary sampling units (PSUs) across the country. Each participant was administered (1) an in-person background questionnaire, (2) a brief Information and Communication Technology (ICT) module to determine whether the participant could use a computer to complete the assessment, and (3) either (a) a paper-and-pencil version of the assessment or (b) a computer-based assessment, including an orientation module. The U.S. PIAAC main study will occur between August 2011 and March 2012. It will include a sample of 5,000 adults in 80 PSUs. The basic survey components (a screener, an in-person background questionnaire, and a computer-based or paper assessment) remain the same as in the field test. However, the ICT module used in the field test will no longer be used (as explained in A.3), and the instruments have been modified somewhat based on the field test experience (as explained in appendix F).


The following material in this Preface is provided as background and context. (Note that the following material is the same as appeared in the previous request for OMB approval.)


The Program for the International Assessment of Adult Competencies (PIAAC) is the most comprehensive international survey of adult skills ever undertaken. The survey will examine literacy in the information age and assess adult skills consistently across the 26 participating countries. It will focus on what are deemed key skills for individuals to participate successfully in the economy and society of the 21st century. This multi-cycle study is a collaboration between the governments of participating countries, the Organization for Economic Cooperation and Development (OECD), and a consortium of various international organizations, referred to as the PIAAC Consortium, led by the Educational Testing Service (ETS) and including the German Institute for International Educational Research (DIPF), the German Social Sciences Infrastructure Services' Centre for Survey Research and Methodology (GESIS-ZUMA), the University of Maastricht, the U.S. company Westat, the International Association for the Evaluation of Educational Achievement (IEA), and the Belgian firm cApStAn.


The study will assess the following adult skills required in the information age: basic reading skills, reading literacy, numeracy, and problem solving in “technology-rich environments” (the OECD term for ‘on or with a computer’). PIAAC will also measure the ability of individuals to use computer and web applications to find, gather, and use information, and to communicate with others. The study will use a “Job Requirements Approach” to ask employed adults about the types and levels of a number of specific skills used in the workplace. These include not only the use of reading and numeracy skills on the job, but also physical skills (e.g., carrying heavy loads, manual dexterity), people skills (e.g., public speaking, negotiating, working in a team), and information technology skills (e.g., using spreadsheets, writing computer code). It will ask about the requirements of the person’s main job in terms of the intensity and frequency of the use of such skills. PIAAC also breaks new ground by being the first to use computers to administer an international assessment of this kind, though some individuals will be given a paper and pencil version of the assessment.


An important element of the value of PIAAC is its collaborative and international nature. In the United States, the U.S. Department of Education's National Center for Education Statistics (NCES) is collaborating with the U.S. Department of Labor (DoL) on PIAAC. Staff from NCES and DoL are co-representatives of the United States on PIAAC's international governing body, and NCES has consulted extensively with DoL, particularly on the development of the job skills section of the background questionnaire. Internationally, PIAAC has been developed collaboratively by participating countries' representatives from both Ministries or Departments of Education and Labor and by OECD staff through an extensive series of international meetings and work groups. These international meetings and work groups, assisted by expert panels, researchers, and the PIAAC Consortium's support staff, have developed the framework underlying the assessment and background questionnaire, established the common standards and procedures for collecting and reporting data, and guided the development of a common, international "virtual machine" (VM) software that will administer the assessment uniformly on laptops. All PIAAC countries must follow the common standards and procedures and use the same VM software when conducting the survey and assessment. As a result, PIAAC will be able to provide a reliable and comparable measure of skills in the adult population (age 16-65) of participating countries. PIAAC is wholly a product of international and inter-department collaboration, and as such represents compromises on the part of all participants.


Currently, the National Center for Education Statistics (NCES) has contracted with Westat to work with NCES and the PIAAC Consortium on the conduct of the study. Westat’s key tasks include instrument development (a screener to enumerate and select study participants), adaptation of the international background questionnaire and assessment for the United States, instrument translation (as necessary), sample design and selection, data collection, scoring, and the production of reports detailing the results of the field test and the main study.



A Justification

Over the past two decades, there has been growing interest by national governments and other stakeholders in an international assessment of adult skills to monitor how well prepared populations are for the challenges of a knowledge-based society.


In the mid-1990s, three waves of the International Adult Literacy Survey (IALS) assessed the prose, document, and quantitative literacy of adults in a total of 22 countries, and between 2002 and 2006, the Adult Literacy and Lifeskills (ALL) Survey assessed prose and document literacy, numeracy, and problem-solving in eleven countries and one state. These surveys demonstrated the feasibility of assessing internationally how well adults perform literacy, numeracy, and problem-solving tasks in real-life situations.


PIAAC builds on previous surveys and extends international adult assessment beyond the more traditional measures of literacy and numeracy. It aims to address the growing need to collect more sophisticated information that will more closely match the needs of governments to develop a high quality workforce able to solve problems and deal with complex information that is often presented electronically on computers.


PIAAC’s measurement of competencies in problem solving and of skills used in the workplace also moves the survey beyond conventional measurements of literacy. These two features are intended to help assess the extent to which adults have acquired a generic set of skills and competencies. At the same time, PIAAC looks more closely than previous surveys at the extent to which people with low literacy levels have the basic building blocks that they need to read effectively.


By directly assessing adult skills, PIAAC will enhance our understanding of the relationship of education to developing basic cognitive skills and key generic work skills. As an international cooperative venture, PIAAC provides participating countries with access to high-quality expertise in the measurement of adult skills. By sharing the costs of development and pooling resources, participating countries have access to a greater level of expertise than would otherwise be the case.



A.1 Importance of Information

Through its involvement in the Program for the International Assessment of Adult Competencies (PIAAC), the National Center for Education Statistics (NCES) will be able to provide policy-relevant data for international comparisons of the U.S. adult population’s competencies and skills, and help inform decision-making on the part of national, state, and local policymakers, especially those concerned with economic development and workforce training. The majority of the literacy and numeracy items proposed for the PIAAC assessment are taken directly from previous international adult literacy assessments (IALS and ALL). However, PIAAC extends beyond the previous adult assessments through the addition of the problem solving in technology-rich environments component, designed to measure the cognitive skills required in the information age.


U.S. participation in PIAAC is entirely consistent with the NCES mandate. The enabling legislation of the National Center for Education Statistics [Section 406 of the General Education Provisions Act, as amended (20 U.S.C. 1221e-1)] specifies that "The purpose of the Center [NCES] shall be to collect and analyze and disseminate statistics and other information related to education in the United States and in other nations." The Education Sciences Reform Act of 2002 (HR 3801, Part C, Sec. 153) also specifies that NCES


shall collect, report, analyze, and disseminate statistical data related to education in the United States and in other nations, including—(1) collecting, acquiring, compiling (where appropriate, on a State-by-State basis), and disseminating full and complete statistics (disaggregated by the population characteristics described in paragraph (3)) on the condition and progress of education, at the preschool, elementary, secondary, postsecondary, and adult levels in the United States, including data on…(D) secondary school completions, dropouts, and adult literacy and reading skills…[and] (6) acquiring and disseminating data on educational activities and student achievement…in the United States compared with foreign nations.1

Apart from being essential for any international perspective on adult literacy and reading skills, U.S. participation fulfills both the national and international aspects of NCES' mission.


NCES conducted several major surveys of adult competencies between 1985 and 2008.


  • Young Adult Literacy Assessment (YALA) – In 1985, NCES extended the reading portion of the National Assessment of Educational Progress (NAEP) to include a nationally representative sample of 3,600 young adults between the ages of 21 and 25. That study came to be known as YALA. Using a combination of reading questions and questions designed to simulate literacy activities that adults encounter in daily life, YALA surveyed the extent and nature of the literacy problem among young adults. It included a background questionnaire, which collected information on family background, respondent characteristics, educational experiences, work and community experiences, and literacy practices. It was also the first literacy study to measure three distinct areas of literacy—prose, document, and quantitative.

  • National Adult Literacy Survey (NALS) – NALS was the first federally sponsored study to measure the literacy skills of a nationally representative sample of U.S. adults (aged 16 and older) and to determine how these skills are distributed across major subgroups of interest. Approximately 26,000 in-person interviews and literacy assessments were administered by 400 interviewers over a 6-month period, beginning in 1992.

  • International Adult Literacy Survey (IALS) – IALS was a large-scale, international comparative assessment designed to identify and measure a range of skills linked to the social and economic characteristics of individuals across (or within) nations. IALS provided information on the skills and attitudes of adults aged 16-65 in 22 countries between 1994 and 1998 in a number of different areas, including prose, document, and quantitative literacy.

  • International Adult Literacy and Lifeskills Survey – This effort included three literacy studies: The Adult Literacy and Lifeskills Survey (ALL), the Level 1 Study2, and the Adult Education and Literacy Study (AEL)3.

The ALL survey (2003) measured the literacy (prose and document) and numeracy skills of a representative sample of adults aged 16 to 65 in 11 countries. The U.S. sample included approximately 7,000 households in 60 primary sampling units. In-person interviews and literacy assessments (lasting a total of 90 minutes) were conducted with approximately 3,500 participants.

The purpose of the Level 1 Study was to examine the skills of adults with lower literacy levels. The study sample included 950 adult education students and 84 individuals from the general population. Respondents were asked to complete a background questionnaire, a set of literacy tasks, and a battery of five reading component skill measures. In addition, four brief language and additional cognitive measures were administered using Ordinate's PhonePass©, an automated testing technology that measures speaking and listening skills through respondent/telephone interaction.

In the AEL survey (2002-2003), a subset of the ALL interview and assessment instruments was administered to a representative national sample of adult participants (N=6,100) in adult education programs governed by the Adult Education and Family Literacy Act (AEFLA), Title II of the Workforce Investment Act of 1998. This approach allowed a comparison of the literacy skills of adult education program participants and the general population. Assessments were conducted in Spanish and English to compare literacy outcomes in both languages for Spanish speakers. A key component of this study was a survey of 1,200 adult education programs to provide the first comprehensive information in 10 years on the characteristics of these programs.

  • National Assessment of Adult Literacy (NAAL) – NAAL (2003) measured the literacy skills of a nationally representative sample of U.S. adults to determine how the distribution of skills across major subgroups had changed since the 1992 National Adult Literacy Survey. The study also provided separate estimates of literacy skills for adults in six states and for inmates of federal and state prisons. Main study data collection, with more than 18,000 respondents, included the basic assessment plus a Fluency Addition.

  • National Assessment of Adult Literacy (NAAL) – NAAL (2008) consisted of a field test with 1,500 respondents. Innovative assessments of functional writing and vocabulary knowledge were developed and tested.


A.2 Purposes and Uses of the Data

PIAAC will be the next step in the series of efforts aimed at developing adult literacy assessments (described in A.1). Specifically, it adds the new assessment domains of problem solving in technology-rich environments and reading components.


The results of the PIAAC main study will be used to:


  • Identify factors that are associated with adult competencies;

  • Extend the measurement of skills held by the working age population;

  • Provide a better understanding of the relationship of education to adult skills; and

  • Allow comparisons across countries and, as PIAAC is intended to be cyclical, over time.

Additionally, information from the PIAAC main study will be used by:


  • Federal policymakers and Congress to plan Federal programs aimed at improving literacy skills;

  • State and local officials to enhance adult education and other literacy programs;

  • News media to inform the public about similarities and differences between U.S. and international adult populations; and

  • Business and educational organizations to better understand the skills of the U.S. labor force and plan programs to address skill gaps.


A.3 Improved Information Technology

Technology is a large component of the PIAAC main study. The screener and the background questionnaire (BQ) will be administered using a computer-assisted personal interviewing (CAPI) system. The interviewer will read the items aloud to the respondent from the screen of a laptop computer and will record all responses on the computer. The use of a computer for these questionnaires allows automated skip patterns to be programmed into the instruments and responses to be entered directly into the database for analysis. In addition, for the screener, the computer will run a sampling algorithm to determine who, if anyone, in the household is eligible to participate in the study, and it will select a respondent or respondents for the BQ and the assessment.


Although PIAAC is designed to be an adaptive, computer-administered assessment of adult skills, not all sampled adults may be able to use a computer. Thus, the BQ includes two questions about computer usage that the virtual machine (VM) will use to route respondents either to the adaptive, computer-based assessment (CBA) or to the paper-based assessment (PBA), depending on their self-reported computer usage. Sampled adults who are routed to the CBA will be asked to complete a Core Task (which replaces the "Information and Communication Technology (ICT) Module" used in the field test).4 The Core Task's short series of cognitive items will serve two purposes: (1) to screen respondents for the ICT skills needed to complete the assessment on the computer, and (2) to provide a simple measure of the respondent's literacy and numeracy skills that will be used to route respondents to an initial set of literacy and numeracy items at an appropriate level for them. Respondents who report not using computers in the BQ and respondents who do not answer enough questions correctly in the Core Task will be routed to the paper-and-pencil assessment.
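
The routing described above amounts to a simple decision rule. The sketch below is illustrative only: the function name and the passing-score threshold are assumptions made for the example, not the PIAAC Consortium's actual scoring rule.

    def route_respondent(uses_computer: bool, core_task_score: int, passing_score: int = 3) -> str:
        """Illustrative sketch of the assessment routing described in A.3.

        `passing_score` is a hypothetical threshold; the real Core Task
        scoring rule is defined by the PIAAC Consortium.
        """
        if not uses_computer:
            # Respondents who report no computer use in the BQ skip the Core Task
            # and go straight to the paper-and-pencil assessment.
            return "paper-based assessment (PBA)"
        if core_task_score < passing_score:
            # Routed toward the computer path but did not answer enough Core Task
            # items correctly, so fall back to the paper-and-pencil assessment.
            return "paper-based assessment (PBA)"
        # Sufficient self-reported computer use and Core Task performance.
        return "computer-based assessment (CBA)"

    # Example: a respondent who uses computers and passes the Core Task.
    print(route_respondent(uses_computer=True, core_task_score=4))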


Most respondents in the main study will complete the assessment via computer. For those who are routed to complete the assessment on paper, an automated interviewer guide will assist the interviewer in administering the assessment. This interviewer guide will contain prompts to be read aloud to the respondent.


Additionally, the Field Management System (FMS) tested during the PIAAC field test will be used for the main study. The FMS comprises three basic modules: the Supervisor Management System, the Interviewer Management System, and the Home Office Management System. The Supervisor Management System will be used to manage data collection and case assignments and to produce productivity reports. The Interviewer Management System allows interviewers to administer the automated instruments, manage case status, transmit data, and maintain information on the cases. The Home Office Management System supports the packaging and shipping of cases to the field, shipping of booklets for scoring, report production, processing of cases, receipt control, and the receipt and processing of automated data, and is integrated with back-end processes for editing and analysis.


It is estimated that approximately 97 percent of all responses will be submitted electronically (see the List of PIAAC Instruments.doc attached with this package submission for further details).



A.4 Efforts to Identify Duplication

None of the previous literacy assessments conducted in the United States, including ALL and NAAL, has used computer-based assessments to measure adult skills. Moreover, PIAAC will be the first study in the United States to incorporate a technology component among the skills being measured. The international nature of the study will allow comparisons of the prevalence of these skills in the United States versus other PIAAC participating countries.



A.5 Minimizing Burden on Small Institutions

The PIAAC main study will collect information from the 16-65-year-old population through households only. No business organizations of any size will be contacted to participate in the PIAAC data collection.



A.6 Frequency of Data Collection

The PIAAC main study is a new data collection effort; however, PIAAC has been envisaged as a recurring survey administered approximately every 10 years. At this point, the periodicity of the study has not been officially set.



A.7 Special Circumstances

The National Center for Education Statistics is not applying for any exceptions to the guidelines in 5 CFR 1320.



A.8 Consultation Outside NCES

In addition to the participating countries' Education and Labor staff, the design of the PIAAC main study has involved the participation of the staff of the OECD, Educational Testing Service, the German Institute for International Educational Research, the German Social Sciences Infrastructure Services' Centre for Survey Research and Methodology, the University of Maastricht, Westat, the IEA, and cApStAn. In the United States, NCES has consulted about PIAAC with the U.S. Department of Education's Office of Vocational and Adult Education (OVAE), the Institute of Education Sciences' Office of Research, the U.S. Department of Labor's Employment and Training Administration (ETA), and, through the Literacy Research Convening group, representatives from the National Institutes of Health (NIH), the U.S. Department of Health and Human Services (HHS), the U.S. Department of Justice's Bureau of Prisons, the Treasury Department, and the National Science Foundation (NSF). Within NCES, PIAAC staff have worked closely with the staff of both the Assessment Division (especially the staff of NAAL) and the Postsecondary, Adult, and Career Education Division (PACE). NCES staff were also assisted by the following people outside of NCES and Westat: Jaleh Soroui, Jing Chen, Lauren Pisani, and Timothy Werwath (all of the American Institutes for Research).



A.9 Payments or Incentives to Respondents

As part of the planned efforts to meet PIAAC response rate goals, NCES proposes giving main study respondents a payment, as was done in ALL and NAAL, to thank participants for their time answering the background questionnaire items and taking the assessment. NCES proposes to provide such a payment as an incentive to participants because (a) in-person household-based surveys have seen response rates decline in recent years, (b) research indicates that incentives play an important role in gaining respondent cooperation in such household surveys, and (c) PIAAC places a greater response burden on respondents than ALL or NAAL did and, hence, is at greater risk of respondents breaking off the questionnaire or assessment before both are completed.


(a) Many in-person household-based surveys have experienced decreasing response rates in recent years. For example, the National Health Interview Survey (NHIS), a one-hour interview, experienced a response rate decline of 12 percent from 1997 to 2007. The response rate for the National Survey on Drug Use and Health (NSDUH) decreased 5 percent between 2002 and 2007, and Round 1 of the Medical Expenditure Panel Survey (MEPS), which consists of a two-hour interview, sustained a response rate decline of 5 percent from 2001 to 2007.


In addition, the National Household Education Surveys (NHES) Program, which has collected information on important educational issues through telephone surveys of households in the United States since 1991, had response rates greater than 80 percent in 1991 and 1993, but in 1995 and 1996, they fell to 73 and 70 percent, respectively; in 2001 and 2003, they declined to 68 and 62 percent, respectively; and in 2007, they declined to 53 percent.


(b) Research indicates that incentives play an important role in gaining respondent cooperation, especially in surveys that ask respondents to give several hours of their time and undertake a complex and often difficult assessment. A meta-analysis of 39 studies experimenting with incentives in telephone and in-person surveys from 1970 to 1997 (Singer, Van Hoewyk, Gebler, Raghunathan, and McGonagle, 1999) found that incentives have a significant positive effect on response rates for both types of surveys. More specifically, each dollar of incentive paid resulted in approximately a third of a percentage point difference in response rate between the no-incentive and incentive conditions. Similar results were found for studies that had a low-incentive condition and a high-incentive condition. The effects found by the authors were linear, and they therefore concluded that "within the limits of incentives and response rates occurring in these experiments, more money results in higher response rates."


Specific to literacy studies, Mohadjer, Berlin, Rieger, Waksberg, Rock, Yamamoto, Kirsch, and Kolstad (1997) examined the effect of monetary incentives on response rates, among other variables. The study included experiments with incentives in the National Adult Literacy Survey field test and main study. In both experiments, incentives produced a significant increase in response rates, most notably among groups with low educational attainment and minority populations, who are usually underrepresented in such studies. This effect improves the representation of these groups in the sample and therefore provides better coverage of the study's target population.


More recently, in 2008, a research experiment was conducted for MEPS at the request of OMB, sponsored by the Agency for Healthcare Research and Quality and the Centers for Disease Control and Prevention. Incentive payments of $30, $50, and $70 were compared among close to 10,000 households in the MEPS 2008 sample panel. The experiment was carried out over five rounds of data collection; the first round is the most comparable to PIAAC, and MEPS imposes a burden similar to that of PIAAC. In the first round, the two higher incentive payment groups had significantly higher response rates than the $30 payment group. Likewise, there was a simultaneous drop in refusal rates: the two higher incentive payments had significantly lower refusal rates than the $30 payment group. In round 1, the difference between the $50 and $70 groups was not significant, and our understanding is that OMB approved the $50 incentive payment.


(c) Several factors will make the respondent burden in PIAAC greater than that of ALL and NAAL. First, the PIAAC study will last, on average, 30 minutes longer than the NAAL and ALL interviews. Second, all PIAAC respondents will take the Core Task, which tests the basic computer skills needed to complete the Direct Assessment on the computer as well as basic literacy and numeracy. Respondents who do not possess basic computer skills might find the Core Task taxing, frustrating, or intimidating. Third, respondents who take the computer-based Direct Assessment, who will be the majority, will first have to complete an Orientation Module to learn to navigate the computer-based Direct Assessment. Even respondents who are very accustomed to computers may find the process of completing the assessment tasks on a computer complex and unfamiliar. Fourth, respondents who complete the paper Direct Assessment will complete three booklets: a Core Task booklet, a literacy or numeracy main booklet, and a Reading Components booklet. Although not as cognitively demanding as the computer-based Direct Assessment, the Reading Components booklet will certainly add to the overall length of the interview. In summary, both the length of the survey and the cognitive effort required of the respondent by the mode of assessment administration warrant a higher respondent incentive than that offered in ALL and NAAL.


Table 1 shows a comparison of the tasks and burden for two recent literacy surveys, ALL and NAAL, compared to PIAAC.


Table 1. Comparison of Recent Literacy Surveys Versus PIAAC


Survey | Year | Tasks | Mode | Burden (time) | Incentive
ALL | 2003 | Screener; Background Questionnaire; Literacy/Numeracy Assessment (Paper) | In-person; CAPI Screener/BQ; paper assessment | 1.5 hours | $30
NAAL | 2003 | Screener; Background Questionnaire; Literacy/Numeracy Assessment (Paper) | In-person; CAPI Screener/BQ; paper assessment | 1.5 hours | $30
PIAAC | 2011-2012 | Screener; Background Questionnaire, including the Job Requirements Approach (JRA) section; Core Task; Literacy/Numeracy/Problem Solving Assessment (Computer/Paper); Reading Components Assessment (Paper) | In-person; CAPI Screener/BQ; self-administered computer-based Core Task; self-administered computer-based or paper assessment | 2 hours | $50 (proposed)


NCES proposes giving main study respondents a payment of $50 in appreciation for the time spent answering the background questionnaire items and completing the Core Task and the assessment. In 2003, OMB approved a $30 incentive for the 1.5-hour ALL interview, or $10 for each half hour of the respondent's time. The administration time for the PIAAC main study interview is estimated to average two hours. If, following ALL, $10 is paid for each half hour of the PIAAC study, the incentive would be $40 in 2003 dollars, or about $47.44 when adjusted for inflation to the 2011-2012 main study period. The proposal is to round this amount up to $50 to make it easier to administer and more salient to respondents.
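
As a worked illustration of the arithmetic above (a minimal sketch; the inflation factor shown is simply the one implied by the $40 and $47.44 figures in the text, not an official CPI computation):

    # Arithmetic behind the proposed $50 incentive (illustrative only).
    all_incentive_2003 = 30.0                      # $30 for the 1.5-hour ALL interview (2003)
    rate_per_half_hour = all_incentive_2003 / 3    # $10 per half hour
    piaac_hours = 2.0
    piaac_base = rate_per_half_hour * piaac_hours * 2   # $40 in 2003 dollars
    implied_inflation = 47.44 / 40.0               # factor implied by the text, ~1.186
    piaac_adjusted = piaac_base * implied_inflation
    print(round(piaac_adjusted, 2))                # 47.44, rounded up to $50 in practice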


The PIAAC field test included an experiment to evaluate the impact of increasing the incentive amount from $35 (equivalent to 2003 ALL and NAAL incentives when accounting for inflation) to $50 to account for the added burden of a longer interview and assessment than past literacy surveys and the increased complexity of the PIAAC computer-based assessment.


The incentive experiment was conducted at the segment level (clusters of dwelling units (DUs) within primary sampling units (PSUs)). The experiment was not conducted at the DU level because such designs have an increased chance of introducing error in administering the incentives to respondents, and because of the risk of information about different incentive amounts spreading within close neighborhoods.


Incentive payments were randomly assigned to each segment, as described in section G.1 of appendix G. By doing so, each interviewer was assigned both incentive amounts to minimize any interviewer effect. The achieved response rates for each stage of data collection and the overall response rate for each incentive group are provided below.


The initial overall (unweighted) response rate in the $50 group was 5.2 percent higher than the response rate for the $35 group. The Screener response rate among DUs in the $50 group was 3.2 percent higher than the response rate for the $35 group. The Background Questionnaire (BQ) response rate among respondents in the $50 group was 4.0 percent higher than the response rate for the $35 group. The assessment response rate was the same for both groups (96.5 percent).


These rates, however, do not take into account (1) the fact that the field test sample was purposefully selected from areas with high computer literacy5 and (2) the fact that not all persons selected into the sample became aware of the incentive offered to them (even though advance letters explaining the incentive were mailed to all households).6 To account for the fact that the field test sample was selected from areas with high computer literacy, weights were assigned to the sample cases so that the total sample would reflect the population distribution of the United States according to the percentages of the following variables: "less than a high school education," "average earnings below 150 percent of the poverty line," and "Black or Hispanic." To account for the fact that not all persons selected into the sample became aware of the incentive offered to them, an "experiment response rate" was calculated using the cases that remained in the experiment once those unaware of the incentive were dropped. The remaining cases consisted of all completes7, refusals, and partial completes or breakoffs. Thus the experiment response rate = completes / [completes + refusals + partial completes or breakoffs]. Sample cases that were never contacted were excluded from the analysis, since the incentive payment could not have had any effect on their response status.


The (weighted) experiment response rate is the appropriate statistic for assessing the incentive experiment; however, to avoid the potential confusion of having two different sets of field test response rates in various documents, it was deemed best to analyze the field test data using the complement of the experiment response rate, referred to as the refusal rate and defined as:


refusal rate = 1 – experiment response rate

             = [refusals + partial completes or breakoffs] / [completes + refusals + partial completes or breakoffs]
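
As a worked example of these definitions (the counts below are made up for illustration and are not field test results):

    # Experiment response rate and refusal rate from hypothetical counts.
    completes = 800
    refusals = 150
    partial_completes_or_breakoffs = 50

    total = completes + refusals + partial_completes_or_breakoffs
    experiment_response_rate = completes / total
    refusal_rate = 1 - experiment_response_rate   # = (refusals + partials) / total

    print(round(experiment_response_rate, 3))     # 0.8
    print(round(refusal_rate, 3))                 # 0.2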


The refusal rates for the two incentive levels in the field test, after accounting for the field test design, differed as follows:


  • The overall weighted refusal rate in the $50 group was 6.8 percent lower than the weighted refusal rate for the $35 group.

  • The Screener weighted refusal rate among DUs in the $50 group was 0.6 percent lower than the weighted refusal rate for the $35 group.

  • The Background Questionnaire (BQ) weighted refusal rate among respondents in the $50 group was 6.2 percent lower than the weighted refusal rate for the $35 group.

The statistical analysis, described in detail in appendix G, concluded that the difference in the overall refusal rate between the two incentive amounts is significant at the 0.05 level. That is, there was enough evidence to show that the $50 incentive amount had a significantly lower refusal rate than the $35 incentive amount. The screener refusal rate and BQ refusal rate were also tested individually. The difference between the screener refusal rates for the two incentive levels was not significant; however, the difference between the BQ refusal rates for the two incentive levels was significant at the 0.05 level.


Appendix G provides details on the design and analysis of the experiment.



A.10 Assurance of Confidentiality

The PIAAC main study will conform to all relevant federal regulations, specifically the Privacy Act of 1974 (5 U.S.C. 552a), the Education Sciences Reform Act of 2002 (20 U.S.C. § 9573), the Family Educational Rights and Privacy Act (20 U.S.C. § 1232g), and the NCES Statistical Standards and Policies. The plan for maintaining confidentiality includes: (a) all personnel signing Westat and PIAAC confidentiality agreements; and (b) obtaining notarized NCES nondisclosure affidavits from all personnel who will have access to individual identifiers (see appendix E). The protocols for satisfying the confidentiality plans for the PIAAC field test have been arranged with the Institute of Education Sciences (IES) Disclosure Review Board (DRB). However, since DRB policy requires performing additional statistical disclosure control procedures on the PIAAC main study data prior to delivering the data to the PIAAC Consortium, NCES will work closely with the DRB and the Consortium to map out the details of the disclosure analysis plan for masking the main study data, which will occur at the end of data collection. NCES will need DRB approval of the disclosure analysis report prior to any data being released outside the United States.


The physical and/or electronic transfer of PII (particularly first names and addresses) will be limited to the extent necessary to perform project requirements. This limitation includes both internal transfers (e.g., transfer of information between agents of Westat, including subcontractors and/or field workers) and external transfers (e.g., transfers between Westat and NCES, or between Westat and another government agency or private entity assisting in data collection). Note that Westat will not transfer PIAAC files of any type (whether or not they contain PII or direct identifiers) to any external entity without the express, advance approval of NCES.


For PIAAC, the only transfer of PII outside of Westat facilities is the automated transmission of case reassignments and completed cases between Westat and its field interviewing staff. The transmission of this information is secure, using approved methods of encryption. Note that all field interviewer laptops are encrypted using full-disk encryption in compliance with FIPS 140-2 to preclude disclosure of PII should a laptop be lost or stolen.


In accordance with NCES Data Confidentiality and Security Requirements, Westat will transfer data to Pearson, the scoring subcontractor, for scoring in a manner that protects this information from disclosure or loss. These hard-copy data will not include any PII. Specifically, for electronic files, direct identifiers will not be included (a Westat-assigned study identifier will be used to uniquely identify cases), and these files will be encrypted according to NCES standards (128 bit or higher SSL). If these are transferred on media, such as CD or DVD, they will be encrypted in compliance with FIPS 140-2.


All PIAAC data files constructed to conduct the study will be maintained in secure network areas at Westat. These files will be subject to Westat’s regularly scheduled backup process. Backups are stored in secure facilities on site as well as off site. These data are stored and maintained in secure network and database locations where access is limited to those Westat staff who are specifically authorized access. Access is only granted once a staff member is assigned to the project and has completed the NCES Affidavit of Non-disclosure. Identifiers are maintained in files required to conduct survey operations that are physically separate from other research data and that are accessible only to sworn agency and contractor personnel. In consultation with NCES, these data files will be destroyed at the end of the project or delivered to NCES.


The plan also includes: (1) training personnel regarding the meaning of confidentiality, particularly as it relates to handling requests for information and providing assurance to respondents about the protection of their responses; (2) controlling and protecting access to computer files under the control of a single database manager; (3) building in safeguards concerning status monitoring and receipt control systems; and (4) having a secured and operator-manned in-house computing facility.


All information identifying the individual respondents will be kept confidential, in compliance with the law (ESRA, 20 U.S.C. § 9573), which states that:


(c) (2) “No person may

(i) use any individually identifiable information furnished under the provisions of this section for any purpose other than a research, statistics, or evaluation purpose under this subchapter;

(ii) make any publication whereby the data furnished by any particular person under this subchapter can be identified; or

(iii) permit anyone other than the individuals authorized by the Director to examine the individual reports.”

The laws pertaining to the collection and use of personally identifiable information are clearly communicated in correspondence with participants, per NCES requirements. A study introductory letter and brochure will be sent to households describing the voluntary nature of this survey. Study materials sent to households will describe the study and convey the extent to which respondents and their responses will be kept confidential (see supporting materials in Appendix D of the accompanying documentation). Materials will carry a statement addressing confidentiality as follows:


The National Center for Education Statistics is authorized to conduct this study under the Education Sciences Reform Act of 2002 (20 U.S.C. § 9543). Under that law, the data provided by you may be used only for statistical purposes and may not be disclosed, or used, in identifiable form for any other purpose except as required by law (20 U.S.C. § 9573). Individuals are never identified in any reports. All reported statistics refer to the United States as a whole or to national subgroups.

Westat will deliver data files, accompanying software, and documentation to NCES at the end of the main study. Neither names nor addresses will be included on any data file.



A.11 Sensitive Questions

The screener and background questionnaire for the PIAAC main study will include questions about race/ethnicity and household income. These questions are considered standard practice in survey research and will conform to all existing laws regarding sensitive information.



A.12 Estimates of Burden

For the PIAAC main study, the estimated burden is two hours per respondent, including the estimated time required to answer the screener (5 minutes) and the background questionnaire (45 minutes), complete the Core Task and the orientation module (10 minutes), and complete the assessment (60 minutes).


Table 2. Estimates of burden for PIAAC main study


Data collection instrument | Sample size | Expected response rate | Number of respondents | Number of responses | Burden per respondent (minutes) | Total burden hours
PIAAC Screener | 8,169 (households) | 86.7% | 7,083 (households) | 7,083 | 5 | 590
U.S. PIAAC Background Questionnaire | 6,371 | 80% | 5,097 | 5,097 | 45 | 3,823
U.S. PIAAC Core Task and Orientation Module | 5,097 | 100% | 5,097 | 5,097 | 10 | 850
U.S. PIAAC Assessment (Literacy, Numeracy, Problem-solving in a Technology-rich Environment, and/or Reading Components)8 | 5,097 | 98.1% | 5,000 | 5,000 | 60 | 5,000
Total | NA | NA | 7,083 | 17,277 | NA | 5,263

NOTE: See table 5 in Part B for details on the sample yield estimates (e.g., only about 85 percent of households that complete the Screener are expected to include an adult eligible for the Background Questionnaire, and about 6 percent of those households are expected to have two eligible adults selected).


Table 2 presents the estimates of burden for the PIAAC Main Study. The intended total number of assessment respondents for the main study is 5,000, with a total burden time (excluding the assessment) of 5,263 hours and an expected overall response rate of about 68 percent. In the first row, 8,169 is the number of expected occupied households, computed as the total number of sampled dwelling units multiplied by the occupancy rate (9,610 * .85). The number 7,083 is the number of households that go through the screener, which is computed as the number of expected occupied households multiplied by the screener response rate (8,169*.867). In the second row, 6,371 is the number of sampled persons, computed as the product of (a) the number of completed screeners (7,083), (b) the proportion of households having at least one eligible person 16-65 (.849), and (c) an adjustment for the proportion of HHs with 2 sample persons selected (1.06). The 5,097 in this row is the number of sampled persons who completed the BQ, which is the product of 6,371 * the BQ response rate (.80). In the fourth row, 5,000 is the number of expected assessments to be completed, the product of 5,097 (the sample size) and the assessment response rate (.981).
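
The yield chain described above can be reproduced with a short calculation (a sketch using the rates quoted in the text; results differ slightly from Table 2 because the table rounds to whole counts at each stage):

    # Sample yield and burden arithmetic for the PIAAC main study (section A.12).
    sampled_dwelling_units = 9610
    occupancy_rate = 0.85
    screener_response_rate = 0.867
    eligibility_rate = 0.849        # households with at least one eligible 16-65 year old
    two_person_adjustment = 1.06    # allowance for households with two sample persons
    bq_response_rate = 0.80
    assessment_response_rate = 0.981

    occupied_households = sampled_dwelling_units * occupancy_rate                      # ~8,169
    completed_screeners = occupied_households * screener_response_rate                 # ~7,083
    sampled_persons = completed_screeners * eligibility_rate * two_person_adjustment   # ~6,371
    completed_bqs = sampled_persons * bq_response_rate                                 # ~5,097
    completed_assessments = completed_bqs * assessment_response_rate                   # ~5,000

    # Burden hours exclude the assessment itself (see footnote 8).
    total_burden_hours = (completed_screeners * 5 + completed_bqs * 45 + completed_bqs * 10) / 60
    print(round(total_burden_hours))  # ~5,264 here; Table 2 shows 5,263 after rounding each row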


At an estimated incentive payment of $50 per respondent who completes the assessment, the total incentive cost for the main study is $250,000.



A.13 Total Annual Cost Burden

Other than the burden associated with completing the pre-assessment activities and questionnaires (estimated above in section A.12), the study imposes no additional cost on respondents and has no record-keeping requirement.



A.14 Annualized Cost to Federal Government

The total cost to the federal government, including all direct and indirect costs of preparing for and conducting the PIAAC main study is estimated to be $11,816,157. The components of these costs are presented in table 3.


Table 3. Cost for conducting the PIAAC field test and main study


[Table 3 cost figures appear as an embedded image in the source document and are not reproduced here.]



A.15 Program Changes or Adjustments

There is an increase in burden because the last approval was for the PIAAC field test, whereas this request is for the PIAAC 2011-2012 full-scale data collection.



A.16 Plans for Tabulation and Publication

NCES will produce a report for the main study design, sampling, data collection, weighting, and missing value imputation activities. A full analysis of the main study data will be conducted.


Electronic versions of each publication will be made available on the NCES website. The expected data collection dates and a tentative reporting schedule are shown in table 4 below.


Table 4. PIAAC Main Study production schedule


Date | Activity
February 2011* | Receive final international and national versions of PIAAC main study instruments.
March 2011 | Submit main study documents to OMB for clearance.
June-July 2011 | Finalize data collection manuals, forms, systems, laptops, and interview/assessment materials for the main study.
August 2011-March 2012 | Collect main study data.
July 2012 | NCES receives main study raw data from Westat for delivery to the international consortium.
April 2013 | Receive preliminary main study country analysis results from the international consortium.
June-December 2013 | Produce main study General Audience Report, Survey Report, and Technical Report for the United States.
* The main study period between the receipt of the national version, OMB submission and approval, and the onset of data collection is very compressed, as it is driven by the PIAAC Consortium’s current schedule.



A.17 Display OMB Expiration Date

The OMB expiration date will be displayed on all data collection materials.



A.18 Exceptions to Certification Statement

No exceptions to the certifications are requested.


1 See http://www.ed.gov/policy/rschstat/leg/PL107-279.pdf for the full Education Sciences Reform Act.

2 The Level 1 Study is also known as the Tipping Points and Five Classes of Adult Literacy Learners study.

3 AEL is also known as the Adult Education Program Survey (AEPS).

4 The Core Task is a revised version of the field test's ICT Module, which included an ICT screener, an ICT tutorial, and an ICT core. The ICT screener and the ICT tutorial are not part of the main study instrument. They have been replaced by a short series of cognitive items that serve the purpose of routing the respondent to the appropriate version of the assessment. The total estimated length of this module remains unchanged.

5 The PSUs for the field test were selected as a non-probability sample, chosen with the following goals: to satisfy the demographic requirements of the psychometric testing, and to optimize the ICT Core passing rate so as to achieve 1,300 completed assessments from respondents who passed the ICT Core instrument.

6 Some selected persons were unaware of the incentive amount because of a language problem; refusal by a gatekeeper or another person to inform them; a learning/mental disability; reading/writing difficulty; impairments (hearing, vision/blindness, speech); disabilities (physical, other); other unusual circumstances; no contact before the maximum number of calls was reached; temporary absence; a vacant unit, non-DU, or unit under construction; or death.

7 The category of ‘complete’ cases includes screeners that were completed but did not have a person in the target population (16-65 year olds) in the household.

8 Assessments are exempt from Paperwork Reduction Act reporting and are therefore not included in the burden calculation for this collection.
