Fast Track for FY22 PLS Cognitive Testing Justification


IMLS Generic Clearance To Conduct Pre-Testing of Surveys


OMB: 3137-0125



Request for Approval under the “Generic Clearance To Conduct Pre-Testing of Surveys” (OMB Control Number: 3137-0125)

Cognitive Testing for FY 2022 Public Libraries Survey

  1. Submittal-Related Information

This material is being submitted under the Institute of Museum and Library Services (IMLS) Generic Clearance to Conduct Pre-Testing of Surveys (OMB Control No. 3137-0125), which provides for IMLS to conduct various procedures (such as pilot tests, cognitive interviews, and usability studies) to test various types of survey operations, questionnaire designs, and electronic data collection instruments.

This package requests approval to conduct 20 cognitive interviews to test 12 new items related to overdue fines and partnerships with other organizations for the FY 2022 Public Libraries Survey (PLS).

  2. Background and Study Rationale

The PLS is a voluntary survey that collects descriptive data on the universe of public libraries in the United States and the outlying areas through a network of state data coordinators (SDCs) located in the state library administrative agencies (SLAAs). The purpose of the PLS data collection is to provide state and federal policymakers and other interested parties with information about public libraries in the United States. The PLS is a national census of public library systems and their service outlets including descriptive data for each State and for each public library system. The data allow for comparisons among libraries of similar size on variables such as size of collection, total number of staff, and total operating expenditures. IMLS’ data catalog and visualization tools, accessible through the IMLS website, facilitate these comparisons.

The Library Statistics Working Group (LSWG) is a panel of SDCs, chief officers of SLAAs, and library researchers that advises IMLS about data collections and products related to the field. The LSWG routinely identifies additional topics or survey items to add to (or retire from) the PLS. In the current cycle, the LSWG recommended development of new survey items related to overdue fines and partnerships with other organizations.

Prior to launching the FY 2022 PLS, we will conduct cognitive interviews with 20 library administrators to test new survey items. The purpose of the testing is to uncover potential item issues, assess overall understanding of the content measured, and make improvements to the items prior to implementation.

  3. Recruitment and Data Collection

Sampling and Recruitment

The IMLS contractor, American Institutes for Research (AIR), will use convenience sampling to identify and recruit library administrators for the study. AIR will work with the LSWG to identify a sample of library administrators to recruit from LSWG members’ respective states. LSWG members will provide the names and contact information for volunteers interested in participating. If necessary, AIR will use the FY 2019 PLS universe (the most recent available) to identify a sample of libraries from which to recruit additional participants in an effort to meet recruitment goals. AIR will conduct internet searches of the libraries in the sample to identify library administrators and their contact information to invite them to participate.

AIR will recruit participants on a rolling basis using a mixed-mode strategy, consisting of email invitations and telephone calls to non-respondents. All library administrators who are willing and able to participate will be scheduled for a cognitive interview until we have completed 20 interviews.

Design

We will conduct two rounds of cognitive interviews with 10 interviews per round. The objective of the cognitive interviews is to identify and correct problems of ambiguity or misunderstanding in proposed question wording for new survey items and gather contextual information to improve the items. The cognitive interviews should result in questions that are easier to understand and follow, and therefore, less burdensome for participants, while also yielding more accurate information.

Cognitive interviews are intensive, one-on-one interviews in which the participant is asked to read each question out loud and “think aloud” while answering. The think-aloud method provides insight into (a) participants’ comprehension of the survey items, (b) potential wording issues, (c) motivational or sensitivity problems not apparent to the developers, and (d) inconsistencies between the items and the traits they were intended to measure. Based on participants’ verbalizations, interviewers will then use “verbal probing” to clarify points that are not evident from the think-aloud method and to explore additional aspects of interest identified a priori.

Interviewers for this data collection will use verbal probes to:

  • verify participants’ interpretation of the question;

  • check participants’ understanding of the meaning of specific terms or phrases;

  • identify concepts that the participant did not think were covered by the question but that methodologists and subject matter experts consider relevant; and

  • as appropriate, determine what data sources participants would consult to answer the question.

Data Collection Procedures and Format

All cognitive interviews will be conducted virtually using the web conferencing platform Zoom, and the sessions will be recorded with the respondent’s verbal consent. Interviewers will show participants the items on the screen and follow a semi-structured guide (see Appendix A for the interview protocol) throughout the session. AIR intends to test 12 survey items (see Appendix B) during each interview. The research will be iterative, in that survey item wording and format design may change during the testing period in response to problems identified during the interviews. However, it is not anticipated that the survey items or format will change substantially.

Each interview will last 75 minutes, and interviews will be conducted in two rounds of 10. We will use the results from round 1 to revise the items prior to round 2 testing. Results from round 2 will then inform item revisions for the FY 2022 PLS.

Analysis Plans

After each round of testing, we will conduct a notes-based analysis to identify themes and patterns in the data for each survey item and summarize results. The item development team will use the results to inform item revisions. At the conclusion of the testing, we will prepare a final report summarizing the testing and results.

  4. Consultations Outside the Agency

IMLS and AIR consult with the LSWG for input on content for the survey. After round 1 testing and item refinement, AIR will make the items available to the network of SDCs to provide voluntary input on their format, utility, and relevance.

  5. Justification for Sensitive Questions

Throughout the item and debriefing question development processes, effort has been made to avoid asking for information that might be considered sensitive.

  6. Paying Respondents

Based on feedback from LSWG members, it is not necessary to offer incentives to encourage participation.

  7. Assurance of Confidentiality

Participation is voluntary. Prior to beginning each session, the interviewer will read a verbal script explaining the purpose of the study and the voluntary nature of the interviews, and stating that participants’ answers may be used for research purposes only and may not be disclosed, or used, in identifiable form for any other purpose except as required by law. The interviews will be recorded for research purposes only. All materials and recordings will be stored on AIR’s secure data servers for the duration of the study. Audio and video recordings will be destroyed after the final report is submitted.

  8. Estimate of Hourly Burden

We will conduct 20 cognitive interviews with library administrators, each lasting 75 minutes (1.25 hours), for a total burden of 25 hours.

Estimated Hourly Burden for Cognitive Interviews


                       Number of     Responses per   Average Response    Total Burden   Value of
                       Respondents   Respondent      Burden (in hours)   Hours          Time
Cognitive Interviews   20            1               1.25                25             $764



The value of time uses wage rate data from the Bureau of Labor Statistics, “May 2020 Occupational Employment and Wage Estimates, United States,” accessed online at https://www.bls.gov/oes/current/oes254022.htm (access date: 2 July 2021). Occupation 25-4022, Librarians and Media Collections Specialists, had a median wage of $30.56/hour.
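As a cross-check, the burden and value-of-time figures above follow directly from the respondent count, interview length, and BLS wage rate. A minimal sketch of the arithmetic (variable names are illustrative, not part of the submission):

```python
# Burden and value-of-time arithmetic using the figures cited above.
respondents = 20            # number of cognitive interview participants
hours_per_interview = 1.25  # 75 minutes per interview
wage_rate = 30.56           # BLS median hourly wage, occupation 25-4022

total_burden_hours = respondents * hours_per_interview   # 20 x 1.25
value_of_time = total_burden_hours * wage_rate           # 25 x $30.56

print(total_burden_hours)            # 25.0 hours
print(round(value_of_time, 2))       # 764.0 dollars
```

The computed totals match the 25 burden hours and $764 value of time reported in the table.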

  9. Federal Cost

The estimated annual cost to the Federal government is $145,000.




Author: Sgro, Gina
File Created: 2023-08-27
