OMB Control No.: 3137-0078


Institute of Museum and Library Services

The Study of Free Access to Computers and the Internet in Public Libraries

Supporting Statement for PRA Submission

ICR Reference No. 200901-3137-001

B. Collection of information employing statistical methods

The following offers a detailed explanation of the statistical methodology for data collection and analysis in a national telephone survey and four case studies at U.S. public libraries. All methods adhere to the Office of Management and Budget's Standards and Guidelines for Statistical Surveys (2006).

The proposed study's mixed-method approach will yield statistically generalizable data from a socio-demographically diverse national sample, together with case studies of best-practice libraries in which rich, story-driven data from key stakeholders will provide deeper insight into possible causes of the patterns found in the statistical analysis.

B1. Respondent universe and sampling methods

Generally, the respondent universe comprises persons in the U.S. aged 14 and over who have used public access computers in U.S. public libraries at least once in the past year (measured from the day of the survey screening). Quantitative data will be collected through a nationwide telephone survey. In addition, four public libraries will be selected for case studies.

B1.1 Telephone survey

Telephone Contact, Inc. (TCI), will conduct a national telephone survey of public access computer users with an oversampling of low income respondents.

The target population will be persons age 14 or older who have used public access computers (PAC) in public libraries or library Internet connections in the past year. The goal will be to contact a random sample of households by telephone in order to find 1,130 PAC users who are age 14 or older and willing to complete the survey. This will include oversampling of low income telephone exchanges so that at least half of the interviews come from respondents whose household income is less than or equal to 200% of the federal poverty level. We estimated the starting sample of households assuming that 10% of respondents would say yes to the primary screening question1 and also taking into consideration survey nonresponse. This sample size is consistent with other library use studies, including the recent IMLS Interconnections study2 (Griffiths & King, 2008).

We will use a dual frame approach with the first sample frame being a list assisted random digit dial (RDD) sample with a geographic oversample of telephone exchanges that primarily service low income neighborhoods. The goal is to complete 890 interviews using the RDD sample frame. Since the RDD frame captures a negligible fraction of cell-phone-only households, a second cell phone exchange sampling frame will be used with a goal of 160 completed interviews. In addition, the telephone survey will include a non-response study described in section B2.1 that will yield an additional 80 interviews for an overall total of 1,130 PAC users.

Households will be screened to establish the eligibility of residents aged 14 and over. One eligible person will be randomly selected (using the ‘last birthday’ method) for interviewing when two or more residents are eligible within a household. Qualified respondents are those who in the past year have used a public access computer, are aged 14 or older, and are residents of the household. By virtue of the screening protocol, limited demographic information will be collected from persons who do not use public access computers to allow for weighting to match key census demographic control totals.
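For illustration only, the following is a minimal Python sketch of the 'last birthday' selection rule, assuming a hypothetical list of eligible household members recorded during screening; the actual selection will be implemented within the CATI system.

```python
from datetime import date

def last_birthday_selection(eligible_members, today=None):
    """Select the eligible household member whose birthday most recently
    occurred on or before the screening date (the 'last birthday' rule).

    eligible_members: list of (name, birth_month, birth_day) tuples
    (a hypothetical structure, for illustration only).
    """
    today = today or date.today()

    def days_since_last_birthday(month, day):
        # Birthday in the current year; if it has not happened yet, use last year's.
        # (Leap-day birthdays are not handled in this sketch.)
        bday = date(today.year, month, day)
        if bday > today:
            bday = date(today.year - 1, month, day)
        return (today - bday).days

    return min(eligible_members,
               key=lambda m: days_since_last_birthday(m[1], m[2]))

# Example: two eligible residents screened on 2009-03-01
members = [("Resident A", 2, 14), ("Resident B", 11, 30)]
print(last_birthday_selection(members, today=date(2009, 3, 1)))  # Resident A
```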

Finally, we note that for the cell phone sample, we will not screen out cell phones that are associated with a landline household. We do this for two reasons. First, there is evidence that some households with both cell phones and landlines (labeled "mostly wireless" in the survey research community) rarely or never use their landline phones for incoming calls3 (Blumberg & Luke, 2008). Second, the population we have targeted is sufficiently rare that it would be inefficient, from both cost and statistical perspectives, to screen out a household that would otherwise be substantively "eligible." Questions in the survey instrument will allow us to separate the cell-phone-only households from those with landlines so that weighting can be developed separately for these groups.

B1.2 Case studies

The selection of sites for the case studies will be based on available information about public libraries and feedback from the Chief Officers of State Library Agencies (COSLA) members and individual state librarians. The sites will be selected to illustrate a range of public access computer systems and users. While there are too few case studies to provide a statistically or nationally representative sample, we will strive for a sample that gives a picture of the full range of public access computer users; for example, we will look for libraries that illustrate differing sizes, operate in urban or rural settings, and serve substantial minority populations. The case studies will be used to provide a more complete picture of the use of public access computers. In each site, we will select users, librarians, and systems administrators, as well as staff from agencies that refer people to libraries for public access computer use or provide funding for library services.

B2. Procedures for the collection of information

B2.1 Telephone survey

Telephone Contact, Inc. will conduct the interviews and program the questionnaire into a CATI (computer assisted telephone interviewing) program. A CATI program will help interviewers with skip patterns, call scheduling, and tracking, and has many other important advantages over paper form administration.

To estimate the sample size for the telephone survey, we aimed for a 95% confidence level while keeping the margin of error within generally accepted ranges. We estimated the standard error based on a 10% positive response rate to Q5 (Have you used a computer in a public library to access the Internet in the last year?). After taking into consideration the weighting and survey design effects, and after calculating sample sizes for a range of margins of error, we determined the optimal sample size for the telephone survey to be 1,130 users, resulting in a margin of error of about +/-3.6% at a 95% confidence level.4 We use a design effect of 1.5 to reflect the impact of differential weighting.
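As a check on the arithmetic behind this sample size (see also footnote 4), a short sketch follows; it simply evaluates the half-width formula for a proportion near 50% with the stated design effect of 1.5, and the function name is ours, chosen for illustration.

```python
import math

def margin_of_error(n, deff=1.5, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion near 50%,
    inflated by the design effect due to differential weighting."""
    return z * math.sqrt(deff * p * (1 - p) / n)

print(round(margin_of_error(1130), 4))  # ~0.0357, i.e., about +/-3.6%
```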

Under our current design parameters and response rate estimates, we will release into the field roughly 85,102 telephone numbers for the RDD sample and about 15,119 telephone numbers for the cell phone sample. Tables 1 and 2 below present the expected dispositions of the RDD and cell phone samples, respectively.5 The response (completion) rate is calculated as the ratio of the expected number of completed interviews to the sum of expected completed interviews, eligible cases not interviewed, and the estimated number of eligible cases among those whose household status or eligibility could not be ascertained (AAPOR, 2004).6
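As a worked illustration of this completion-rate calculation (see footnote 6), a short sketch follows; the counts are the expected values from Table 1, and the function and argument names are ours, used only for illustration.

```python
def completion_rate(completes, not_interviewed, est_eligible_unknown_screen,
                    est_eligible_unknown_hh):
    """Completion rate as defined here (AAPOR-style): completed interviews
    divided by completed interviews plus all other estimated eligible cases."""
    denominator = (completes + not_interviewed
                   + est_eligible_unknown_screen + est_eligible_unknown_hh)
    return completes / denominator

# Expected RDD counts from Table 1: 890 completes, 182 eligible but not
# interviewed, 1,991 estimated eligible among unknown-eligibility screens,
# and 340 estimated eligible among households whose status was not ascertained.
print(round(completion_rate(890, 182, 1991, 340), 2))  # 0.26
```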



Table 1: Expected disposition for PAC user RDD telephone survey

Design parameters
  Total purchased sample                                   106,378
  Held in reserve (20%)                                     21,276
  Main sample released (80%)                                85,102
  Able to ascertain household status                            90%
  Percent telephone households*                                 40%
  Screening response rate                                       35%
  Household eligibility**                                       10%
  Interview response rate                                       83%
  Target number of interviews                                   890

Expected sample flow
  Determining household status                              85,102
    Household status not ascertained (10% of released)       8,510
      Estimated households (40%)                              3,404
      Estimated eligible (10%)                                  340
    Not households (60% of status ascertained; 54% of released)        45,955
    Households to screen (40% of status ascertained; 36% of released)  30,637
  Screen for eligibility                                    30,637
    Eligibility unknown (screening nonresponse, 65%)         19,914
      Estimated eligible (10%)                                1,991
    Screened, not eligible (90% of those screened; 31.5% of households to screen)   9,651
    Screened, eligible (10% of those screened; 3.5% of households to screen)        1,072
  Interview status                                           1,072
    Not interviewed (17%)                                       182
    Completed interviews (83%)                                  890
  Overall completion rate                                        26%

* Based on a 2+ list-assisted RDD design.
** The October 2002 CPS estimates that 8.9% of U.S. households use public access computers.





Table 2: Expected disposition for PAC user cell phone survey

Design parameters
  Total purchased sample                                    18,899
  Held in reserve (20%)                                      3,780
  Main sample released (80%)                                15,119
  Able to ascertain household status                            85%
  Percent telephone households*                                 50%
  Screening response rate                                       30%
  Household eligibility**                                       10%
  Interview response rate                                       83%
  Target number of interviews                                   160

Expected sample flow
  Determining household status                              15,119
    Household status not ascertained (15% of released)       2,268
      Estimated households (50%)                              1,134
      Estimated eligible (10%)                                  113
    Not households (50% of status ascertained; 43% of released)        6,426
    Households to screen (50% of status ascertained; 43% of released)  6,426
  Screen for eligibility                                     6,426
    Eligibility unknown (screening nonresponse, 70%)          4,498
      Estimated eligible (10%)                                  450
    Screened, not eligible (90% of those screened; 27% of households to screen)   1,735
    Screened, eligible (10% of those screened; 3% of households to screen)          193
  Interview status                                             193
    Not interviewed (17%)                                        33
    Completed interviews (83%)                                  160
  Overall completion rate                                        21%

* Based on a 2+ list-assisted RDD design. According to Brick et al. (2007), 52% of cell phone numbers in cell phone samples are households.
** 8.9% of households use public access computers.
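The counts in Tables 1 and 2 follow mechanically from the design parameters. For illustration, the sketch below reproduces the Table 1 (RDD) flow from released numbers to expected completed interviews; the function and parameter names are ours and not part of the survey plan, and substituting the Table 2 parameters reproduces the cell phone figures.

```python
def expected_disposition(released, p_status_unknown, p_household,
                         p_screen_response, p_eligible, p_interview):
    """Reproduce the expected disposition flow (Table 1 layout) from the
    stated design parameters; returns a few of the expected counts."""
    status_unknown = released * p_status_unknown              # HH status not ascertained
    est_elig_unknown_status = status_unknown * p_household * p_eligible

    ascertained = released - status_unknown
    hh_to_screen = ascertained * p_household                  # confirmed households
    screen_nonresponse = hh_to_screen * (1 - p_screen_response)
    est_elig_unknown_screen = screen_nonresponse * p_eligible

    screened = hh_to_screen * p_screen_response
    eligible = screened * p_eligible
    completes = eligible * p_interview

    completion_rate = completes / (
        eligible + est_elig_unknown_screen + est_elig_unknown_status)
    return {"eligible": round(eligible), "completes": round(completes),
            "completion_rate": round(completion_rate, 2)}

# Table 1 (RDD) parameters: 85,102 released numbers, 10% status unknown,
# 40% households, 35% screening response, 10% eligibility, 83% interview response.
print(expected_disposition(85_102, 0.10, 0.40, 0.35, 0.10, 0.83))
# {'eligible': 1072, 'completes': 890, 'completion_rate': 0.26}
```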

Both the RDD and cell phone samples will be randomly partitioned into sample replicates that are released and managed separately. This protects against unexpected values in our key design parameters (e.g., an eligibility rate 40% higher than expected). Data collection will commence slowly, using only a few replicates. Based on the performance of these early replicate releases, sample and design parameters will be fine-tuned in order to reach the expected targets without exhausting the budget. Because of the large number of households that need to be screened and the limited availability of funds, advance letters will not be issued. To mitigate this, a minimum of 10 callbacks to each sampled household will be used, with calls staggered across different times and days of the week. Also, replicates will be given a 'rest' (i.e., not called for several days to a week) after 6 attempts, after which they will be re-fielded.

We expect the screening questions to average 2 minutes to administer; for eligible respondents, the full survey will average 15 minutes. We expect the overall field period to run from 10 to 12 weeks.

In accordance with section 3.2 of the OMB guidelines, all response rates will be calculated using weighted and unweighted measures, and item response rates will also be calculated to account for item non-response. For the RDD survey, we anticipate achieving an overall response rate of 26% (Table 1); for the cell phone survey component we anticipate 21% (Table 2). Since we are projecting a response rate well below 80% for both samples, we will conduct a non-response study to determine how non-respondents differ from respondents. This is discussed below.

Nonresponse follow-up. The nonresponse follow-up study will be used to explore nonresponse bias stemming from the low response rates in the telephone survey. The subsamples for the nonresponse follow-up will exploit the replicated sampling feature of the sample design. The subsamples will consist of the initial replicate release from both the RDD and cell phone samples (comprising about 20% of both overall samples). For the non-responders from these initial replicates, mailing addresses will be obtained to the extent possible using commercially available reverse matching services (e.g., Telematch, Equifax). For those telephone numbers with addresses, we will send a nonresponse notification letter. After a 'resting period' we will call and repeat an offer for participation. Our goal is to reach an overall 40% response rate for this nonresponse follow-up (regardless of frame), yielding an incremental 80 interviews from the follow-up replicates (i.e., beyond the number that will have been produced under our ‘usual protocols’ prior to the launching of ‘follow-up activity’).

In accordance with section 4.1 of the OMB guidelines, and in order to reduce non-response bias and increase the value of survey data, the final sample will be post-stratified to match national parameters for sex, age, education, race, and Hispanic origin, as taken from the U.S. Bureau of the Census. We will employ CPS or ACS population controls (whichever are most timely and appropriate).
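As an illustration of the post-stratification step, a minimal sketch follows; the weighting cells, base weights, and control totals shown are hypothetical, and the actual cells and CPS/ACS controls will be specified during analysis.

```python
import pandas as pd

def post_stratify(respondents: pd.DataFrame, controls: dict,
                  cell_cols=("sex", "age_group")) -> pd.DataFrame:
    """Scale base weights within each demographic cell so that weighted
    respondent totals match external (e.g., CPS/ACS) control totals.

    respondents: must contain a 'base_weight' column plus the cell columns.
    controls: maps each cell (tuple of cell_cols values) to its population total.
    """
    df = respondents.copy()
    cell_cols = list(cell_cols)
    weighted = df.groupby(cell_cols)["base_weight"].sum()

    def factor(row):
        cell = tuple(row[c] for c in cell_cols)
        return controls[cell] / weighted[cell]

    df["final_weight"] = df["base_weight"] * df.apply(factor, axis=1)
    return df

# Hypothetical illustration with two weighting cells
resp = pd.DataFrame({
    "sex": ["F", "F", "M", "M"],
    "age_group": ["14-24", "14-24", "25+", "25+"],
    "base_weight": [100.0, 120.0, 90.0, 110.0],
})
controls = {("F", "14-24"): 500.0, ("M", "25+"): 400.0}
print(post_stratify(resp, controls)[["sex", "age_group", "final_weight"]])
```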

Once the data are collected from the telephone survey and the nonresponse follow-up, TCI will remove personal identifiers (i.e., telephone numbers) from the dataset and will transmit only the data file.

B2.2 Case studies

The case studies have a twofold aim: (1) to provide insights into research questions that are not amenable to quantitative investigation through the telephone survey (Table 3), and (2) to provide greater context and depth for the telephone survey questions. Because public libraries vary considerably from one another, we feel that conducting case studies in four diverse communities will allow our interviews and data analysis to reflect the fullest range of outcomes associated with PAC use.

Table 3: Research questions—constraints and recommended methods

1. What are the demographics of people who use computers, the Internet, and related services in PLs?
  • Constraint: difficult to identify target population; high eligibility requirements
  Recommended method: telephone survey

2. What information and resources provided by free access to computers, the Internet, and related services in PLs are people using, across the spectrum of on-site and off-site use?
  • Constraint: confounding—difficult to identify individual users from usage summaries
  Recommended methods: telephone survey; case study

3. How do individuals, families, and communities benefit (with a focus on social, economic, personal, and professional well-being) from free access to computers, the Internet, and related services at PLs?
  • Constraint: difficult to identify causal mechanisms from correlated survey data
  • Constraint: requires extended access to a broad range of stakeholder groups
  Recommended methods: telephone survey; case study

4. What reliable indicators can measure the social, economic, personal, and/or professional well-being of individuals, families, and communities that result from access to computers, the Internet, and related services at PLs?
  • Constraint: low repetition of outcome indicators across previous studies
  • Constraint: requires development and testing of an underlying logic model
  Recommended method: telephone survey

5. What correlations can be made between the benefits obtained through access to computers and the Internet and a range of demographic variables? What correlations can be made to type, level, or volume of related services?
  • Constraint: requires a large, representative sample stratified by socio-economic and demographic variables
  Recommended method: telephone survey

6. What computer and Internet services and resources are lacking at PLs that, if available, could bring about greater benefit?
  • Constraint: requires extended access to a broad range of stakeholder groups
  • Constraint: requires asking open-ended questions
  Recommended method: case study

7. What indicators of a negative relationship between users of PAC and their social, economic, personal, and/or professional quality of life can be identified where free access to computers and the Internet is weak or absent?
  • Constraint: difficult to identify target population
  • Constraint: difficult to identify root causes; requires asking open-ended questions
  Recommended method: case study



Because libraries, library services, and the communities they serve differ so widely, and based on our previous experience with this type of research, we feel that conducting four case studies is necessary to fully reflect the key variables affecting library service delivery. Related research has used similar numbers of case study sites: for example, Durrance and Pettigrew (2002) studied three major library systems, and Durrance and Fisher's (2005) IMLS-funded research to develop an outcomes toolkit for evaluating public libraries' community services involved case studies of five library systems.

To this end, the project advisory committee established selection criteria to evaluate and narrow the field of potential sites in consultation with state library officers. These criteria in effect oversample for certain characteristics to help ensure that our research reaches all types of PAC users, especially those who are historically underrepresented in library research. Without these criteria we are unlikely to encounter significant numbers of PAC users who represent the full diversity of our population. Specifically, we sought out libraries in counties with higher than the national average of non-white populations and those with higher household poverty levels.

In addition to the aforementioned criteria related to the library’s community, we also sought to balance our selections across characteristics we felt might allow us to explore a full range of contextual factors that may contribute to PAC outcomes. These balancing criteria include the size of the library’s service population; total operating revenue on a per capita basis; and geographic region. Preliminary choices based on these criteria were reviewed with the appropriate state librarian to identify libraries that might be unable to participate due to resource constraints or other reasons.7

Applying these criteria, we have identified four libraries as probable case study sites: Marshalltown Public Library, IA; Enoch Pratt Free Library, MD; Fayetteville Public Library, AR; and Oakland Public Library, CA. These libraries are located in demographically diverse, low income communities of varying sizes and geographic regions, and all demonstrate need for and use of library computers by their stakeholders.

Each of the four libraries selected for participation in a case study will be sent a letter informing them of the study and requesting their participation. Libraries will then be contacted by telephone to arrange the local site visit. The initial telephone contact will provide background about the project and seek additional information on external organizations and partners involved in providing public access computing resources in order to identify key stakeholders. Based on this information, we will contact respondents and determine the best timing for the visit in order to accommodate the schedule of local respondents.

The case study site visits will be conducted by four-person teams drawn from University of Washington graduate students and researchers. Each team will be composed of one senior and three junior researchers. Senior staff on this project are experienced in field-based qualitative research and semi-structured interviewing of the type that will be used in this study. All researchers involved in the fieldwork will be trained with respect to the objectives of the study and the procedures to follow during the site visits.

The case study teams will spend approximately one week at each site. During each visit, they will aim to conduct one-on-one interviews with up to 30 library PAC users and 2 focus groups with 5 PAC users each, for a total of 40 user interviews at each of the 4 sites. In addition, they will interview up to 10 members of the library staff, board of trustees, and library volunteers; and up to 10 community stakeholders, such as local agency staff, policy makers/elected officials, and staff or volunteers at other community Internet access locations. These numbers are consistent with the numbers of subjects interviewed in the above referenced studies, and standard within the library community for qualitative studies of this nature, as is the range of interviewee types.

Participants from the user population will be recruited to participate in either a focus group session or a one-on-one interview. Children aged 14-17 will be recruited to participate in focus group sessions, if physical space allows, and subjects 18 or older will be recruited for one-on-one interviews. Youth aged 14-17 will be interviewed one-on-one if facilities for focus groups are not available. Librarians will assist in the recruitment process by identifying potential respondents from demographic groups known to be of interest in public access computing and library research, particularly low-income users, who may be less likely to be reached by the telephone survey. Homeless persons, in particular, are assumed to be largely absent from the quantitative data collection. The target distribution of interview and focus group participants is shown in Table 4.

We will use a snowball approach to sampling, starting with the references from library staff and asking users for others that might be able to help fill in areas that emerge as important for investigation from both the telephone survey results and previous interviews. Interviewers will approach subjects using the script in the interview instruments, informing them of the study purpose and sponsors, their rights to refuse to answer questions, confidentiality, and permission to record their responses. Each subject will be asked to fill out the appropriate consent form (assent form for minors); if a minor, their parents will be asked to sign a consent form as well.


Table 4: Target distribution of interview and focus group participants

  Population of interest     Target number of subjects
  Children aged 14-17        10 per site
  Low income                 10 per site
  Homeless                   10 per site
  General population         10 per site


For users in particular, the quota can be reached quickly if focus groups are conducted in lieu of individual interviews. At the larger systems (i.e., Baltimore and Oakland), where two branch libraries will be included in the study in addition to the central building, the maximum number of subjects will be sought.

B3. Methods to maximize response rates and to deal with issues of non-response

B3.1 Telephone survey

Methods to maximize the response rate for the national sample begin with the pretesting activity that precedes the launch of data collection. While debriefing respondents during the first survey pretest, we will explore factors related to the decision to participate so that we can enhance the field protocols at both the screening introduction and the introduction to the substantive questions. Moreover, we will conduct debriefings with the telephone interviewers after the second pretest to gather feedback on how best to gain cooperation and avert break-offs.

We will launch the survey using highly trained interviewers who will undergo especially rigorous training and careful monitoring, both of which incorporate the findings of the pretests. Up to 10 attempts will be made to complete an interview for every sampled telephone number. These calls will be staggered across times of day and days of the week, with numbers rested between attempts, to maximize the chances of making contact. We expect that our proposed approach will maximize the overall response rate to the telephone survey given the essential survey conditions that must be adopted.

B3.2 Case studies

State librarians aided in the selection of public library sites for case studies by identifying libraries that might be unable to participate due to internal constraints. To increase the likelihood that the selected libraries will participate in the case studies, each will be given a $200 incentive. Library users who participate in interviews or focus groups will be given a $20 incentive for their participation. Staff and agency interviewees will not be compensated, since this activity can reasonably be assumed to fall within their normal responsibilities.

B4. Tests of procedures or methods

The research procedures for the telephone survey and case studies have been reviewed by the University of Washington Institutional Review Board and comply with federal regulations regarding the protection of human subjects participating in academic research. Additional testing and review of procedures and instruments is detailed in the following sections.

B4.1 Telephone survey

The development of the telephone survey was an iterative process, involving a thorough review of the literature and extensive consultation with subject matter experts and experienced researchers from the project's advisory committee and research team (Table 5). Throughout the survey development process, the project advisory committee was an active participant, both consulting on the identification of domain areas and high-value question topics and thoroughly reviewing the survey instruments during teleconferences on two occasions. Consistent with Presser and Blair (1994), we found these expert review sessions to be highly productive in identifying and diagnosing problems with the structure of the survey (logic and flow) as well as with the questions themselves (comprehension and task related), both of which were revised accordingly.



Table 5: Project advisory committee

  Rick Ashton, Chief Operating Officer (Urban Libraries Council)
  Michael Barndt, Data Center Analyst (Nonprofit Center of Milwaukee)
  Susan Benton, Strategic Partners Executive (International City/County Management Association, ICMA)
  John Carlo Bertot, Professor and Associate Director (Information Use Management & Policy Institute, Florida State University)
  Cathy Burroughs, Associate Director (National Network of Libraries of Medicine)
  Sarah Earl, Acting Director (International Development Research Centre Evaluation Unit)
  Wilma Goldstein, Senior Advisor for Women's Issues (Small Business Administration)
  Jaime Greene, Program Officer (Bill & Melinda Gates Foundation)
  Carla Hayden, Executive Director (Enoch Pratt Free Library)
  Peggy Rudd, Director and Librarian (Texas State Library and Archives Commission)
  Ross Todd, Associate Professor and Director (Center for International Scholarship in School Libraries, Rutgers University)
  Bernard Vavrek, Director (Center for the Study of Rural Librarianship, Clarion University of Pennsylvania)



In addition to this development-stage review, we performed some limited pretesting of the survey with staff and student volunteers at the University of Washington. Though the purpose of this pretesting was to evaluate the survey’s logic and usability and was not specifically focused on comprehension, it nonetheless revealed some cognitive and procedural issues that were incorporated into a subsequent revision. The current telephone survey instrument also incorporates comments from the OMB.

Two further pretests will be conducted prior to the launch of the telephone survey: cognitive interviews/respondent debriefing; and behavior coding/debriefing of interviewers.

Cognitive interviews/respondent debriefing. Cognitive interviews are especially useful for identifying semantic or comprehension problems that may result in misunderstanding or misinterpretation of survey questions (Oksenberg, Cannell & Kalton, 1991). Face-to-face cognitive interviews with nine qualified volunteer subjects will therefore be conducted by two senior researchers from the project team, aided by graduate students. The subjects will be selected from public access computer users at the Seattle Public Library main branch; librarians will assist in recruitment by identifying potential respondents from the demographic groups of interest, particularly low-income users and members of non-white populations, so that the researchers can select a diverse set of users representative of the populations of interest.

Because the survey contains a great number of screening and funneling questions, cognitive questioning will mostly be conducted after the complete administration of the survey, to avoid breaking the relationship between questions that might occur when using, for example, a think-aloud protocol throughout the survey (Fowler, 1995). A limited number of think-alouds will be used during administration for questions that the respondent has particular trouble answering. To reduce participant fatigue, the retrospective interviews will focus on the qualifying questions, the general use questions, the domain screening questions, and the open-ended questions.

Debriefing questions will focus on the respondents’ understanding of the terms used in the questions, as well as the cognitive process they used to arrive at their answer. Subjects will be encouraged to discuss areas of confusion or ambiguity, with interviewers probing for details as appropriate for each question to ensure the question is understood. Generally, probing questions will ask respondents to elaborate on their interpretation of questions and their answers. A detailed outline of the protocol for these interviews along with testing instruments is attached to this submission.

The interviewers will prepare a separate summary report for each respondent. The interviews will also be recorded and the transcripts analyzed by the research team, who will determine corrective actions for problematic survey questions. The survey instrument will be revised accordingly in advance of further field testing.

Behavior coding/debriefing of interviewers. Following revisions stemming from the cognitive interviews, the survey will be tested under field conditions by Telephone Contact, Inc.8 (TCI) with a modestly sized RDD sample (N=40) in ZIP codes with median incomes in the lowest two national quintiles. Although there is currently no recognized method for determining sample sizes for behavior coding pretests, we agree with Zukerberg et al. (1995) that by focusing on respondents with more problematic circumstances we "raise the likelihood of rapidly identifying questionnaire flaws." Further, this sample size falls within a generally accepted range (cf. Sudman, 1983; Sheatsley, 1983; Courtenay, 1978) and is sensitive to our time and resource constraints.

Interviews will be recorded and two members of the research team will code behaviors that might indicate problems with the survey, such as the interviewer not reading a question as written or the respondent asking for clarification. The behavior code categories developed by Oksenberg, Cannell & Kalton (1991) will be used for this portion of the testing. In keeping with generally accepted guidelines, if a question is reworded by the interviewer more than 15% of the time, or if respondents provide adequate answers less than 85% of the time, the question will be reviewed and revised accordingly (cf. Fowler, 1989).
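For illustration, a small sketch of how these review thresholds could be applied to tallied behavior codes follows; the count structure and question identifiers are hypothetical, and only the 15%/85% decision rules come from the text above.

```python
def flag_questions(code_counts, reword_threshold=0.15, adequate_threshold=0.85):
    """Flag questions for review when interviewers reworded the item more than
    15% of the time or respondents gave adequate answers less than 85% of the
    time (cf. Fowler, 1989).

    code_counts: dict mapping question id -> dict with counts for
    'administrations', 'reworded', and 'adequate_answer' (hypothetical keys).
    """
    flagged = []
    for qid, c in code_counts.items():
        n = c["administrations"]
        if (c["reworded"] / n > reword_threshold
                or c["adequate_answer"] / n < adequate_threshold):
            flagged.append(qid)
    return flagged

# Hypothetical tallies from 40 pretest interviews
counts = {
    "Q5": {"administrations": 40, "reworded": 3, "adequate_answer": 37},
    "Q12": {"administrations": 40, "reworded": 9, "adequate_answer": 30},
}
print(flag_questions(counts))  # ['Q12']
```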

In addition to behavior coding, the interviewers will be monitored in real time by a TCI supervisor who will record any issues observed during the telephone testing and suggest changes. During this testing phase, interviewers will also be provided with a rating form similar to that developed by Fowler and Roman (1992) on which they can record problems they encountered in reading the questions or potential problems with respondents not understanding or having a difficult time answering questions. Interviewers will be notified in advance of this pretesting protocol and will also be debriefed by TCI supervisors.

Once survey data collection is launched, we will review the survey data to check for data-entry errors (e.g., keying, coding, editing, and imputation errors) and inconsistencies, and will examine missing cases for any systematic bias. Cases with missing responses on half or more of our pre-specified key survey items will be deleted from the sample and treated as unit nonresponse. Initial analysis will consist of running descriptive statistics for all variables to identify the center and distribution within the population; bivariate statistics (correlation and cross-tabulation) will then be used to test for associations between variables. For variables where the team identifies an association based on the qualitative evidence, we will conduct multivariate analysis (e.g., multiple regression) to determine the proportion of variance that can be explained by the relationship.
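A minimal sketch of the missing-data screening rule described above follows; the item names, file layout, and example values are hypothetical, and the actual key items will be specified before analysis begins.

```python
import numpy as np
import pandas as pd

# Hypothetical names for the pre-specified key survey items
KEY_ITEMS = ["q5_pac_use", "q10_income", "q12_employment", "q15_education"]

def drop_unit_nonresponse(df: pd.DataFrame, key_items=KEY_ITEMS) -> pd.DataFrame:
    """Remove cases missing half or more of the key items, treating them as
    unit nonresponse rather than item nonresponse."""
    missing_share = df[key_items].isna().mean(axis=1)
    return df[missing_share < 0.5].copy()

# Small illustration: the second case is missing 3 of 4 key items and is dropped
data = pd.DataFrame({
    "q5_pac_use": [1, np.nan, 1],
    "q10_income": [2, np.nan, 3],
    "q12_employment": [1, np.nan, np.nan],
    "q15_education": [4, 2, 3],
})
clean = drop_unit_nonresponse(data)
print(len(clean))  # 2

# Descriptive and bivariate checks would then follow, for example
# clean.describe() and pd.crosstab(clean["q5_pac_use"], clean["q15_education"]).
```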

B4.2 Case studies

An initial pre-test with nine participants of each respondent type was conducted at a Washington library to test the interview guides and field procedures. Results from this test were used to refine field protocols, instruments, and training for investigators, and to begin developing the codebook for qualitative analysis of interview transcripts. In general, library administrators, employees, and patrons were all found to be willing to share their experiences with public access computers.

We will analyze case study data as they are collected in order to aid in identifying the range of responses for each indicator. The coding schemes will reflect the data's emergent themes and will be guided by the study's logic model. The codebook will be used to assign terms to all segments of the data that reflect particular concepts. After the final schemes are developed, tests of intercoder reliability (cf. Krippendorff, 1980) will be conducted with independent coders and final adjustments will be made to the codes.
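As a simple illustration of an intercoder reliability check, the sketch below computes Cohen's kappa, a chance-corrected two-coder agreement statistic, on made-up code assignments; it is a companion check, not a substitute for the Krippendorff (1980) approach cited above.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Two-coder agreement corrected for chance agreement (Cohen's kappa)."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to ten transcript segments by two coders
a = ["employ", "health", "employ", "educ", "educ",
     "health", "employ", "civic", "educ", "employ"]
b = ["employ", "health", "educ", "educ", "educ",
     "health", "employ", "civic", "educ", "civic"]
print(round(cohens_kappa(a, b), 2))  # 0.73
```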

To ensure trustworthiness (reliability and validity) of the qualitative data, we will use several measures (cf., Chatman, 1992; Lincoln & Guba, 1985). Reliability will be ensured through: (1) consistent note-taking, (2) exposure to multiple and different situations using triangulated methods, (3) comparing emerging themes with findings from related studies, (4) employing intracoder and intercoder checks, and (5) analyzing the data for incidents of observer effect. Validity will be assessed as follows:

  • Face validity: ask whether observations fit an expected or plausible frame of reference;

  • Criterion/internal validity (credibility) based on pre-testing instruments, rigorous note-taking, methods, peer debriefing, and member checks or participant verification;

  • External validity: provide “thick description” and comprehensive description of our methods so others can determine if our findings can be compared with theirs;

  • Construct validity: examine data with respect to the public access computing outcomes literature, models of public library use, and principles of information behavior.

B5. Individuals consulted on statistics and on collecting and/or analyzing data

The agency responsible for funding the study, determining its overall design and approach, and receiving and approving contract deliverables is:

Institute of Museum and Library Services

Office of Policy, Planning, Research and Communications

1800 M Street NW, 9th Floor

Washington, DC 20036-5802

Phone: 202-653-4630

Person Responsible: Mamie Bittner

The University of Washington Information School is the prime cooperator for this study. It is responsible for implementing the overall design of the study and developing the data collection instruments. It will also field the case studies using its own staff, and will have responsibility for all analyses of data obtained through the telephone survey, web survey, case studies, and focus groups.

The Information School
Box 352840
Mary Gates Hall, Ste 370
Seattle, WA 98195-2840

Phone: (206) 685-9937
Fax: (206) 616-3152

Persons Responsible: Karen Fisher and Mike Crandall, principal investigators and Matthew Saxton, survey methodologist and statistical expert



The Urban Institute was consulted in the development of the telephone and web survey sampling frames and follow-up study methodology.

The Urban Institute

2100 M Street, N.W.

Washington, D.C. 20037

Phone: 202-833-7200

Persons responsible: Robert Santos, Senior Institute Methodologist and Timothy Triplett, Survey Methodologist



Telephone Contact, Inc. will conduct the telephone survey.

Telephone Contact, Inc.

3800 Hampton Ave., Ste. 200

St. Louis, MO 63109

Phone: 314-353-6666

Person responsible: Joyce Aboussie, President and CEO

References

The American Association for Public Opinion Research. (2004). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. Ann Arbor, MI: AAPOR.

Blumberg, S. J., & Luke, J. V. (2008). Wireless substitution: Early release of estimates from the National Health Interview Survey, July-December 2007. National Center for Health Statistics. Available from http://www.cdc.gov/nchs/nhis.htm.

Brick, J. M., P. D. Brick, et al. (2007). "Cell Phone Survey Feasibility in The U.S.: Sampling and Calling Cell Numbers Versus Landline Numbers." Public Opinion Quarterly 71(1): 23-39.

Chatman, E. A. (1992). The information world of retired women. New York: Greenwood Press.

Courtenay, G. (1978). "Questionnaire Construction." In Hoinville, G., and Jowell, R., Survey Research Practice, chapter 3. Heinemann Educational Books: London.

Durrance, J. C., & Fisher, K. E. (2005). How libraries and librarians help: A guide to identifying user-centered outcomes. Chicago: American Library Association.

Durrance, J. C., & Pettigrew, K. E. (2002). Online community information: Creating a nexus at your library. Chicago: American Library Association.

Fowler, F. J. (1995). Improving survey questions: Design and evaluation. Applied social research methods series, v. 38. Thousand Oaks: Sage Publications.

Fowler, F. Jr. & Roman, A. (1992). A study of approaches to survey question evaluation. Final report for U.S. Bureau of the Census.

Griffiths, J. and King, D. (2008). Public library survey results. InterConnections: The IMLS national study on the use of libraries, museums and the Internet. Washington, DC: Institute of Museum and Library Services. Retrieved from http://interconnectionsreport.org/reports/library_report_03_17.pdf.

Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Newbury Park, CA: Sage

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Miles, M. B., & Huberman, A. M. (1994). Qualitative analysis. Thousand Oaks, CA: Sage.

Office of Management and Budget. (2006). Standards and guidelines for statistical surveys. Washington, D.C.: OMB. Available at http://www.whitehouse.gov/omb/inforeg/statpolicy.html#pr.

Oksenberg, L., Cannell, C., & Kalton, G. (1991). New strategies for pretesting survey questions. Journal of Official Statistics, 7:3, 349-356.

Presser, S., & Blair, J. (1994). Survey pretesting: Do different methods produce different results? Sociological Methodology, 24, 73-104.

Sheatsley, P.B., (1983). "Questionnaire Construction and Item Writing." In Rossi, P.H., Wright, J.D., Anderson, A.B. (eds.) Handbook of Survey Research, chapter 6. Academic Press, Inc.: San Diego, CA.

Sudman, S., (1983). "Applied Sampling." In Rossi, P.H., Wright, J.D., Anderson, A.B. (eds.) Handbook of Survey Research, chapter 5. Academic Press, Inc.: San Diego, CA.

Zukerberg, A. L., Von Thurn, D. R., & Moore, J. C. (1995). Practical considerations in sample size selection for behavior coding pretests. Proceedings of the Section on Survey Research Methods, American Statistical Association, 1116-1121.

1 Q5: Have you used a computer in the public library to access the Internet in the last year?

2 n=1,049

3 I.e., landline is reserved for DSL service, or fax, or at-home business.

4 This represents the half-width of a 95% confidence interval of an estimated percentage near 50%, assuming one person per eligible household is selected, a weighting effect (i.e., design effect due to differential weighting) of 1.5 and a nominal sample size of n=1,130; so that approximately, 3.6% = 1.96 x Sqrt[(1.5 x 0.25)/1,130].

5 The expected disposition tables do not include the nonresponse follow-up sample.

6 E.g., 890 / (890 + 182 + 1,991 + 340) = 0.26

7 E.g. Vacancies in key positions, recent budget cuts, or building conditions.

8 TCI will also be responsible for conducting the telephone survey once the instrument and protocols are finalized.

