Justification B


Museum Survey of Public Support

OMB: 3137-0073


B. Collection of Information Employing Statistical Methods

B1. Respondent Universe and Sampling Methods


The respondent universe consists of museums located in all U.S. states and territories. The sample for the survey will be selected from two different sampling frames. The primary sampling frame will be all museums in the Urban Institute's National Center for Charitable Statistics (NCCS) database. The NCCS database covers all nonprofit organizations with annual gross receipts of $25,000 or more, which are required to file a Form 990 with the Internal Revenue Service; it includes 7,169 museums. The second sampling frame is the Heritage Preservation list, the most complete list of museums available, containing a total of 17,680 museums.

B1.1 Selecting the Sample

While the Heritage Preservation frame is larger, it contains only basic museum contact information. The NCCS database provides extensive financial information on each museum, and its museums have already been coded by museum type. The drawback of the NCCS frame is that it does not contain government museums or small museums that are not required to file IRS Form 990. We therefore propose a dual sampling frame approach in which the sample selected from the NCCS frame (sampling frame 1) will be supplemented by a sample drawn from the Heritage Preservation list (sampling frame 2). The Urban Institute has developed a program that we expect will remove most of the museums on the Heritage Preservation list that also appear in the NCCS database, reducing the Heritage Preservation list to somewhere between 10,000 and 12,000 museums.


Both sampling frames will be divided into separate strata defined by museum type, using a collapsed version of the NCCS type codes. Table B1 (below) shows the NCCS type codes and the estimated percentage of all museums in each classification.




Table B1 - Distribution of NCCS type codes


Type of Museum                                   Percent of All Museums
Combo/Hybrid Museums                             16%
Art Museums                                       8%
Children's Museums                                4%
Folk Arts                                         3%
History Museums                                  18%
Natural History/Natural Science Museums           2%
Science & Technology Museums                      2%
Historical Societies & Historic Preservation     41%
Botanic Gardens & Arboreta                        3%
Zoos and Aquariums                                3%

B1.2 Drawing Sample from Sampling Frame 1 (NCCS)

The first step will be to sort each type stratum by the museums' annual expenditures. Because the museums with the largest expenditures carry disproportionate monetary weight, we will select with certainty the top five museums by expenditures within each type stratum. We will then draw a stratified random sample of the remaining museums, using expenditures as the stratification variable. We will sample close to 50% of the museums in each stratum; the proportion sampled from the smaller strata may be slightly larger to ensure that separate analyses of the smaller museum types can be conducted. We expect to select a sample of about 3,500 museums from sampling frame 1; the selection logic is sketched below.
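
To make this procedure concrete, the sketch below illustrates the frame-1 draw in Python. The data layout, the function name, and the use of three expenditure sub-strata are illustrative assumptions; only the top-five certainty rule and the roughly 50% sampling fraction come from the design described above.

    import random

    def draw_frame1_sample(frame, certainty_n=5, fraction=0.50, size_strata=3):
        # `frame` maps each museum-type stratum to a list of
        # (museum_id, annual_expenditures) tuples -- a hypothetical layout.
        sample = {}
        for museum_type, units in frame.items():
            # Sort the stratum by annual expenditures, largest first.
            ordered = sorted(units, key=lambda m: m[1], reverse=True)
            # Take the top spenders with certainty (selection probability 1).
            selected = ordered[:certainty_n]
            rest = ordered[certainty_n:]
            # Sub-stratify the remainder by expenditure level, then draw a
            # simple random sample of about `fraction` within each sub-stratum.
            width = max(1, len(rest) // size_strata)
            for i in range(0, len(rest), width):
                sub = rest[i:i + width]
                selected += random.sample(sub, round(fraction * len(sub)))
            sample[museum_type] = selected
        return sample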


B1.3 Drawing Sample from Sampling Frame 2 (Heritage Preservation List)

It will not be possible to classify all of the museums on the Heritage Preservation list, so sampling frame 2 will include an additional stratum labeled "museum type unknown." Because the list contains no size indicators, we will not be able to stratify by any measure of museum size. Instead, we plan to sort each type stratum by zip code to improve the geographic distribution of the selected sample and then draw a random sample of museums within each stratum. We will sample around 20% of the museums in each stratum; again, the proportion sampled from the smaller strata may be slightly larger to ensure that separate analyses of the smaller museum types can be conducted. We expect to select a sample of about 2,200 museums from sampling frame 2. The sketch below illustrates one stratum's draw.
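
Sorting by zip code improves geographic spread only when it is paired with a systematic (every k-th) selection, so the minimal sketch below assumes a systematic draw from the zip-sorted list; the function name and data layout are hypothetical.

    import random

    def draw_frame2_stratum(units, fraction=0.20):
        # `units` is a list of (museum_id, zip_code) tuples.
        # Sorting by zip code spreads a systematic draw across geography.
        ordered = sorted(units, key=lambda m: m[1])
        k = max(1, round(1 / fraction))   # sampling interval: every 5th at 20%
        start = random.randrange(k)       # random starting position
        return ordered[start::k]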

B1.4 Combining Samples and Weighting Adjustments

Population weights will need to be constructed to account for the differential probabilities with which museums from different strata and from the two sampling frames are sampled. From interviews completed from the Heritage Preservation list, we will be able to estimate the proportion of all museums that are either government museums or nonprofit museums with revenue of less than $25,000. Based on these estimates, a weighting adjustment will be created to ensure that the sample properly represents the government and small museums that appear only in sampling frame 2. The population weight will also account for varying probabilities of selection within strata and for the museums selected with certainty. Given the richness of the NCCS sampling frame, we will be able to conduct a nonresponse analysis both overall and within strata. If the nonrespondents differ on any key characteristics, a nonresponse adjustment will be built into the final survey weights. The base-weight calculation is sketched below.
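
A minimal sketch of the base-weight logic, assuming each sampled museum's selection probability is recorded (the probabilities shown are illustrative):

    def base_weight(selection_probability):
        # A population (base) weight is the inverse of the unit's
        # probability of selection; certainty units have probability 1.
        return 1.0 / selection_probability

    print(base_weight(1.0))   # 1.0 -- a certainty museum represents only itself
    print(base_weight(0.5))   # 2.0 -- a frame-1 museum sampled at about 50%
    print(base_weight(0.2))   # 5.0 -- a frame-2 museum sampled at about 20%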


Because post-data-collection statistical adjustments require analysis procedures that reflect departures from simple random sampling, we will estimate the "design effect" associated with each weighted estimate. The design effect describes the variance of the weighted sample estimate relative to the variance of an estimate that assumes a simple random sample. In a wide range of situations, the adjusted standard error of a statistic can be calculated by multiplying the usual simple-random-sampling formula by the square root of the design effect (deft). Thus, the formula for computing the 95 percent confidence interval around a percentage is

    p ± 1.96 × deft × √( p(1 − p) / n ),

where p is the sample estimate and n is the unweighted number of sample cases in the group being considered.
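
As a worked example, the sketch below evaluates this interval for hypothetical inputs (an estimate of 50% from 1,000 unweighted cases with a deft of 1.3); none of these figures are study results.

    import math

    def ci_95(p_hat, n, deft):
        # Widen the simple-random-sampling standard error of a proportion
        # by the design effect factor, then form the 95% interval.
        se = deft * math.sqrt(p_hat * (1.0 - p_hat) / n)
        return p_hat - 1.96 * se, p_hat + 1.96 * se

    print(ci_95(0.50, 1000, 1.3))   # approximately (0.46, 0.54)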

The average design effects for this study will be calculated using replicate weights. Replicate weighting is one way to compute sampling errors that reflect a complex sample design. The replication method involves splitting the full sample into smaller groups, or replicate samples, each constructed to mirror the composition of the full sample. Each replicate consists of almost the full sample, with some respondents removed. The variation in the estimates computed from the replicate samples is used to estimate the sampling errors of survey estimates from the full sample, as sketched below.
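
The sketch below shows one common replicate-based calculation, a delete-a-group jackknife. The (G - 1)/G multiplier and the grouping scheme are assumptions for illustration, since the preceding paragraph does not specify the replication variant.

    def replicate_variance(full_estimate, replicate_estimates):
        # Each replicate estimate comes from the sample with one group of
        # respondents removed and the weights re-adjusted; the spread of
        # the replicates around the full-sample estimate estimates the
        # sampling variance (delete-a-group jackknife).
        g = len(replicate_estimates)
        return (g - 1) / g * sum((r - full_estimate) ** 2
                                 for r in replicate_estimates)

    def design_effect(full_estimate, replicate_estimates, n):
        # Design effect: replicate (complex-design) variance relative to
        # the variance of a proportion under simple random sampling.
        v_complex = replicate_variance(full_estimate, replicate_estimates)
        v_srs = full_estimate * (1.0 - full_estimate) / n
        return v_complex / v_srs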


B1.5 Case Studies

The selection of sites for the case studies will be based on available information about state-level funding for museums, preliminary information derived from the three pretest sites, and feedback from IMLS program officers and directors. The sites will be selected to illustrate a range of state-level systems through which public funding is delivered to museums and the different purposes for which public funding is allocated to museums through state-level funding mechanisms. While there are too few case studies to provide a statistically or nationally representative sample, we will strive for a sample that gives a picture of the full range of state-level systems for funding museums. The case studies will be used to explore the ways in which, and the perceptions about how, different public funding mechanisms affect the quality of museum services. In each site, we will select museums and government agencies that can speak to the variety of experiences of working within the public funding system. For example, we will look for museums that represent differing disciplines and sizes, that operate in urban or rural settings, and that serve special populations such as children and youth.


B2. Procedures for the Collection of Information

B2.1 Web Survey

All letters sent to respondents for the survey will include endorsements from the partner institutions (the Urban Institute and the Institute of Museum and Library Services). The protocol for survey management will include postcard reminders, follow-up letters, and, if needed, telephone calls to ensure a high response rate. The primary mode of data collection will be a web survey using Dillman's tailored design method (TDM).1 The key element of this TDM survey procedure is to carefully design and time contacts with the sampled respondents. All sampled respondents will be given a respondent ID number to track whether or not they have completed the survey. The letters and e-mails will be personalized with the respondent's name. All letters will be sent on Urban Institute letterhead and will include the scanned signature of the principal investigator or another person who would add credibility to the study. For this project, UI anticipates up to five contacts in order to maximize the response rate. The first contact will be a pre-notification letter to all organizations in the sample. The second contact will be a cover letter along with a link to the web survey that includes the respondent's unique password. The third contact will be a postcard reminder. For the fourth contact, UI will call a sample of non-responders by telephone, explaining the importance of this project and asking them to complete the questionnaire. For the fifth and final contact, UI will send a second reminder postcard to all non-responders.


B2.2 Case Studies

Selected case study organizations will be sent a letter informing them of the study and requesting their participation. Museums will then be contacted to arrange the local site visit. The initial telephone contact will provide background about the project and seek additional information on organizations and partners in order to identify key respondents. Based on this information, we will contact respondents and determine the best timing for the visit in order to accommodate the schedule of local respondents.


The case study site visits will be conducted by two-person teams drawn from Urban Institute staff and affiliated researchers. Each team will be composed of one senior and one junior researcher. Senior staff on this project are experienced in field-based qualitative research and the semi-structured interviewing that will be used in this study. All researchers involved in the fieldwork will be trained with respect to the objectives of the study and the procedures to follow during the site visits. In the training, team members will review the different discussion guides and become familiar with the types of information sought in the study.


B2.3 Quality Control Procedures for the Web Survey

Ensuring high-quality data procedures is a priority of the Urban Institute. The Urban Institute strives to preserve data integrity and security, and strict confidentiality guidelines are a key component of the Institute's internal data quality control process. The Institute operates two Hewlett-Packard Alpha servers running the highly secure and reliable OpenVMS operating system. A firewall monitors and evaluates all attempted connections from the Internet to our public web servers and our private network. Up-to-date anti-virus software runs on our desktop PCs and our servers, and we implement other "best practices" for securing our servers and desktop PCs. Backup procedures are also strictly employed by the Institute according to each project's specific data needs.

The web-based survey can incorporate all of the same branching, error checks, and data validation procedures that are available in computer-assisted telephone interviewing (CATI) surveys. Furthermore, our web-based survey data are saved page by page to preserve all data in progress throughout the data collection period. Finally, the Urban Institute has fully tested its web-based surveys to ensure that they work on a wide array of platforms and browser types, accommodating the vast majority of respondent computers and web browsers.

Backup servers are connected to each of the main network servers at each location and are available if either of the main local area networks fails. A RAID array of hard drives, currently configured as RAID 5, is used for backup of daily interviewing data. Thus, interview data are written simultaneously to multiple drives to ensure backup of each interview as it is being conducted. Interview data are stored on both the main and backup servers as they are collected, and a daily backup is created at the end of each day.

Once the nationwide survey is launched, UI will begin to monitor for errors and troubleshoot any problems respondents may have with survey access or other issues.
B2.4 Quality Control Procedures for the Case Studies

Prior to visiting sites or speaking with any potential respondents, Urban Institute staff will review all available materials about the local site and the selected museums and government funding agencies. This will enable us to identify the appropriate individuals to interview on site who can best inform the central questions of the study.


As noted earlier, and as is common practice in field-based research, project staff will produce detailed notes of their interviews and a full site summary for each case study. Both will be reviewed by fellow team members to ensure that gaps or inconsistencies are resolved in a timely fashion and that the data are reliable for analysis and for production of briefing memoranda and the final report.

B3. Methods to Maximize Response Rates and Deal with Nonresponse


B3.1 Web Survey

The Urban Institute employs strategies for achieving the highest possible survey response rates within each project's specific budget constraints. In this web-based study, the development of well-designed survey instruments, personalized mailings, and multiple contacts will all contribute to our best effort to attain an acceptable response rate. While our goal is to obtain a high response rate, the Urban Institute, like other reputable survey organizations, cannot guarantee a specific response rate because of extraneous factors beyond our control that can arise during the implementation of a project. The Urban Institute can guarantee to use the most current techniques based on reputable survey methodological research. The Urban Institute calculates response rates based on the CASRO and AAPOR standard definitions.
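
As an illustration, the sketch below computes AAPOR Response Rate 1, completed interviews divided by all known-eligible and unknown-eligibility cases; the disposition counts are placeholders, not study data.

    def aapor_rr1(complete, partial, refusal, non_contact, other, unknown):
        # RR1 = I / ((I + P) + (R + NC + O) + U)
        eligible = complete + partial + refusal + non_contact + other
        return complete / (eligible + unknown)

    # Placeholder dispositions for a hypothetical 3,500-case sample.
    print(aapor_rr1(1800, 100, 600, 700, 100, 200))   # about 0.51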


The topic should be of importance to most museums, and we therefore hope for a relatively high level of cooperation and a response rate above 50%. However, we are mindful of recent museum studies in which response rates have been well below 50%. In the event that the final response rate is less than 80%, we plan to conduct a nonresponse analysis using auxiliary information from the NCCS database and other sources to determine whether the nonrespondents differ in any significant way. If they do, we will include a nonresponse adjustment as part of our final survey weight.

B3.2 Case Studies

For the case studies, it is expected that all (or nearly all) of the museums and government agencies we approach will agree to participate in the study. We will work closely with IMLS to engage these respondents and assuage any concerns about participating in the study. Once we have secured the selected sites, site visitors will work closely with a person assigned to be the primary contact at the museum or agency to help in scheduling the site visit. One member of the two-person site visit team will take responsibility for working with the primary contact person to handle the scheduling and logistics of the site visit. Dates for site visits will be set at least one month in advance to permit ample time to schedule interviews. Scheduled interview appointments will then be confirmed via email the week prior to the visit. We will request that a quiet setting that is as private as possible (e.g., a conference room) be made available to interview those who do not have private offices, in order to encourage respondents to feel they can talk freely. Based on our experience, following these established field visit protocols leads to an interview completion rate approaching 100 percent of those scheduled in advance.



B4. Tests of Procedures or Methods to be Undertaken


B4.1 Web Survey

We will conduct a small pilot study to test the survey prior to main data collection. Given that this is a new questionnaire, pretesting the survey instrument with a representative sample of the target respondent population is essential. The pilot will include contacting many of the pretest respondents by phone to verify the clarity and understandability of the instrument. The pretest will resemble the actual survey as closely as possible, using the same contact procedures that will be employed in the main survey. This will allow us to evaluate and, if necessary, revise not only the survey instrument but also our data collection procedures. We plan to complete 20 to 25 pilot interviews, given the potential skip patterns and the variety of museums included in our sample.


B4.2 Case Studies

Case study discussion guides were developed based on interviews conducted with museum administrators and funders from public agencies in three pretest sites: Michigan, Maine, and Washington State. Nine respondents were interviewed in each state. For these interviews, researchers selected respondents who represented key characteristics of the museum field. Respondents from museums represented the range of museum types and sizes; respondents from government represented a range of funding agencies both within and outside the cultural sector; and respondents represented entities serving rural and urban areas and special populations, such as children and youth. The discussion guides have been reviewed for content and methodology and revised to reflect comments from IMLS and the research team, who have conducted many similar studies. Overall, reviewers report that the discussion guides capture the intended data within the prescribed amount of time, minimizing the burden on respondents.



B5. Individuals Consulted on Statistics and on Collecting and/or Analyzing Data


The agency responsible for funding the study, determining its overall design and approach, and receiving and approving contract deliverables is:


U.S. Institute of Museum and Library Services

Office of Policy, Planning, Research and Communications

1800 M Street NW, 9th Floor

Washington, DC 20036-5802



The Urban Institute is the prime cooperator for this study. It is responsible for implementing the overall design of the study and developing the data collection instruments. It will also conduct the web survey and field the case studies using its own staff, and it will have responsibility for all analyses of data obtained through the web survey, case studies, and focus groups.


The Urban Institute

2100 M Street, NW

Washington, DC 20037

(202) 833-7200


Persons Responsible: Carlos Manjarrez and Carole Rosenstein, Co-Principal Investigators, and Timothy Triplett, Survey Methodologist and Statistical Expert


Direct Contact Information:

Manjarrez (phone: 202-261-5821; email: [email protected])

Rosenstein (email: [email protected])

Triplett (phone: 202-261-5579; email: [email protected])


1 Dillman, Don A. 2006. Mail and Internet Surveys: The Tailored Design Method: 2007 Update with New Internet, Visual, and Mixed-Mode Guide. Hoboken, NJ: Wiley.


