FNS Generic Clearance for Pre-testing,
Pilot, and Field Test Studies
OMB# 0584-NEW
REQUEST FOR OMB Clearance
Supporting Statement Part B
Prepared by:
Food and Nutrition Service (FNS)
U.S. Department of Agriculture
March 23, 2016
Table of Contents
B.1 Respondent Universe and Sampling Methods
B.2 Procedures for the Collection of Information
B.3 Methods to Maximize the Response Rates and to Deal with Nonresponse
B.4 Test of Procedures or Methods to be Undertaken
B.5 Individuals Consulted on Statistical Aspects & Individuals Collecting and/or Analyzing Data
B.1 Respondent Universe and Sampling Methods
Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.
Under this generic clearance, data will be collected for a variety of individual collections for the purpose of instrument testing and development activities. For the most part, the small-scale testing activities undertaken under this clearance will involve purposive samples, with respondents selected either to cover a broad range of demographic subgroups or to include specific characteristics related to the topic of the study. The test sample will vary by study, will focus on the particular subgroups of interest, and may include nutrition assistance program administrators, program providers, or program participants. Test samples will be small and will not be nationally representative of the potential respondent universe. The testing of qualitative data collection instruments (e.g., focus group protocols) and quantitative data collection instruments (e.g., questionnaires) will be undertaken for the purpose of item validation, cognitive testing, or testing of mode of delivery (e.g., web versus computer-assisted telephone interviewing (CATI) administration) with a purposive sample of respondents. Statistical analysis will be limited to the test data and will answer specific questions about the data collection instruments, protocols, or procedures. The test data will be descriptive and will not be used to generalize the test findings to the larger study universe.
The methods proposed for use in questionnaire and assessment development are as follows:
Field or pilot test. For the purposes of this clearance, field tests are defined as data collection efforts conducted among either purposive or statistically representative samples, for which evaluation of the questionnaire and/or procedures is the main objective; only research and development (R&D) and methodological reports may be published from them, and no statistical reports or data sets will be published. Field tests are an essential component of this clearance package because they serve as the vehicle for investigating basic item properties, such as reliability, validity, and difficulty, as well as the feasibility of methods for standardized administration (e.g., computerized administration) of forms. Under this clearance a variety of surveys will be tested, and the exact nature of the surveys and the samples is undetermined at present. However, given the small scale of the tests, we expect that some will not involve representative samples. In these cases, samples will be convenience samples, limited to specific geographic locations, and may involve expired rotation groups of a current survey or blocks that are known to have specific aggregate demographic characteristics. The needs of the particular sample will vary with the content of the survey being tested, but in no instance will the selection of sample cases be completely arbitrary.
Behavior coding. This method involves applying a standardized coding scheme to the completion of an interview or questionnaire, either by a coder using a recording of the interview or by a "live" observer at the time of the interview. The coding scheme is designed to identify situations that occur during the interview that reflect problems with the questionnaire. For example, if respondents frequently interrupt the interviewer before the question is completed, the question may be too long. If respondents frequently give inadequate answers, this suggests there are other problems with the question. Quantitative data derived from this type of standardized coding scheme can provide valuable information to identify problem areas in a questionnaire, and research has demonstrated that this is a more objective and reliable method of identifying problems than the traditional interviewer debriefing, which is typically the sole tool used to evaluate the results of a traditional field test (Cannell, Kalton, Oksenberg, Bischoping, and Fowler, New Techniques for Pretesting Survey Questions, 1989).
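To illustrate how quantitative data from a standardized coding scheme might be summarized, the following Python sketch tallies behavior codes per question and flags questions with high problem rates. The codes, data, and review threshold are invented for illustration only and are not part of the clearance itself.

```python
from collections import Counter

# Hypothetical behavior codes applied to each question administration:
# "AA" = adequate answer, "IN" = respondent interrupted the interviewer,
# "QA" = qualified/inadequate answer, "RC" = request for clarification.
codings = [
    ("Q1", "AA"), ("Q1", "AA"), ("Q1", "IN"), ("Q1", "AA"),
    ("Q2", "QA"), ("Q2", "RC"), ("Q2", "QA"), ("Q2", "AA"),
]

PROBLEM_CODES = {"IN", "QA", "RC"}

def problem_rates(codings):
    """Share of administrations of each question coded with any problem behavior."""
    totals = Counter(q for q, _ in codings)
    problems = Counter(q for q, code in codings if code in PROBLEM_CODES)
    return {q: problems[q] / totals[q] for q in totals}

rates = problem_rates(codings)
# Questions whose problem rate exceeds an (illustrative) review threshold
# would be candidates for revision.
flagged = sorted(q for q, r in rates.items() if r > 0.15)
```

In practice the coding scheme, the set of codes treated as problems, and the review threshold would be specified in each individual study's testing plan.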
Interviewer debriefing. This method employs the knowledge of the employees who have the closest contact with the respondents. In conjunction with other methods, we plan to use this method in our field tests to collect information about how interviewers react to the survey instruments.
Exploratory interviews. These may be conducted with individuals to understand a topical area and may be used in the very early stages of developing a new survey. They may cover discussions related to administrative records (e.g., what types of records, where, and in what format), subject matter, definitions, etc. Exploratory interviews may also be used in evaluating whether there are sufficient issues related to an existing data collection to consider a redesign.
Respondent debriefing questionnaire. In this method, standardized debriefing questionnaires are administered to respondents who have participated in a field test. The debriefing form is administered at the end of the questionnaire being tested, and contains questions that probe how respondents interpret the questions and whether they have problems in completing the survey/questionnaire. This structured approach to debriefing enables quantitative analysis of data from a representative sample of respondents, to learn whether respondents can answer the questions, and whether they interpret them in the manner intended by the questionnaire designers.
Follow-up interviews (or re-interviews). This involves re-interviewing or re-assessing a sample of respondents after the completion of a survey or assessment. Responses given in the re-interview are compared with the respondents’ initial responses for consistency. In this way, re-interviews provide data for studies of test–retest reliability and other measures of the quality of the data collected. In turn, this information aids in the development of more reliable measures.
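As a minimal sketch of one common re-interview consistency measure, the Python snippet below computes the gross difference rate: the share of paired responses that disagree between the original interview and the re-interview. The response data are invented for illustration.

```python
# Paired responses from a hypothetical original interview and re-interview.
original    = ["yes", "no", "yes", "yes", "no", "yes"]
reinterview = ["yes", "no", "no",  "yes", "no", "yes"]

def gross_difference_rate(first, second):
    """Fraction of paired responses that disagree between the two interviews."""
    if len(first) != len(second):
        raise ValueError("response lists must be the same length")
    disagreements = sum(a != b for a, b in zip(first, second))
    return disagreements / len(first)

gdr = gross_difference_rate(original, reinterview)
```

A low gross difference rate suggests the item elicits consistent answers; items with high rates would be examined for ambiguous wording. Fuller reliability analyses (e.g., chance-corrected agreement statistics) would be specified in each study's plan.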
Split sample experiments. This involves testing alternative versions of questionnaires and other collection methods, such as mailing packages and incentive treatments, at least some of which have been designed to address problems identified in draft questionnaires or questionnaires from previous survey waves. The use of multiple questionnaires, randomly assigned to permit statistical comparisons, is the critical component here; data collection can include mail, telephone, Internet, personal visit interviews, or group sessions at which self-administered questionnaires are completed. Comparison of revised questionnaires against a control version, preferably, or against each other, facilitates statistical evaluation of the performance of alternative versions of the questionnaire. Split sample tests that incorporate questionnaire design experiments are likely to have a large sample size (e.g., several hundred cases per panel) to enable the detection of statistically significant differences, and facilitate methodological experiments that can extend questionnaire design knowledge more generally for use in a variety of data collection instruments.
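The statistical comparison between a control panel and a revised panel can be sketched with a standard two-proportion z-test, shown below using only the Python standard library. The panel sizes and response counts are hypothetical, chosen to match the "several hundred cases per panel" scale described above.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test, e.g., comparing the response rates
    of a control questionnaire panel (a) and a revised panel (b).
    Uses the pooled-proportion standard error; returns (z, p-value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical result: 312/400 responses on the control version versus
# 344/400 on the revised version (78% vs. 86%).
z, p = two_proportion_z(312, 400, 344, 400)
```

With several hundred cases per panel, a difference of this size is detectable at conventional significance levels, which is the rationale for the sample sizes noted in the text.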
Cognitive and usability interviews. This method involves intensive, one-on-one interviews in which the respondent is typically asked to "think aloud" as he or she answers survey questions. A number of different techniques may be involved, including asking respondents to paraphrase questions, probing questions asked to determine how respondents came up with their answers, and so on. The objective is to identify problems of ambiguity or misunderstanding, or other difficulties respondents have answering questions. This is frequently the first stage in revising a questionnaire.
Focus groups. This method involves group sessions guided by a moderator, who follows a topical outline containing questions or topics focused on a particular issue, rather than adhering to a standardized questionnaire. Focus groups are useful for surfacing and exploring issues which people may feel some hesitation about discussing (e.g., confidentiality concerns).
B.2 Procedures for the Collection of Information
Describe the procedures for the collection of information including:
Statistical methodology for stratification and sample selection,
Estimation procedure,
Degree of accuracy needed for the purpose described in the justification,
Unusual problems requiring specialized sampling procedures, and
Any use of periodic (less frequent than annual) data collection cycles to reduce burden.
Data collection procedures for the testing conducted under this clearance will be varied and will most likely include focus group interviews and surveys. Statistical results will address a variety of issues, including response rates, item nonresponse rates, frequency distributions of data items, and analysis of behavior coding and respondent debriefing data.
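To make the first two of those statistics concrete, the sketch below computes a unit response rate and item nonresponse rates from a handful of invented test records (a `None` record marks a sampled case that never responded; a `None` item value marks a skipped question).

```python
# Invented test records for illustration only.
records = [
    {"q1": "a", "q2": "x"},
    {"q1": "b", "q2": None},   # respondent skipped q2
    {"q1": None, "q2": "y"},   # respondent skipped q1
    None,                      # sampled case that never responded
    {"q1": "a", "q2": "x"},
]

responded = [r for r in records if r is not None]
unit_response_rate = len(responded) / len(records)  # completed / sampled

def item_nonresponse(completed, item):
    """Share of responding cases that left a given item blank."""
    missing = sum(1 for r in completed if r[item] is None)
    return missing / len(completed)

item_rates = {q: item_nonresponse(responded, q) for q in ("q1", "q2")}
```

Actual studies would apply the response-rate definitions specified in their individual testing plans; this sketch shows only the simplest completed-over-sampled form.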
The testing of data collection instruments and procedures will be conducted by the evaluation contractor for the study. The evaluator conducting the tests will have the requisite training and experience for the type of instrument being tested.
The sample selection and recruitment methods will be tailored for each study. Some common methods of recruitment include posting or distributing flyers at locations frequented by members of the population of interest, recruiting participants through community-based organizations, snowball sampling (i.e., word of mouth), or other forms of advertising.
The number of tests to be conducted will vary by individual study. For example, an instrument being designed for administration to a diverse population will need to be tested with members of different subpopulations within the larger population to determine if there are differences in how subgroups of respondents interpret the questions. Development of study-specific questions will require more extensive testing. A survey instrument that incorporates many previously validated questions may require less testing.
B.3 Methods to Maximize the Response Rates and to Deal with Nonresponse
Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.
Depending on the study objectives, targeted recruitment efforts will be used to reach and recruit the number of respondents needed for testing the data collection instruments (e.g., through schools, local food banks, local churches, community-based organizations, Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) clinics, or Supplemental Nutrition Assistance Program (SNAP) offices). In general, callbacks, reminder phone calls or letters, and second questionnaires will be used to maximize response rates and encourage participants to respond. Tallies will be kept of the number of non-respondents to all testing activities. More specific information will be provided to OMB in the description submitted with each data collection instrument.
B.4 Test of Procedures or Methods to be Undertaken
Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.
This entire submission consists of tests of data collection instruments and survey procedures. FNS expects that all the tests conducted under this clearance will result in improved FNS-administered questionnaires, interviews, focus groups, and/or procedures and thus reduced respondent burden.
B.5 Individuals Consulted on Statistical Aspects & Individuals Collecting and/or Analyzing Data
Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.
Advice on statistical aspects of each individual survey will be sought as the testing program proceeds. Depending on the nature of the research, FNS staff and research and evaluation contractors will have responsibility for data collection and analysis.
Whenever it is methodologically sound and appropriate to do so, programs will obtain input from statisticians regarding the development, design, conduct, and analysis of the testing and evaluation methods planned for the data collection and estimation procedures. This statistical expertise will be available from FNS statisticians and contractors. Technical assistance in data collection and estimation procedures may be sought, in some cases, outside of FNS from experts at the National Agricultural Statistics Service (NASS), other Federal agencies, or outside of the government.