ICRB_AdCouncil_2011_Tracking_English_rev_6.21

Food Safety Education Campaign-Tracking Survey

OMB: 0583-0150


B. SUPPORTING STATEMENT FOR BE FOOD SAFE CAMPAIGN SURVEY (English Survey)



1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.


The population of interest for this information collection is parents between the ages of 20 and 40 who are caregivers for children between the ages of 4 and 12. Based on a 20% incidence of this target in the general U.S. population (308,745,538; U.S. Census 2010), the universe of English-speaking respondents is estimated at 61,749,107. Recruitment quotas will include gender, age, and race/ethnicity as appropriate to mirror census estimates. In addition, samples will reflect variety in geographic density (e.g., urban, suburban, rural) and region of the country.

Six hundred (600) English-speaking parents will be recruited over the course of two weeks (racial quotas will reflect the most recent census figures). Subjects will be contacted via random-digit dialing of landline telephone numbers; cell-phone respondents will not be included. Based on past experience, we expect a response rate among age-appropriate respondents of 20%. Please refer to Section 2 for more detail regarding response rates, cell-phone respondents, and non-response bias. Approximately 3,000 non-respondents will be contacted but will not qualify.


Respondent Type     Universe of Eligible    Estimated Total Number    Estimated Total Number    Total
                    Respondents             of Respondents            of Non-Respondents
Benchmark Survey    61,749,107              600                       3,000                     3,600
TOTAL               61,749,107              600                       3,000                     3,600





2. Describe the procedures for the collection of information including:

  • Sample size determination

For this study, a power analysis and a resulting sample size estimate were conducted as an important aspect of the study design. Without these calculations, a sample size may be too high or too low. If the sample size is too low, the study will lack the precision to provide reliable answers to the questions it is investigating. If the sample size is too large, time and resources will be wasted, often for minimal gain.

Therefore, N = 600 is recommended as the optimal sample size given the constraints. It provides the following:

  1. Power equal to or greater than .929 to detect a 7% actual difference in population response (please see Figure 1 below)

  2. A 4% margin of error (please see Figure 2 below)

Figures 1 & 2
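The stated figures can be reproduced with a short sketch, assuming a one-sample z-test for a proportion against a worst-case baseline of p = 0.5 at 95% confidence (the document does not state the exact test used, so this is an illustrative reconstruction, not the contractor's calculation):

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1.0 - p) / n)

def power_one_sample(n: int, delta: float, p0: float = 0.5, z: float = 1.96) -> float:
    """Approximate power of a one-sample z-test for a proportion:
    probability of detecting a shift of `delta` from baseline p0."""
    se = math.sqrt(p0 * (1.0 - p0) / n)
    return norm_cdf(delta / se - z)

moe = margin_of_error(600)            # ~0.040 -> the 4% margin of error
power = power_one_sample(600, 0.07)   # ~0.929 -> power at a 7% difference
```

With N = 600 this yields a margin of error of about ±4.0% and power of about .929 at a 7-point shift, matching the two figures cited above.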



  • Statistical methodology for stratification and sample selection

To address representation issues and potential sampling error, we recommend the following three-stage RDD (random-digit-dialing) CATI approach for the USDA Food Safety Tracking Study:



Geographic Stratification: A sample frame will be developed using a stratified random sampling procedure that will proportionately divide the U.S. population into sampling units of geographic subpopulations (strata).  These strata will be defined by standard U.S. Census Divisional Mapping, where the U.S. is divided into nine (9) divisions that can be benchmarked for population parameters. 

  1. Div 1:  New England

  2. Div 2:  Middle Atlantic

  3. Div 3:  East North Central

  4. Div 4:  West North Central

  5. Div 5:  South Atlantic

  6. Div 6:  East South Central

  7. Div 7:  West South Central

  8. Div 8:  Mountain

  9. Div 9:  Pacific



Samples are first systematically stratified to each ZIP code in the survey area in proportion to the sampling frame selected.  After a U.S. Census geographic Division has been defined as a combination of ZIP codes, the sum of the estimated telephone households (or requested frame value) is calculated and divided by the desired sample size to produce a sampling interval and to determine the number of sample elements to be allocated to each Census Division in the frame.



The ZIP codes are ordered by state and county. A random number between zero and one is generated and multiplied by the sampling interval to calculate a random starting point between zero and the sampling interval.  A cumulative count of elements is calculated.  At the point at which the accumulation reaches the random starting point, a specific ZIP code is selected, and the next sampling point is one interval away.  Accumulation continues in this fashion until the entire sample frame has been apportioned.
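The systematic-selection procedure above can be sketched as follows. This is a minimal illustration of the random-start, fixed-interval method; the ZIP codes and household counts are hypothetical, and the real frame would be ordered by state and county as described:

```python
import random

def systematic_sample(frame, n, seed=None):
    """Systematic selection with a random start.

    `frame` is an ordered list of (zip_code, estimated_households) pairs.
    The interval is total households / n; a random start is drawn within
    the first interval, and each ZIP whose cumulative count crosses a
    sampling point is selected (large ZIPs may be hit more than once).
    """
    rng = random.Random(seed)
    total = sum(h for _, h in frame)
    interval = total / n                  # sampling interval
    point = rng.random() * interval       # random start within first interval
    selected, cumulative = [], 0.0
    for zip_code, households in frame:
        cumulative += households
        while cumulative >= point and len(selected) < n:
            selected.append(zip_code)     # this ZIP crosses the sampling point
            point += interval             # next point is one interval away
    return selected

# Hypothetical four-ZIP frame, drawing 2 sampling points:
frame = [("10001", 400), ("10002", 100), ("10003", 300), ("10004", 200)]
picks = systematic_sample(frame, n=2, seed=1)
```

Because selection probability is proportional to each ZIP's household count, this allocates sample toward more populous areas, which is the intent of the proportionate stratification described above.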


Following the survey, the Ad Council will compare responses from subgroups within the sample to note any significant differences at the 90-95% confidence levels. To qualify for this analysis, each subgroup must have at least 80 respondents. Such analysis will be conducted for any categories that meet that sample size, most likely age, gender, income, employment status, and educational status. The geographic divisions listed above are unlikely to be large enough to qualify, and therefore inferences will not be made based on geography.
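The subgroup comparison described above can be sketched as a two-proportion z-test with the 80-respondent floor enforced. The proportions and subgroup sizes below are hypothetical, purely for illustration:

```python
import math

MIN_CELL = 80  # minimum respondents per subgroup, per the analysis plan

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z statistic comparing two subgroup proportions,
    using the pooled-proportion standard error."""
    if min(n1, n2) < MIN_CELL:
        raise ValueError("subgroup too small for comparison")
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# e.g. 60% awareness among 300 women vs. 52% among 300 men (hypothetical):
z = two_proportion_z(0.60, 300, 0.52, 300)
significant_95 = abs(z) > 1.96    # 95% confidence threshold
significant_90 = abs(z) > 1.645   # 90% confidence threshold
```

Here the 8-point gap is just significant at the 95% level; a subgroup below 80 respondents is rejected outright, matching the qualification rule above.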



Random-digit dialing combined with EPSEM (Equal Probability Selection Method): The random-digit-dialing data collection technique will provide representative sampling across all strata.  Fielding efficiencies are gained with business-number removal and pre-screening for disconnected numbers.  Further, by combining EPSEM, we will ensure that every possible telephone number in working geographic strata has an equal probability of selection.


Screeners and Quotas: These are part of the original study design per the RFP and will be used for specific targeting objectives (i.e., gender, age, children in the household, meal preparation, and race/ethnicity) as a function of specific demographic/behavioral quota definitions, thus redistributing the burden associated with sampling error to geographic representation.



  • Estimation procedure,

As part of the sampling procedure, demographic and behavioral screeners/quotas will be used; therefore, the primary sample statistic for estimation error defaults to geographic representation.  We feel this is mitigated through the use of geographic stratification prior to sampling (please see above). 



  • Degree of accuracy needed for the purpose described in the justification,

Since the goal is the longitudinal tracking of effects within specific cohort groups, a 90-95% confidence interval is recommended.

  • Unusual problems requiring specialized sampling procedures, and

None anticipated.


  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.

In order to track shifts in attitudes, perceptions, and awareness, benchmark and post-wave surveys are needed. This submission covers the benchmark survey only.



3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.



We will partner with an industry-leading global phone database vendor, Survey Sampling International ("SSI").  SSI starts with a database of all directory-listed households in the USA. Using area code and exchange data regularly obtained from Telcordia and additional databases, this file of directory-listed telephone numbers is subjected to an extensive cleaning and validation process to ensure that all exchanges are currently valid, assigned to the correct area code, and fall within an appropriate set of ZIP codes.



Non-response Issues:  These can be divided into four categories:

  1. Contact Rate

  2. Cooperation Rate

  3. Early Terminates

  4. Non-Response Bias



Contact Rate:  No answers, answering machines, and busy signals.  We attempt an average of 10 callbacks to ensure adequate representation.



Cooperation Rate:  Immediate refusals, both hard and soft.  We implement a best-practice training guide for our interview staff to recognize the nature of the refusal and use the appropriate level of rebuttal.



Early Terminates:  These are analyzed closely during the fielding of any study, to determine causes and any necessary modifications needed for the survey instrument and/or sample frame.



Non-Response Bias:  Since it is quite difficult and often impractical to design a survey to measure the difference between respondent and non-respondent answers, we focus our attention on reducing the non-response rate in order to reduce bias.  The first (and possibly most important) step in reducing non-response bias is a properly designed survey. The design of the survey can have a large impact on whether respondents choose to partake in the survey, and to what extent they complete it. A personable yet professional introduction, interesting survey content, short survey length, clear and concise wording, practical and appealing incentives, multiple follow-up calls to non-respondents, and mindfulness of the time, day, or season in which the survey is fielded can all reduce the non-response rate.



Non-Coverage (cell-phone-only households, CPO, vs. landline):  In this study, conventional landlines (phone exchanges) will be the primary and only source of data collection. The non-coverage problem currently associated with this method, while not damaging for estimates for the entire population, is well documented to create biased estimates on certain variables for young adults, 25-30 percent of whom are cell-only according to the most recent Pew Research Center and government estimates. As CPO households have grown, telephone surveys that rely on landline samples have experienced a decline in the percentage of younger respondents. Over the past five years, according to the Pew Research Center, the average percentage of respondents aged 18-34 years in unweighted landline samples declined from 21 percent in 2005 to 20 percent in surveys conducted in 2006, 19 percent in 2007, and 17 percent thus far in 2010. This decline is consistent with the fact that the CPO household population is heavily tilted toward young people. Since this is a well-documented trend, we are confident that the survey screeners and quotas associated with age (in this case we will only interview subjects between 20 and 40 years of age) will alleviate most, if not all, of the bias that would otherwise skew the mean age higher in the overall sample.



Non-Response Bias Questions:  The survey will include a number of voluntary demographic questions (i.e., employment status, race/ethnicity, income, etc.) that will serve as test statistics when compared to the latest U.S. Census Bureau American Community Survey. Expressing each demographic test statistic as a percentage of the total allows an easy check during fielding to verify representation. Additionally, this may inform a potential data-weighting scheme to reconcile any observable discrepancies on an overall U.S. national basis.
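The weighting scheme alluded to above could take the form of simple post-stratification cell weights. This is a minimal sketch under the assumption that both sample and census shares are available as percentages of the total; the employment figures are illustrative numbers only, not survey or Census data:

```python
def poststratification_weights(sample_pct, census_pct):
    """Cell weights that align sample demographic shares with census shares.

    Both arguments map a demographic category to its percentage of the
    total; a weight greater than 1 up-weights an under-represented
    category, and a weight below 1 down-weights an over-represented one.
    """
    return {cat: census_pct[cat] / sample_pct[cat] for cat in census_pct}

# Hypothetical discrepancy check (illustrative percentages only):
sample = {"employed": 70.0, "not_employed": 30.0}
census = {"employed": 60.0, "not_employed": 40.0}
weights = poststratification_weights(sample, census)
```

Applying these weights to respondents in each category would restore the census shares in weighted tabulations; in practice, weighting across several demographics at once is usually done by iterative raking rather than one variable at a time.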



4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separate­ly or in combination with the main collection of information.



Cayenne Global will perform a "soft launch" prior to formal CATI field operations to ensure proper questionnaire flow, length of interview, and programming logic/data capture. These "test" interviews will be facilitated by our quality assurance/control team using test subjects in and around the business premises, and will include a minimum of 10 interviews. These test interviews will not be included, in part or in whole, in the base of N = 600. Furthermore, interviewer feedback throughout the duration of field operations will provide a constant source for any potential corrective actions.



5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.



Include all individuals who have contributed to or commented on the survey, sample frame, statistical methods or other aspects of the collection.



The information collection will be conducted by a contractor:


Kevin Smith

Managing Partner

Cayenne Global, LLC

San Francisco,  CA  94517


925-672-2256 (voice)

[email protected]



The following individuals contributed to the study design, screener, and questionnaire:


Nirmal Deshpande

Research Manager

The Advertising Council

815 Second Avenue, 9th Floor

New York, NY 10017

212-984-1937

[email protected]



