An Assessment of the Roles and Effectiveness of Community-Based Organizations in the Supplemental Nutrition Assistance Program

OMB: 0584-0578









U.S. Department of Agriculture

Food and Nutrition Service








An Assessment of the Roles and Effectiveness of

Community-Based Organizations in the

Supplemental Nutrition Assistance Program




Project Officer: Rosemarie Downer


Part B











December 4, 2012





TABLE OF CONTENTS


Part B. Collection of Information Employing Statistical Methods

B.1. Respondent Universe and Sampling Methods

B.2. Procedures for the Collection of Information

B.3. Methods to Maximize Response Rates and the Issue of Nonresponse

B.4. Tests of Procedures

B.5. Consultants


Attachment A: Interview Protocols

Attachment A.1A: State SNAP Director Email

Attachment A.1B: State SNAP Director Interview Protocol

Attachment A.2A: CBO Director Email

Attachment A.2B: CBO Director Interview Protocol

Attachment A.3A: SNAP Local Agency Director Email

Attachment A.3B: SNAP Local Agency Director Interview Protocol

Attachment A.4: SNAP Local Agency Worker Interview Protocol

Attachment A.5: CBO Employee and Volunteer Interview Protocol


Attachment B: Local SNAP Agency Consent and Authorization to Record Form

Attachment C: Client Satisfaction Survey

Attachment D: Administrative Data Request Letter and Instructions

Attachment E: Data Confidentiality Pledge

Attachment F: Pre-Survey Notification Letter

Attachment G: NASS Comments




Part B. Collection of Information Employing Statistical Methods

B.1. Respondent Universe and Sampling Methods

Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.


The purpose of this section is to document the statistical procedures to be used for the SNAP CBO Client Satisfaction Survey. The satisfaction survey will be based on a randomly selected, representative sample of SNAP participants who applied for SNAP in selected demonstration counties in the survey reference period. The reference period for SNAP participants who were interviewed by a CBO is October 1, 2012 through March 31, 2013. The reference period for local SNAP office-interviewed participants is January 1, 2013 through March 31, 2013. The sampling plan for the survey will be probability-based so that study findings can be used to make statistically defensible inferences about the entire population of SNAP participants who were interviewed.

FNS plans to use results from the SNAP CBO Client Satisfaction Survey to compare the experiences, perceptions, and satisfaction of SNAP participants who were interviewed by a CBO staff member with those of participants who were interviewed through a SNAP local office at the time of application. Key objectives of the CBO Satisfaction Survey include the following:

  • Objective #1: Assess the levels of satisfaction that SNAP participants report about the quality of services they receive from CBOs (i.e., staff knowledge about SNAP application processes, customer service, wait times, etc.).

  • Objective #2: Determine the factors that contribute to SNAP participants' choice or decision to apply for benefits at a CBO instead of a SNAP office.

  • Objective #3: Assess how SNAP participants describe their experience with the CBOs and, of the participants who have also had an experience with SNAP offices, how they compare both experiences.

  • Objective #4: Determine whether the experiences and opinions of participants who complete interviews with CBOs differ from those who complete interviews with SNAP offices.

Key results will be tabulated using the satisfaction score, which will be measured in two ways. One measure of satisfaction will be an average score across sample members in each group (i.e., CBO-interviewed participants versus SNAP-interviewed participants). The other measure will be the percentage of SNAP participants who are mostly, or completely, satisfied with the CBO (if they were interviewed through a CBO) or the local SNAP office (if they were interviewed through a local SNAP office). The sample is designed to permit accurate statements about the overall satisfaction levels of SNAP participants who were interviewed at a CBO, as compared to SNAP participants who were interviewed at a SNAP local office. We aim to complete interviews with 2,000 SNAP participants (500 SNAP participants in each of the 4 States with waivers). In summary, the sample for the study was designed to achieve the following goals:

  • Across-State development of CBO customer satisfaction ratings with 95-percent, two-tailed confidence intervals of between 1.4 and 3.1 percentage points across all 4 States1

  • Across-State development of SNAP local office customer satisfaction ratings with 95-percent, two-tailed confidence intervals of between 1.4 and 3.1 percentage points

  • Within-State development of CBO customer satisfaction ratings with 95-percent, two-tailed confidence intervals of between 2.7 and 6.2 percentage points for each State

  • Within-State development of SNAP local office customer satisfaction ratings with 95-percent, two-tailed confidence intervals of between 2.7 and 6.2 percentage points for each State

To minimize recall bias, we will keep the period between application submission and data collection as short as possible. Fielding of the survey is scheduled to begin in April 2013. Because some CBOs serve a relatively small number of applicants compared to the local SNAP offices, we expect we will need to include all CBO-interviewed participants that were certified at some point during the 6 months preceding data collection. On the other hand, we believe that 3 months of data from the local SNAP offices will yield enough cases to select an adequate sample of local office-interviewed cases. Thus, in requesting data from the States, we will instruct them to include CBO-interviewed participants that applied and were initially certified for SNAP between October 1, 2012 and March 31, 2013, as well as SNAP office-interviewed participants that initially applied and were certified for SNAP between January 1, 2013 and March 31, 2013. Steps involved in the sample design are briefly described below.

B.1.1 Target Population. The target population for this survey includes all SNAP participants who applied for SNAP in selected demonstration counties in the survey reference period. The reference period for participants who were interviewed by a CBO is October 1, 2012 through March 31, 2013. The reference period for local SNAP office-interviewed cases is January 1, 2013 through March 31, 2013. Note that the survey will cover all SNAP participants, not all SNAP applicants, to control for any bias in satisfaction scores due to denial of the application.

B.1.2 Survey Eligibility. All individuals in the target population are eligible for the study, so no screening will be conducted.

B.1.3 Sampling Frame. We plan to build the sampling frame from this target population. States will be asked to submit extracts from the caseload database of SNAP participants that applied for SNAP in selected demonstration counties during the survey reference period.

B.1.4 Statistical Methodology for Stratification and Sample Selection. We plan to select a stratified random sample of SNAP participants within each State. Prior to sample selection, we will first stratify each State's frame by the source of the interview (CBO or local SNAP office). Substrata will then be defined within each stratum based on 1) demonstration county and 2) the amount of the household's SNAP benefit.

Prior to selecting the sample within each State, a sample allocation program will be run to determine the sample sizes within each of the substrata. SNAP participants will be allocated to each substratum in proportion to the size of that substratum (defined by the total number of SNAP participants in that substratum). A benefit of this proportional allocation is that all sampling weights are essentially equal, so no stratum is oversampled and the weights do not vary. As a result, the variance of the overall satisfaction estimates for CBOs or local offices is smaller than it would be otherwise.
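To make the proportional ("self-weighting") allocation concrete, the sketch below shows one simple way substratum sample sizes could be computed within a State. It is a minimal illustration, not the contractor's actual allocation program; the frame counts, substratum labels, and target sample size are hypothetical.

```python
# Illustrative proportional allocation of one State's CBO sample across substrata.
# All frame counts, labels, and the target sample size below are hypothetical.

frame_counts = {
    # (demonstration county, benefit-amount category): number of participants on the frame
    ("County 1", "lower benefit"): 800,
    ("County 1", "higher benefit"): 400,
    ("County 2", "lower benefit"): 600,
    ("County 2", "higher benefit"): 200,
}
target_sample = 312  # hypothetical number of CBO-interviewed cases to select in this State

total = sum(frame_counts.values())

# Allocate in proportion to substratum size, then round while preserving the total
# (largest-remainder rounding).
raw = {k: target_sample * n / total for k, n in frame_counts.items()}
alloc = {k: int(v) for k, v in raw.items()}
shortfall = target_sample - sum(alloc.values())
for k, _ in sorted(raw.items(), key=lambda kv: kv[1] - int(kv[1]), reverse=True)[:shortfall]:
    alloc[k] += 1

for k, n_h in alloc.items():
    print(k, n_h, f"sampling rate = {n_h / frame_counts[k]:.3f}")
```

Because the resulting sampling rates are nearly identical across substrata, the selection weights are essentially constant, which is the property cited above as reducing the variance of the overall satisfaction estimates.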

After the appropriate sample size is allocated to each State substratum, the SNAP participants will be sorted within each substratum by ZIP Code before sampling to ensure a geographically representative sample within these groups. We will then perform systematic sampling within strata. This method involves numbering the SNAP participants in the population from 1 to N (N = total records in the population) and computing the sampling interval k = N/n, where n is the number of participants to be selected. We then take a participant at random from the first k participants and every kth participant thereafter until the appropriate number of participants is reached in the stratum. In this way, each participant in the sampling frame is given a known, nonzero probability of selection so that weighted inferences can be made about the entire population of participants.
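The following is a minimal sketch of the systematic selection step, applied to a hypothetical substratum frame that has already been sorted by ZIP Code. It uses a fractional sampling interval so that it also works when N is not an exact multiple of n; it is illustrative only.

```python
import random

def systematic_sample(frame, n):
    """Select n records from an ordered frame by systematic sampling with a random start.

    The sampling interval is k = N / n; every record has selection probability n / N.
    """
    N = len(frame)
    k = N / n                      # sampling interval
    start = random.random() * k    # random start within the first interval
    positions = [int(start + i * k) for i in range(n)]
    return [frame[p] for p in positions]

# Hypothetical substratum of participant IDs, already sorted by ZIP Code.
substratum = [f"case_{i:04d}" for i in range(1, 1251)]   # N = 1,250
sample = systematic_sample(substratum, n=195)
print(len(sample), sample[:3])
```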

Assuming an 80-percent response rate and a 100-percent eligibility rate among selected participants, we plan to select approximately 2,500 SNAP participants. Based on the anticipated response rate, this will yield approximately 2,000 completed interviews with SNAP participants.

B.1.5 Response Rates. Our goal is to achieve an overall response rate of 80 percent. We believe this response rate is achievable for several reasons. First, we plan to use a proven data collection methodology (a telephone survey with locating and follow-up of nonrespondents). Second, to reduce respondent burden, we have kept the questionnaire length to a minimum. Third, respondents will be offered a $10 gift card for participating in the survey. Finally, the survey addresses a subject matter that is likely to be important and relevant to respondents.

B.1.6 Reliability of Estimates. Overall, estimates of satisfaction percentages (such as the percentage of clients with specific experiences) for the CBOs across the four States will have 95-percent, two-tailed confidence intervals of between 1.4 and 3.1 percentage points, and we will be able to detect differences of 2.2 percentage points or more between the CBOs and SNAP local offices.

In addition to making survey comparisons across all demonstration States, estimates of satisfaction will also be computed within each State. Within each State, with 250 completed interviews in each group, the client sample interviewed by CBOs will have 95-percent, two-tailed confidence intervals of between 2.7 and 6.2 percentage points, which will not allow us to detect differences between CBOs and local offices smaller than about 6 percentage points. For example, if the respondent sample size is 250 for CBO applicants in any one State and the percentage of individuals who were satisfied with their communication is 50 percent, then using a 95-percent confidence interval, in 95 out of 100 samples like the one selected, the results should be no more than 6.2 percentage points above or below this figure.
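As a rough check on the precision figures above, the half-width of a 95-percent confidence interval for an estimated percentage is approximately 1.96 × sqrt(p(1 − p)/n) under simple random sampling (ignoring the design effect and finite population correction). The sketch below reproduces the widest-case bounds quoted in the text, 6.2 percentage points within a State (n = 250) and 3.1 percentage points across States (n = 1,000), both evaluated at p = 50 percent; as footnote 1 notes, the intervals narrow as the percentage moves away from 50 percent.

```python
from math import sqrt

def ci_half_width(p, n, z=1.96):
    """Approximate 95% confidence-interval half-width for a proportion,
    assuming simple random sampling (no design effect or finite population correction)."""
    return z * sqrt(p * (1 - p) / n)

# Widest case (p = 50 percent): within-State group of 250 completes and
# across-State group of 1,000 completes.
print(f"{100 * ci_half_width(0.50, 250):.1f} percentage points")    # about 6.2
print(f"{100 * ci_half_width(0.50, 1000):.1f} percentage points")   # about 3.1
```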

Follow-up computer-assisted telephone interviewing (CATI) methods will be used to help ensure that the response rate goal of 80 percent is achieved.

B.1.7 Estimation Procedures. The primary purpose of the analyses is to assess any differences in client satisfaction and experiences by interview source (CBO or local SNAP office). We will calculate the final survey response rate and adjust the initial sample weights for nonresponse based on relevant applicant variables available from the State database from which the sample frame was drawn. Following data collection, sample weights for SNAP applicants will be 1) computed as base weights from the initial probability of selection, 2) adjusted to compensate for nonresponse, and 3) edited to remove multiple selection opportunities. The product of these three factors will yield final weights suitable for use in analysis of responses. This weighting scheme inflates the respondents' data to represent the entire universe of SNAP applicants.
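The weighting steps can be sketched as follows. This is a simplified illustration using pandas with hypothetical column names and values; the actual weighting cells and any multiplicity edits would follow the contractor's specifications.

```python
import pandas as pd

# Hypothetical sample file: one row per selected participant, with substratum,
# selection probability, response status, interview source, and (for respondents)
# an indicator for "mostly or completely satisfied."
df = pd.DataFrame({
    "substratum":  ["s1", "s1", "s1", "s2", "s2", "s2"],
    "prob_select": [0.10, 0.10, 0.10, 0.05, 0.05, 0.05],
    "responded":   [1, 1, 0, 1, 0, 1],
    "source":      ["CBO", "CBO", "CBO", "SNAP office", "SNAP office", "SNAP office"],
    "satisfied":   [1.0, 0.0, float("nan"), 1.0, float("nan"), 1.0],
})

# 1) Base weight: inverse of the initial probability of selection.
df["base_wt"] = 1.0 / df["prob_select"]

# 2) Nonresponse adjustment within weighting cells (here, the substratum):
#    the weight carried by nonrespondents is redistributed to respondents.
cell_total = df.groupby("substratum")["base_wt"].transform("sum")
resp_total = (df["base_wt"] * df["responded"]).groupby(df["substratum"]).transform("sum")
df["final_wt"] = df["base_wt"] * (cell_total / resp_total) * df["responded"]

# Weighted estimate: percentage mostly or completely satisfied, by interview source.
resp = df[df["responded"] == 1]
pct = (100 * (resp["final_wt"] * resp["satisfied"]).groupby(resp["source"]).sum()
       / resp.groupby("source")["final_wt"].sum())
print(pct)
```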

We will use SAS v9.2 for data management and to conduct simple cross-tabulations and will use SUDAAN v9.0.1 for standard errors and tests of significance. SUDAAN provides the correct computations for the standard errors by accounting for the design of the sample. Various multivariate and descriptive statistical techniques will be used to analyze the data, including cross-tabulations and frequency distributions, t-tests, chi-square tests, and regression analyses including logit, multinomial logit, and least squares methods. Direct variance estimates that reflect the sample design will be computed for each analysis variable, and will be used in all analytic comparisons of final results. Variations in output, per type of analysis, will depend on what statistics are appropriate for the variable and the measurement level (i.e., nominal, ordinal, or scale) for each defined variable. For example, a nominal measure (e.g., nondirectional categories, related to the respondents’ background and SNAP history) will be analyzed using frequencies and percentages. An ordinal measure (e.g., directional categories, such as strongly agree to strongly disagree) will produce counts, percentages, and an overall mean for the variables. A scale measure (e.g., a numerical value, such as age, income, and household size) will produce a mean, median, standard error, percentile, or other customized summary statistics.
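To illustrate what "direct variance estimates that reflect the sample design" means in the simplest case, the sketch below computes a satisfaction percentage and its design-based standard error under stratified simple random sampling; SUDAAN performs this kind of computation far more generally once the strata and weights are declared. The stratum sizes and responses shown are hypothetical.

```python
from math import sqrt

# Hypothetical strata: population size N_h and observed 0/1 "satisfied" responses.
strata = {
    "stratum A": (1200, [1, 1, 0, 1, 0, 1, 1, 1]),
    "stratum B": (800,  [1, 0, 0, 1, 1, 0]),
}

N = sum(N_h for N_h, _ in strata.values())

p_hat = 0.0
var = 0.0
for N_h, y in strata.values():
    n_h = len(y)
    p_h = sum(y) / n_h
    W_h = N_h / N
    p_hat += W_h * p_h
    # Stratum contribution to the variance, with a finite population correction.
    var += W_h ** 2 * (1 - n_h / N_h) * p_h * (1 - p_h) / (n_h - 1)

print(f"estimated percentage satisfied: {100 * p_hat:.1f}")
print(f"design-based standard error:    {100 * sqrt(var):.1f} percentage points")
```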

The interview experience and satisfaction questions include Likert-type scales of ordered responses. Analysis of Likert scales cannot assume equal intervals among response options, so it is best to analyze the data in terms of frequencies or percentages. For the most part, chi-square tests or other nonparametric tests are appropriate for testing significance.
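In line with this recommendation, a minimal illustration of a chi-square test of independence between interview source and a five-point satisfaction item is shown below. The counts are hypothetical, and the test shown is unweighted; in the actual analysis, the design weights and SUDAAN's design-adjusted tests would be used.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts for one Likert item, by interview source.
# Columns: completely dissatisfied, mostly dissatisfied, neutral,
#          mostly satisfied, completely satisfied
observed = [
    [30, 45, 120, 380, 425],   # CBO-interviewed respondents
    [55, 70, 150, 360, 365],   # local SNAP office-interviewed respondents
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p_value:.4f}")
```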

B.2. Procedures for the Collection of Information

Describe the procedures for the collection of information including:

  • Statistical methodology for stratification and sample selection;

  • Estimation procedure;

  • Degree of accuracy needed for the purpose described in the justification;

  • Unusual problems requiring specialized sampling procedures, and

  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.

B.2.1 Data Collection. The proposed satisfaction survey is designed as a telephone survey using CATI with nonresponse telephone follow-up to obtain information on the satisfaction of SNAP participants with the CBOs and local SNAP offices. We considered various methods of data collection, including mail, and recommend CATI for this target population for several reasons. CATI is an efficient way to reach a substantial number of respondents when the sampling frame is sufficiently large and the contact information is adequate to provide a reasonably high response rate. Phone numbers are likely to be accurate because the sample members recently applied for SNAP and the State offices maintain current telephone contact information for their participants. Still, we acknowledge that cell phone numbers, which may be the primary telephone of many of these individuals, are less stable and more likely to change than landlines. However, based on our experience with low-income populations, we find that those living in medium-to-small communities, which dominate in these demonstration States, tend to change cell phone numbers much less frequently than those living in larger communities. When we find that telephone contact information is inaccurate, we plan to use standard locating procedures to identify a current phone number for the sample member. The use of CATI offers several advantages that can shorten the data collection period. For example, call attempts can be scheduled to maximize the chances of reaching the intended respondent, and interviewers can often obtain immediate locating information when the contact information on file is incorrect.

We believe that, overall, CATI would yield a higher response rate for this study than other modes of data collection, with an anticipated response rate of 80 percent for the final results. In addition, CATI will improve the quality of the data by ensuring that the most knowledgeable respondent is interviewed for the survey. The data collection methodology is as follows:

  • The CATI instrument will be developed, tested, and programmed to assign interim and final status codes to track refusal, ineligible, and unlocatable cases.

  • A survey management system will be programmed to track completed cases, partially completed cases, call history, and locating history (a simplified sketch of such a case record follows this list).

  • A training program will be developed and interviewers will be thoroughly trained on all aspects of the study.

  • Tracing efforts using commercial locating databases will be implemented to obtain updated phone numbers for nonrespondents.

  • Response rates and completed cases will be monitored and analyzed by time of day and day of the week to optimize calling times.

  • Refusal conversion calls will be made by specialists trained in refusal conversion.
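As a loose illustration of the kind of case record the CATI survey management system would maintain, a minimal sketch follows; the status codes and fields shown are hypothetical, not the contractor's actual coding scheme.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical interim and final status codes.
INTERIM_CODES = {"NO_ANSWER", "BUSY", "ANSWERING_MACHINE", "APPOINTMENT", "PARTIAL"}
FINAL_CODES = {"COMPLETE", "REFUSAL", "INELIGIBLE", "UNLOCATABLE"}

@dataclass
class Case:
    case_id: str
    phone: str
    status: str = "NOT_ATTEMPTED"
    call_history: List[str] = field(default_factory=list)      # timestamped call outcomes
    locating_history: List[str] = field(default_factory=list)  # tracing actions and results

    def record_attempt(self, timestamp: str, outcome: str) -> None:
        """Log a call attempt and update the case status."""
        self.call_history.append(f"{timestamp}: {outcome}")
        if outcome in INTERIM_CODES or outcome in FINAL_CODES:
            self.status = outcome

# Example: a case completed on the second call attempt.
case = Case(case_id="case_0001", phone="555-0100")
case.record_attempt("2013-04-08 18:05", "NO_ANSWER")
case.record_attempt("2013-04-10 11:20", "COMPLETE")
print(case.status, len(case.call_history))
```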

Follow-up methods. We propose a multipronged strategy for ensuring strong response rates, including 1) obtaining the most current contact information from SNAP administrative records and 2) using respondent-locating techniques2 as needed. The interview scripts will assure sample members that their SNAP benefits will not be affected by their responses. We will also keep the scripts brief. Our interviewers are trained in refusal conversion techniques and will use a wide range of methods to minimize nonresponse and maximize the complete data available for analysis. Procedures to maximize the response rate include the following:

    • Follow-up attempts on different days and at different times of day. Research shows that the incremental increase in response rates diminishes beyond seven calls. Messages will be left asking recipients to call a toll-free number to complete the survey.

    • Call rotation and flexibility. The CATI system can schedule calls to rotate among various times throughout the day and evening during callbacks. The system allows respondents to call in to complete a survey or continue a survey over multiple sittings. Interviewers can also schedule appointments so that respondents can participate at a time convenient to them.

    • Refusal conversion. We plan to implement refusal conversion appropriate to the needs of the project. The level of conversion will be communicated to interviewers as part of the training.

    • Cross-sectional design. The survey is cross-sectional, so no further contact is planned once the interview is completed by phone.

B.3. Methods to Maximize Response Rates and the Issue of Nonresponse

Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.

The methods described above have been shown in methodological research to yield response rates of 80 percent for SNAP participants and 100 percent for State and local government staff and CBO staff when the survey is of reasonable length and sample members consider the topic important. The following strategies will be used to help achieve these response rates:

  • Personalized pre-notification letters

  • Strategically scheduled follow-up attempts

  • Survey sponsorship by a recognized Federal agency

  • A brief introduction that underscores the importance of the survey topic to sample members

  • Interviewer training that addresses potential obstacles in reaching or communicating with SNAP participants and offers strategies for overcoming these obstacles

  • A toll-free number for respondents with questions

  • Locating efforts using commercial locating databases and directory assistance in an effort to obtain updated phone numbers for unreachable sample members

The pre-notification letter will be printed on USDA letterhead and will briefly explain the purpose of the study and the reasons why sample members should volunteer their time. The letter will also include the estimated completion time of the survey, and assurances of confidentiality. Stating the sponsorship of the survey helps to engage sample members by providing immediate assurance that the survey is legitimate and not an attempt to sell them something. The likelihood of acceptance is greatly increased when sample members are told early why the survey is being conducted and why their responses are important.

B.4. Tests of Procedures

Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.

We had intended to pretest the client satisfaction survey in October 2012, but the pretest has been delayed because we are waiting for the States to provide names and contact information for recent SNAP applicants. We also asked a local SNAP office and a community-based organization in each State to provide up to two names and telephone numbers of recent SNAP applicants whom we may contact for the pretest. Upon receiving this pool of names, we will select 9 individuals for the pretest. The intent is to complete the pretest by December 7, 2012. The telephone data collection procedures themselves have been well tested on SNAP participants.

The survey instrument comprises three subsections: Section A gathers information about the respondents’ experience at a local SNAP office or CBO. Sections B and C measure satisfaction with the services received. We estimate that the satisfaction survey will take approximately 15 minutes to complete by phone. See Attachment C for the survey questionnaire.


B.5. Consultants

Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.

Anne Peterson of Insight Policy Research provided consultation on the statistical aspects of the design. Insight Policy Research is also responsible for collecting and analyzing all data for this study. In addition, FNS consulted with Edwin Anderson at the National Agricultural Statistics Service (NASS) about the design, level of burden, and clarity of instructions for the collection.

1 The lower bound of this range reflects the 95-percent confidence interval when the population mean of a binary variable is 10 or 90 percent; the upper bound when it is 50 percent.

2 We will use locating databases such as LexisNexis and residential telephone listings to locate sample members based on names, addresses, current or former telephone numbers, and/or other identification numbers.

