National Assessment of the Social Innovation Fund

OMB: 3045-0169


SUPPORTING STATEMENT

For

Paperwork Reduction Act Submission



National Assessment of the Social Innovation Fund (SIF)















Part B. Collection of Information Employing Statistical Methods

















Submitted by:

Corporation for National & Community Service

1201 New York Avenue, NW
Washington, DC 20525








April 16, 2015







Supporting Statement

Part B. Collections of Information Employing Statistical Methods

B1. Respondent Universe and Sampling Methods

The SIF National Assessment is designed to compare the change in capacity and organizational behavior that SIF grantees experience before and after SIF funding with the experiences of two other nonprofit populations. The potential respondent universes for survey data collection include current SIF grantees (intermediaries), non-funded grant applicants, and other grantmaking nonprofits. In addition to the survey, follow-up interviews will be conducted with SIF intermediaries to document the objective evidence they have to support their reports of organizational change. The first round of the survey (administered in 2015) will collect data from all three populations on their experiences and capacities in 2009 and 2014, factors that contributed to change over that period, and related topics. The second round of the survey (administered in 2016) will update the information on the 2009-2014 period reported in the first round of the survey through 2015. This second survey round will be administered to all the SIF grantees (including both the 2010-2012 cohorts and the 2014 cohort) and the non-funded SIF applicants from the 2014 cohort.

Universe Definitions

  1. SIF grantees: All of the 27 SIF grantees (intermediaries) that received SIF funding in 2010, 2011, 2012, and 2014 will be included in the survey data collection. This universe is defined to include grantees that are retained in the program over the course of funding as well as those that leave the program early or that complete their term of participation in the program. Of the 27 funded SIF intermediaries, 20 were funded in 2010-2012 and 7 were funded in 2014.



  2. Non-funded applicants: A comparison of SIF grantees to non-funded applicants with applications that were determined to be compliant and were scored “Satisfactory” or above by CNCS (and its external reviewers) on both the program review and evaluation review will provide an important counterfactual for SIF grantee experience. Examining the performance of non-funded applicants relative to SIF grantees will help control for the motivation to participate in SIF (i.e., selection bias). Given the rigorous SIF program requirements (e.g., 1-to-1 cash match, compliance with federal regulations, high standards for evidence development, etc.), it is appropriate to ensure that this comparison group is limited to those organizations that were willing to comply with the requirements of the SIF program and that were assessed by CNCS to be compliant with the solicitation requirements and of acceptable quality. This group includes 45 applicant organizations that submitted SIF applications in 2010, 2011, 2012 or 2014 and whose applications were rated as compliant and of acceptable quality. These include 33 organizations that submitted applications in 2010-2012 and an additional 12 organizations that submitted applications in 2014.



  3. Other grantmaking nonprofits: The comparison of SIF grantees to a national probability sample of grantmaking nonprofit organizations is designed to address the issue of the extent to which SIF grantee experience of change in capacity and organizational behavior is attributable to SIF participation, rather than to more general trends in the larger grantmaking world. As such, this sample is designed to provide a cross-section of the population of national nonprofit organizations that meet basic SIF eligibility criteria, and for that reason does not involve sample matching. Because nonprofit organizations are required to submit IRS Form 990 or Form 990-PF, it is possible to identify and select a nationally representative sample of grantmaking nonprofits. The IRS data make it possible to limit the selection of other nonprofits so as to exclude nonprofit organizations that are small in size and ones that do not make grants to nonprofit organizations or only make relatively small grants. SIF participation involves a size standard: SIF makes grants to intermediaries of $1,000,000 or more per year for the grant period, and intermediaries commit to making subgrants totaling at least 80 percent of the SIF grant (i.e., at least $800,000). To define a population of grantmaking nonprofits of size comparable to SIF intermediaries, CNCS’s contractor, ICF International, has worked with GuideStar, an organization that obtains and manages IRS data for use in research and other applications. The estimate is that approximately 3,000-4,000 U.S. grantmaking nonprofit organizations meet the size criteria for this comparison group (i.e., have revenues of at least $1,000,000 and make grants of at least $800,000 per year). A probability sample of organizations will be selected from these GuideStar records, as detailed below.



Round 1 Survey: Design and Analysis



The Round 1 survey is scheduled to be administered in 2015. To address the research questions detailed in Part A of this submission, it will collect data on capacity, behaviors and experiences in 2009 and 2014, factors that contributed to change over this period, and related topics. These data can be analyzed to address a number of research questions without requiring the Round 2 data. The Round 2 data collection and analyses will provide information on longer-term outcomes and comparisons of the experience of earlier and later SIF cohorts.



The Round 1 survey will be administered to all SIF intermediaries and all non-funded applicants for the 2010-2012 and 2014 cohorts.



In addition, a sample of other nonprofit grantmakers will be selected to represent this population of organizations. The sample will be selected using explicit stratification by organization revenue and implicit stratification, within each revenue stratum, by grants made to U.S. nonprofit organizations. Organization revenue will be divided into seven strata: $1M-$5M; $5M-$10M; $10M-$20M; $20M-$50M; $50M-$75M; $75M-$100M; $100M+. Within each of these revenue strata, organizations will be ordered by grants to nonprofits and every k_h-th record will be selected, such that k_h = N_h / n_h, where N_h is the total number of records in revenue stratum h and n_h is the number of records to be selected from that stratum. The starting point for selection in each revenue stratum will be a random number between 1 and k_h. The number of records to be selected from a stratum, n_h, will be proportional to the size of the stratum in the population, such that n_h = n(N_h / N), where N is the total population size and n is the total number of records to be selected. This sample design ensures that the distribution of organizations in the selected sample will be proportional to the population distribution with respect to revenue and grants to nonprofits. In addition, the variance achieved with this sample design is no larger than that for a simple random sample and may be smaller, potentially increasing the precision of estimates.
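For illustration, the following sketch implements the proportional allocation and within-stratum systematic selection described above; the frame structure, stratum labels, and function names are hypothetical and are not part of the approved sampling specification.

```python
import random

def systematic_sample(records, n_h):
    """Select n_h records from one stratum via systematic sampling with a random start.

    Assumes `records` are already sorted by the implicit stratification variable
    (grants made to U.S. nonprofit organizations)."""
    if n_h == 0:
        return []
    k_h = len(records) / n_h                 # selection interval k_h = N_h / n_h
    start = random.uniform(0, k_h)           # random start within the first interval
    return [records[min(int(start + i * k_h), len(records) - 1)] for i in range(n_h)]

def proportional_stratified_sample(frame, n_total):
    """Proportional allocation across explicit revenue strata, then systematic
    selection within each stratum.

    `frame` maps a revenue stratum label to its list of records (hypothetical
    structure; the real frame would be built from GuideStar/IRS data)."""
    N = sum(len(recs) for recs in frame.values())       # total population size
    sample = []
    for stratum, records in frame.items():
        n_h = round(n_total * len(records) / N)          # n_h = n * (N_h / N); rounding may
        sample.extend(systematic_sample(records, n_h))   # shift the total by a record or two
    return sample
```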



To achieve an absolute margin of error of at most +/- 5 percentage points on overall estimates from the sample data, the target number of completed surveys for the other grantmaking nonprofit organization group is 400. Based on the literature on surveys of nonprofits, the estimated response rate is 30-40% (Baruch & Holton, 2008; Hager et al., 2003; Saunders, 2012). To be conservative, the total sample of grantmaking nonprofits selected for contact will be sized to yield 400 completed surveys at the lower end of this response-rate range.
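As a worked check of these figures, the calculation below reproduces the sample-size arithmetic under standard assumptions (95% confidence level, maximum-variance proportion of 0.5, and the conservative 30% response rate); only the +/- 5 percentage point margin and the 400-complete target are taken from the text above.

```python
import math

z = 1.96              # critical value for a 95% confidence level (assumed)
p = 0.5               # maximum-variance proportion (assumed)
margin = 0.05         # +/- 5 percentage points

# Completed surveys needed for the target margin of error under simple random sampling
n_complete = math.ceil(z ** 2 * p * (1 - p) / margin ** 2)   # = 385, rounded up to 400 in the plan

# Organizations to contact, assuming the conservative 30% response rate
n_contact = math.ceil(400 / 0.30)                            # about 1,334 contacts

print(n_complete, n_contact)
```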



Multiple sources and approaches will be used to obtain contact information for the survey. For SIF intermediaries and non-funded applicants, CNCS has contact information. For the other nonprofit sample, the information available from IRS forms includes the organization’s address, telephone number, and Employer Identification Number (EIN); in addition, for nonprofits that submit IRS Form 990 (but not ones that submit Form 990-PF), the organization’s website is included. Contact information for the organizations in this sample, including an email address, will be obtained as described under B2 below.



As a first step in the analyses, baseline equivalence analyses will be conducted for the three groups (SIF intermediaries, non-funded SIF applicants, and other nonprofit grantmaking organizations). The baseline equivalence analysis of the SIF intermediaries and non-funded applicants will assess the extent to which the intermediaries differ from the non-funded group on such characteristics as prior level of capacity for evaluation, subgrantee support, experience with scaling of interventions, and level of funding for grant programs. It is expected that some differences will be found: because the selection of SIF intermediaries was competitive, the selected applicants are expected to have a higher level of initial capacity and experience. Because the differences between these groups are important in their own right, the differences (and similarities) will be described and discussed in the national assessment reports. In addition, calculations of percentages and similar measures will show both the initial level and the degree of change, and multivariate analyses can incorporate key differences as covariates to isolate the effects of SIF participation on change.



It is expected, also, that the SIF intermediaries will differ from the national comparison group of grantmaking nonprofit organizations. For instance, at the time of selection, it is likely that SIF grantees will be more experienced than other nonprofits in such areas as competitive selection of organizations to fund, use of rigorous evaluation to build the evidence base for innovative programs, and, possibly, federal funding. The analyses of differences between the SIF intermediaries and the larger world of nonprofit grantmakers will provide valuable insight about the extent and ways in which SIF grantees are similar to or different from the larger group of their peers. For example, if SIF intermediaries are more likely than other grantmakers to have had prior federal funding, this information may be helpful to CNCS in targeting outreach efforts or helping potential applicants to understand the kinds of experience that may be relevant to their applications. The analyses of similarities and differences between SIF intermediaries and other nonprofit grantmakers will be reported and key differences included as covariates in models of change.



After completion of the baseline equivalence analyses, analyses will focus on change in organizational capacity/behavior and SIF and other factors related to change. Prior to conducting any significance testing, histograms and scatterplots will be used to visually assess the distributions of the variables to be analyzed. These assessments will provide insight into possible patterns in the data as well as indicate potential outliers. Outliers will be identified on a case-by-case basis, and because they may exert an inordinate effect on group means given the small sample sizes of the intermediary and non-funded applicant groups, these data points may be trimmed or excluded as needed to ensure the accuracy of group statistics. Conventional tests of hypotheses (inferential statistics) about between-group differences are designed for application with probability samples of each group. Statistical tests take into account the sampling variability associated with the estimated value for the measures in the different groups. However, due to censusing two of the three survey populations (i.e., the grantees and the non-funded applicants), there will be no sampling error associated with estimates from these data. Furthermore, because these estimates are not intended to represent any groups other than those specifically covered by this survey (i.e., the 2010, 2011, 2012, and 2014 SIF intermediaries and “Satisfactory” non-funded applicant cohorts), it is appropriate to treat these estimates as population values for analysis. Thus, the following approaches will be used to test for between-group differences:



  • For comparisons between SIF intermediaries and the 2010-2014 non-funded applicants, direct comparisons of measures (e.g., means, percentages) between the two groups will be used; for these comparisons there is no sampling error to take into account. Note that if a non-response adjustment is necessary to account for substantial attrition/non-response in these groups (described in Section B2), this additional variance will be treated as sampling error, making conventional inferential statistics applicable.

  • For comparisons between SIF intermediaries and the sample of other grantmaking nonprofits, inferential statistics, which account for the sampling error in the estimates from the sample of other grantmaking nonprofits, will be used to test for significant differences. Because the estimates from the SIF grantee data can be treated as population values, the simplest approach to comparing estimates between these two groups is a one-sample t-test. The sample size needed to achieve statistical power of at least .8 for a two-tailed, one-sample t-test with α = .05 depends on the expected effect size for the comparisons being made. To be conservative, a small effect size of d = 0.2 is assumed. The sample size required is then n = (δ/d)², where δ is the non-centrality parameter required for .8 power with α = .05. Note that the addition of covariates, which could be handled through multiple regression analysis, would reduce the error variance in the outcome and hence increase statistical power; the power analysis for a one-sample t-test therefore provides an upper bound on the sample size required to ensure that these comparisons (or, equivalently, tests of main effects in a regression analysis) are sufficiently powered. This analysis indicates that the sample size recommended above for achieving an absolute margin of error of at most +/-5 percentage points on overall estimates (n = 400) will provide sufficient power for these between-group comparisons (in particular, the power for detecting small effects of d = 0.2 with this sample size will be .98), as illustrated in the sketch following this list.
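The power figures cited in the second bullet can be reproduced with the following sketch, assuming a two-tailed test at α = .05 and Cohen's small effect size d = 0.2; the use of scipy's noncentral t distribution here is an illustrative choice, not a specification of the study's software.

```python
from scipy import stats

def one_sample_t_power(d, n, alpha=0.05):
    """Power of a two-tailed one-sample t-test for effect size d and sample size n."""
    df = n - 1
    ncp = d * n ** 0.5                        # noncentrality parameter: delta = d * sqrt(n)
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-tailed critical value
    # Probability that the noncentral t statistic falls in the rejection region
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

print(round(one_sample_t_power(d=0.2, n=199), 2))   # about .80: roughly the minimum n needed
print(round(one_sample_t_power(d=0.2, n=400), 2))   # about .98: power at the planned sample size
```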

Although the extent to which it will be possible to use multivariate analyses will be limited by the number of cases in the SIF grantee and non-funded applicant groups, the analyses will explore the use of such methods as logistic regression (for dichotomous variables) and ordinal regression (e.g., for comparing differences on the ordinal “extent” scales) with the inclusion of covariates to control for between-group differences (e.g., types of organizations, geographic location, whether they have a geographic or issue focus, and the substantive areas in which they fund community nonprofits) or for the effect of other contextual factors (e.g., economic conditions).
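A minimal sketch of the kind of covariate-adjusted logistic and ordinal models described above follows; the data file, variable names, and the statsmodels OrderedModel implementation are illustrative assumptions rather than the study's specified analysis code.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical analysis file: one row per organization, covariates already dummy-coded
df = pd.read_csv("survey_analysis_file.csv")
covariates = ["sif_grantee", "issue_focus", "prior_federal_funding"]

# Logistic regression for a dichotomous outcome (e.g., funded rigorous evaluations in 2014),
# with a group indicator and covariates controlling for between-group differences
logit_fit = sm.Logit(df["rigorous_evaluation_2014"],
                     sm.add_constant(df[covariates])).fit()
print(logit_fit.summary())

# Ordinal regression for an ordered "extent of change" scale (no constant term:
# OrderedModel estimates category thresholds instead)
ordinal_fit = OrderedModel(df["extent_of_change"], df[covariates],
                           distr="logit").fit(method="bfgs")
print(ordinal_fit.summary())
```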

Additional analyses will be conducted to explore differences among subgroups (e.g., cohorts, smaller and larger intermediaries). Given the small numbers of cases in these subgroups, these analyses will be mostly descriptive. Open ended survey questions will be analyzed using ATLAS.ti to code and analyze text entries such as “other (specify)” responses and narrative responses to open-ended questions. (The approach to nonresponse analysis is described below, in B3.)

In addition to the survey, follow-up interviews will be conducted with SIF intermediaries to document the objective evidence they have to support their reports of organizational change. Data from these interviews will be incorporated in the report of survey findings.

Round 2 Survey: Design and Analysis

The Round 2 survey will be used to follow up with all members of the 2014 SIF grantee and non-funded applicant cohorts and with the 2010-2012 SIF grantee cohorts. (For the 2014 cohort, the Round 1 survey will cover their pre-SIF experience and, for the 2014 SIF grantees, their early months of experience in the program.) This follow-up will facilitate comparison of 2014 (and earlier) experiences against 2015 experiences for these key groups to provide additional information about how SIF funding affects organizations’ behavior, capacity and experiences. For example, by comparing the 2014 to 2015 experience for the 2010-2012 and 2014 cohorts of SIF intermediaries, it will be possible to compare the pace of change in the first year of funding with that experienced in later years. By comparing the 2014 SIF intermediaries with non-funded 2014 applicants, it will be possible to examine the effect of SIF participation on change during the year.

Because these groups will be censused during the Round 2 survey, there will be no sampling error associated with the resulting estimates. Consequently, direct comparisons of measures (e.g., means, percentages) between these groups will be used to assess differences.

B2. Procedures for the Collection of Information

Based on the size and nature of the organizations studied and the experience of the pilot test, it is anticipated that all or nearly all the organizations surveyed will have access to the internet and so can be sent communications by email and can complete the survey online. However, a hard-copy version of the survey will be made available to all organizations that do not have internet access, as well as to those that prefer to respond using a hard-copy version of the survey.

For the SIF grantees and non-funded applicants, CNCS has contact information that will be used to send survey communications and provide access to the survey. For the nonprofit comparison group, basic information is available from the IRS submissions and GuideStar. The IRS submissions for all organizations provide their EIN, address and telephone number, and nonprofits (other than private foundations) provide a website address. Contact information (email address) for this sample will be looked up using multiple sources. For some, contact information will be available from GuideStar or from such sources as the Foundation Directory; for others, organization websites may provide the needed information. In some cases, it may be necessary to telephone the organization to obtain the needed contact information. Once the contact lists are developed, a determination will be made as to whether any organizations lack internet access; for those, mail and telephone will be used to contact the organization. For organizations for which an email address is available, the initial communication will be sent by email and, if a response such as “undeliverable” is received, mail and telephone contact will be used to follow up, either to obtain a correct email address or to confirm there is no email address.

The survey communications will be undertaken in several steps. For the SIF intermediaries, which are aware of the National Assessment, CNCS will send out early communications through emails to intermediaries, announcements through the SIF Knowledge Network site, and communications to such groups as the SIF Evaluation Working Group.

The communications plan for the survey is based on Dillman’s Tailored Design Method (Dillman, Smyth, & Christian, 2014) and other survey experience. For all selected organizations, a pre-notification letter will be sent a week ahead of the survey. This letter will describe the survey’s purpose, the use CNCS will make of the information respondents provide, and other information such as the estimated time to complete the survey, a statement that participation is voluntary, and a contact in case respondents have questions or want more information. The survey pilot established that, for organizations in the other nonprofit sample, the appropriate recipient of this communication is the Executive Director. Communications will be directed to the Executive Director with information about the survey and a request either to identify the appropriate person to receive the survey or, if the organization prefers, to have the Executive Director receive the survey and direct it to the appropriate person.

One week after the pre-notification, all organizations that have email will be sent the link to the survey URL. The survey will be administered in an online format using SurveyGizmo software. All data exported from the SurveyGizmo secure web site will be kept in a secured folder.

For respondents that receive a hard-copy survey, either because they do not have email and internet access or because they prefer to respond via hard copy, the survey will be mailed, along with a self-addressed stamped envelope for easy return. Surveys received in a hard-copy version will be entered into the programmed instrument upon receipt.

After the survey is sent out, survey receipt will be closely monitored: response rates will be tracked, returned surveys will be checked for completeness (e.g., no missing sections), and reminders will be sent as needed to obtain completed surveys or determine reasons for non-completion. Undeliverable messages also will be monitored, and telephone or mail will be used to contact those organizations.

Because of the importance of responses from the SIF intermediaries and the non-funded applicant group, intensive follow-up will be used to maximize the response rate for these groups. This follow-up will include telephone contact and, if needed, having a trained staff member administer the survey by telephone and enter the data in the programmed instrument.

The plan is to allow four weeks for organizations to respond before closing the survey. Response rates will be monitored by week to determine if that period is sufficient or if additional time or follow-up would enhance the response rate.

Respondents will be sent a “thank you” for their participation. Also, the survey asks if the organization would like to receive the survey report and a benchmarking report that compares their responses with aggregate data for other organizations; these reports will be sent to those who request them. SIF intermediaries will also receive a request for a 15-30 minute follow-up interview, in which they will provide objective evidence to support their reports of organizational change in the survey.

For the 2016 follow-up survey, a similar survey administration procedure will be followed. A pre-notification letter will be sent a week ahead, followed by the survey link and reminder emails as needed. The Round 2 survey asks respondents to provide data on many of the same questions as the Round 1 survey in order to obtain information about change experienced during the period between the two surveys. To facilitate responses and minimize burden, the Round 2 surveys will be customized to include the information the respondent provided in the Round 1 survey, with a request to report changes since the earlier response.

Survey Weights

Because the SIF grantee and non-funded applicant populations will be censused, there is no need to compute sampling weights for these groups. However, non-response adjustments will be useful to ensure that weighted totals match population totals for these groups. If, as anticipated, the levels of non-response are low, the non-response adjustment will be a simple ratio adjustment so that the sums of the weights attached to responding records match the total number of records in the population. Specifically, the non-response weight would be w_nr = N / r, where N is the total population size for the group (SIF grantees or non-funded applicants) and r is the number of responding organizations in that group. As discussed in more detail below (Section B3), higher levels of non-response in either group may warrant a more sophisticated approach to dealing with non-response, including a non-response bias analysis and the use of a response propensity model to more accurately account for potential bias with the non-response weight.

The data collected during the Round 1 survey of the other nonprofit grantmakers, which involves a national probability sample, will be weighted to make estimates representative of the national population of nonprofit grantmakers meeting SIF eligibility criteria (see Universe Definitions, above). Due to the use of proportional stratification, this sampling design can be considered “self-weighting” in that the probability of selection, and hence the sampling weights, will be equal for all sampled records. Specifically, the sampling weight will be computed as w_s = N / n, where N is the total population size and n is the number of records selected for inclusion in the survey. This scaling adjustment ensures that the sums of weights match population totals by revenue stratum and overall. Next, a non-response adjustment will be applied to account for potential non-response bias related to organization size. The non-response adjustment will be computed as a_h = n_h / r_h, where n_h is the number of selected records and r_h is the number of completed interviews in stratum h. The non-response adjusted weight, applied within each revenue stratum, will be w_h = w_s × a_h. This weight will be used to compute estimates during analyses, using procedures (such as SAS PROC SURVEYFREQ) that account for the stratified sample design.
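The weighting steps described above can be summarized in the following sketch; the population, sample, and completion counts shown are hypothetical placeholders, and only a subset of the seven revenue strata is listed.

```python
def nonprofit_grantmaker_weights(N, n, selected_by_stratum, completes_by_stratum):
    """Non-response-adjusted weights by revenue stratum for the other-grantmaker sample.

    N: total frame population; n: total records selected (self-weighting design);
    selected_by_stratum: stratum -> n_h; completes_by_stratum: stratum -> r_h."""
    w_s = N / n                                    # base sampling weight, equal for all records
    weights = {}
    for stratum, n_h in selected_by_stratum.items():
        a_h = n_h / completes_by_stratum[stratum]  # non-response adjustment a_h = n_h / r_h
        weights[stratum] = w_s * a_h               # adjusted weight w_h = w_s * a_h
    return weights

# Hypothetical counts for three of the seven revenue strata, for illustration only
print(nonprofit_grantmaker_weights(
    N=3500, n=1300,
    selected_by_stratum={"$1M-$5M": 700, "$5M-$10M": 350, "$10M-$20M": 250},
    completes_by_stratum={"$1M-$5M": 240, "$5M-$10M": 120, "$10M-$20M": 90},
))
```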

B3. Methods to Maximize Response Rates and Deal with Issues of Non-Response

Drawing on Dillman’s Tailored Design Method (Dillman et al., 2014) and consultation with pretest respondents, the survey administration focuses on establishing trust, increasing benefits, and decreasing costs of survey participation. The messages to respondents will emphasize trust by communicating the legitimacy and importance of the data collection and the information they provide, and by ensuring the confidentiality of information provided. Benefits of participation will be fostered by collecting information of interest to respondents and the field and offering them a report of the survey findings and a benchmark report that compares their organization with other organizations in aggregate. Costs of participation will be minimized by making the survey easy to complete (short, well-designed, programmed for online submission), sending reminder notices (with the survey link) and, if respondents prefer, making a hard-copy version available on request.

Specific strategies that will be employed to maximize response rate and minimize issues associated with non-response during the survey process include:

  • Working with GuideStar, the Foundation Directory, and other resources (e.g., organization websites) to collect accurate and up-to-date contact lists for potential survey respondents;

  • Using a well-designed, easy-to-use online survey;

  • Assuring respondents that their responses will be handled in a confidential manner;

  • Providing respondents with a contact name and telephone number for inquiries;

  • Providing respondents multiple reminders to complete the survey (the link to the web-based survey will be provided each time a reminder is needed);

  • Providing respondents with an explanation of how their participation will help to inform positive changes to the field of nonprofit grantmaking and to CNCS’s and other agencies’ social innovation initiatives; and

  • Offering the survey in hard-copy form for participants who cannot be contacted via email or who request a hard-copy survey.

Response rates will be computed for all three target populations following AAPOR's Standard Definitions for Web surveys of named persons. Modifications to the standard definitions may be required to accommodate the fact that these are surveys of organizations and any such modifications will be explicitly noted when reporting response rates.
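As an illustration of the kind of calculation involved, the sketch below computes AAPOR Response Rate 1 (the most conservative standard rate); the disposition counts are hypothetical, and the actual reporting will use whichever AAPOR rate and organizational-survey modifications are adopted.

```python
def aapor_rr1(complete, partial, refused, non_contact, other, unknown_eligibility):
    """AAPOR Response Rate 1: completed interviews over all potentially eligible cases."""
    return complete / (complete + partial + refused + non_contact + other + unknown_eligibility)

# Hypothetical final dispositions for the other-grantmaker sample
print(round(aapor_rr1(complete=400, partial=25, refused=150,
                      non_contact=600, other=40, unknown_eligibility=119), 3))   # 0.300
```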

It is recognized that there may be some non-response, particularly among the other grantmaking nonprofits. In addition, there may be some non-response among the non-funded SIF applicants and possibly among the SIF grantees. Since both the SIF grantee and non-funded applicant populations are relatively small, it will be possible to conduct telephone follow-up as needed to achieve high response rates. After telephone follow-up, it is anticipated that data will be obtained from almost all members of these two populations. In such cases, the finite population correction applies and the effect of non-response on the analyses will be minimal (Cochran, 1977). If, however, there is a substantial amount of non-response among grantees or non-funded applicants, available auxiliary data will be compared between grantees or non-funded applicants that have responded and those that have not responded to determine if there are any predictors of response. If so, these predictors can be used to build a response propensity model, and the resulting propensity scores can be incorporated as an additional non-response adjustment to the survey weights for these groups.
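If a propensity adjustment proves necessary, it might be implemented along the following lines; this is a sketch only, and the frame variables, the logistic response model, and the five propensity classes are illustrative choices not specified above. The base weight is assumed to already be present on the analysis file.

```python
import pandas as pd
import statsmodels.api as sm

# One row per population member, with a response flag and auxiliary frame variables
frame = pd.read_csv("frame_with_response_flags.csv")
predictors = ["log_revenue", "grants_to_nonprofits", "years_operating"]

# Model the probability of responding from data available for respondents and non-respondents
fit = sm.Logit(frame["responded"], sm.add_constant(frame[predictors])).fit()
frame["p_respond"] = fit.predict(sm.add_constant(frame[predictors]))

# Weighting-class adjustment: form propensity quintiles and inflate respondent weights
# by the inverse of the observed response rate within each class
frame["p_class"] = pd.qcut(frame["p_respond"], 5, labels=False)
class_rate = frame.groupby("p_class")["responded"].mean()
frame["nr_adjustment"] = frame["p_class"].map(1.0 / class_rate)
frame["final_weight"] = frame["base_weight"] * frame["nr_adjustment"]   # base_weight assumed on file
```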

Finally, for the sample of other nonprofit grantmakers, a response rate of around 30-40% is expected based on a review of the literature relevant to establishment survey response rates (Baruch and Holton, 2008; Hager et al., 2003; Saunders, 2012). For this sample, a non-response bias analysis will be conducted using auxiliary data from the sampling frame, which includes IRS Form 990/990-PF data. As with the non-response analysis for the other groups, any strong predictors of non-response may be candidates for building a response propensity model, which could then be incorporated into the non-response adjustment as part of the survey weights for this group. Frame variables that may be useful in this regard include organization size, geographic location and National Taxonomy of Exempt Entities (NTEE) code, which codes organizations by their major functions or activities.

In addition to the approach to handling unit non-response as outlined above, item non-response will be investigated. The first step will be to identify items with high proportions of missing responses. Data drawn from such items may be excluded from the analysis. Alternatively, for key survey items with high missing rates, the analysis will explore whether there are any predictors, in terms of other survey responses or auxiliary/frame data that could indicate differences in who chooses to respond to these questions. Such correlates could be useful for conducting multiple imputation so that these items can be used in analysis, although any discussion of such analyses would need to clearly indicate the use of imputed data.
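Should multiple imputation be pursued for key items, one possible sketch using the chained-equations (MICE) utilities in statsmodels is shown below; the variable names, analysis model, and number of imputations are hypothetical choices to be settled at analysis time.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

# Survey items plus auxiliary frame variables, with missing responses coded as NaN
responses = pd.read_csv("survey_item_data.csv")

imp = mice.MICEData(responses)                     # chained-equations imputation engine
analysis = mice.MICE("evaluation_capacity_2014 ~ sif_grantee + log_revenue", sm.OLS, imp)
results = analysis.fit(n_burnin=10, n_imputations=20)   # pool estimates across 20 imputed datasets
print(results.summary())
```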

B4. Tests of Procedures or Methods to Be Used

The procedures for data collection and response maximization were tested with a total of nine organizations selected to represent the three respondent groups (three per group). Cognitive interviews were conducted with three respondents (one from each group). In the cognitive interviews, respondents were asked to explain their thought process and provide perspectives about the survey while they were answering the questions. These interviews provided information that was used to evaluate the quality of the response and to help determine whether the question was generating the information that was intended. The survey was revised and programmed, then sent to six additional respondents (two from each group), with a follow-up interview with three of the six respondents.

The cognitive interviews and pilot test of the programmed survey provided both information for revision of the survey instrument and confirmation of key aspects of the survey.

Because the survey will be administered first in 2015, it will be necessary to collect data retrospectively. During the pilot, both survey responses and discussions with respondents indicated that respondents were able to provide information about their organizations’ capacities and practices for 2014 and 2009, and their responses did not suggest a social desirability effect. For example, some respondents indicated that there was substantial change in practice between the two points in time, whereas others indicated no change, whether the initial level was low or high. The distribution of reported levels and change suggests that there will be substantial variation among respondents and respondent groups. Respondents also were asked to record reasons for any change between 2009 and 2014; a “Not applicable, no change” option was available for those respondents that did not perceive any change.

The survey pre-notification and instructions make clear to respondents that they may want to ask colleagues to help provide information, and the online survey administration allows for multiple respondents from an organization if needed. Most pilot respondents said they were able to respond for the different topics and for both periods, although a few asked a colleague to help.

The items that respondents said were most difficult and time-consuming to answer concerned the evaluation budget (in dollars and as a percentage of the organization’s overall budget). For respondents who could provide the data, the original budget questions were retained. However, a simplified version of the question was added to simply ask for a comparison of the magnitude of change in budget between the two years, rather than for actual figures. Respondents that had difficulty answering the original, more detailed, question reviewed the simplified version and said they would be able to answer the question framed this way.

In several cases the survey was sent to a general contact for the organization or to the organization director. The pre-notification and cover email explained the survey and asked the contact to answer the survey themselves or forward it to an appropriate colleague. Additionally, in the interviews respondents were asked who the survey should be sent to within the organization. Their responses confirmed that it should go to someone like the Executive Director of the organization, both as a practical matter and to ensure that the Executive Director is aware of the survey.

The administration of the programmed version of the survey showed that respondents could access the survey link and most could complete and submit the survey. One respondent thought he completed the survey but was unable to submit it. Another submitted a partially completed copy of the survey. Based on this information, the survey administration plan was adjusted in several ways. First, each survey will be checked as it is returned and if the survey is only partially completed there will be follow up with the respondent. Second, a hard copy will be provided for respondents who have problems completing the online survey. (In the case observed in the pilot test, the respondent received the emails and accessed the survey link, but was unable to submit the completed survey.) The respondent will be offered the opportunity to conduct the survey by telephone if the respondent has difficulty with online or hard-copy versions.

Overall, respondents appeared engaged and interested in the survey issues and questions. In the pilot test, comparison group members were asked if they were interested in receiving a copy of the survey report when completed, and were asked to provide contact information if they would like a copy. All said they would like to receive a copy, and the contact information they provided will make it possible to send reports to them.

The exhibit below summarizes the changes made to the instruments based on the pilot test and additional review.

Area/questions (and instrument)

Revisions made

Introduction and instructions (All)

Made minor wording changes to clarify instructions to respondents. These include:

  • Made clear that the survey asks about all their grant programs, not just those paid for with federal money;

  • Added definition of evaluation to evaluation section;

  • Corrected some grammatical errors and made some changes in sentence structure to make instructions easier to follow.

Revised wording of some questions to sharpen focus on SIF areas of emphasis

Examples:

  • Rigorous evaluation (not just “evaluation”);

  • Evaluation-based evidence (not just “data”);

  • Scale-up decisions based on evaluation evidence.

Background/demographic questions (non-funded applicants and nonprofit sample)

Added question to identify organization as issue- or geographic-based, for use in analyses.

“Extent” questions (All)

Made minor revisions in language of some items

Added “N/A” option for factors contributing to change so respondents can mark N/A if no change occurred between 2009 and 2014.

Deleted a few items from lists because they were not needed or duplicated questions asked elsewhere in survey.

Because of structure of programmed instrument, moved the “specify” fields for “other (specify)” responses to come right after a set of questions. This allows respondents to record other factors not included in the lists.

For collaboration questions – dropped question about federal funding because the information is collected elsewhere in survey and this item duplicated that information.

Evaluation budget (All)

Added a question that asks respondents to compare the level of the evaluation budget in 2014 with the level in 2009 (in total and as a percentage of the organization’s budget). This is followed by the original question about the actual evaluation budget, for respondents who can obtain the information. The comparison question was added because pilot test participants reported that the question as originally written was difficult for some organizations to answer; it makes it possible to obtain basic information when respondents have difficulty obtaining actual budget figures.

Support received by intermediaries (SIF)

Revised some specific items to clarify intent and to make questions simpler to answer.

Federal funding (SIF)

Added: was SIF funding the first federal funding the intermediary received? This was added because preliminary evidence suggests that not having had prior federal funding is related to early challenges in SIF implementation.

Tiered evidence programs

Simplified questions about tiered evidence programs to focus on insights based on SIF experience. Ask only of SIF intermediaries.

Reflections on SIF experience (SIF)

This section was revised and shortened based on comments from SIF participants in the pilot:

Dropped comparison of SIF experience with other experience as a grantee;

Added a question about SIF challenges/problems;

Dropped question about whether other federal agencies could use SIF-type model; kept question about advice to agency considering SIF-like intermediary model.



B5. Individuals Consulted on Statistical Aspects of the Design and Organizations/Persons Collecting and Analyzing the Data

The following individuals are collecting and analyzing the data:


Janet Griffith, Ph.D.

Senior Fellow

ICF International

9300 Lee Highway

Fairfax, VA 22031

703-934-3000

[email protected]


Jing Sun

Technical Specialist

ICF International

9300 Lee Highway

Fairfax, VA 22031

703-934-3000

[email protected]


Elyse Goldenberg

Associate

ICF International

9300 Lee Highway

Fairfax, VA 22031

703-934-3000

[email protected]


Whitney Marsland

Associate

ICF International

9300 Lee Highway

Fairfax, VA 22031

703-934-3000

[email protected]



The individuals listed below were consulted on statistical aspects of the design.

Kurt R. Peters, Ph.D.
Survey Methodologist
ICF International
126 College Street
Burlington, VT 05401
802-264-3713

[email protected]



Ronaldo Iachan, Ph.D.
Technical Director
ICF International
Rockville, MD
[email protected]
301-572-0538


Randy ZuWallack, Ph.D.
Principal
ICF International

126 College Street
Burlington, VT 05401
[email protected]
802-264-3724


References



AAPOR. Response rates: An overview. Retrieved from http://www.aapor.org/Response_Rates_An_Overview1.htm.

Baruch, Y., & Holton, B.C. (2008). Survey response rates and trends in organizational research. Human Relations, 61, 1139-1161.

Cochran, W.G. (1977). Sampling Techniques. New York: Wiley and Sons, pp. 98, 259-261.

Dillman, D.A., Smyth, J.D., & Christian, L.M. (2014). Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. John Wiley & Sons.

Hager, M.A., Wilson, S., Pollak, T.H., & Rooney, P.M. (2003). Response rates for mail surveys of nonprofit organizations: A review and empirical test. Nonprofit and Voluntary Sector Quarterly, 32, 252-267.

Saunders, M.N.K. (2012). Web versus mail: The influence of survey distribution mode on employees’ response. Field Methods, 24, 1-18.



