AmeriCorps Community Engagement Survey

OMB: 3045-0161



SUPPORTING STATEMENT

Part B



Corporation for National and Community Service

AmeriCorps Community Engagement Study

Revised: October 7, 2014





Corporation for National and Community Service

1201 New York Ave., NW

Washington, DC 20525

Telephone: (202) 606-5000

[email protected]





B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS

This section describes the potential respondent population and the data collection and analysis procedures.

B.1 Potential Respondent Universe and Sampling Methods

For this assessment, there are three respondent groups, one for each phase, each with its own potential respondent universe. All three groups are surveyed in this design to adequately respond to the research questions. The respondent groups by phase are:

  • Phase 1: Primary AmeriCorps grantees (grantee survey; Attachment A)

  • Phase 2: AmeriCorps grantee service locations (service location survey; Attachment B)

  • Phase 3: Local community partners of AmeriCorps grantee service locations (partner survey; Attachment C)

The respondent universe includes all grantees in operation during the 2015 fiscal year (beginning October 1, 2014) that were also in operation during the 2014 fiscal year, their service locations, and up to 5 community partner organizations collaborating with each service location. This criterion was chosen because recently awarded grantees are not anticipated to have had sufficient time to develop the community engagement activities addressed in the survey. The sampling frame includes all AmeriCorps State & National, Tribes, and Territories grantees. The frame consists of 12,589 service locations and 773 grantees. The frame differs from the population in that 71 service locations missing RUCA code information were removed, as this variable was used to define the rural/urban stratum.



This universe presents important challenges to sampling. Most AmeriCorps grantee organizations have multiple service locations. The number of locations for an individual grantee ranges from 1 to over 100, with the largest grantee having more than 2,000 affiliated locations. In addition, service locations within the same grantee are likely to be very similar. A simple random sample of service locations would likely over-represent the few grantee organizations that have a large number of sites, limiting our ability to present results from a wide range of grantees. The high degree of clustering could also limit our statistical power.

In our sampling approach, the primary sampling unit for this survey will be grantee service locations. Sampling will stratify by the number of locations per grantee in order to ensure that grantees with many sites are not overrepresented. Within each stratum, a random sample of service locations will be selected, with the restriction that no more than 5 service locations will be drawn from any one primary grantee.
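For illustration only, the selection procedure described above could be sketched in Python roughly as follows. This is a hedged sketch, not the production sampling code: the DataFrame frame, its column names, the targets dictionary, and the function name draw_stratified_sample are hypothetical stand-ins for the actual sampling frame, stratum targets, and implementation.

    # Sketch of stratified selection of service locations with a per-grantee cap.
    # `frame` is a hypothetical pandas DataFrame of service locations with columns
    # 'location_id', 'grantee_id', 'size_stratum', and 'ruca_stratum'; `targets`
    # maps (size stratum, rural/urban stratum) to the desired number of locations.
    import pandas as pd

    def draw_stratified_sample(frame, targets, max_per_grantee=5, seed=2014):
        samples = []
        for (size_stratum, ruca_stratum), n_target in targets.items():
            cell = frame[(frame['size_stratum'] == size_stratum) &
                         (frame['ruca_stratum'] == ruca_stratum)]
            shuffled = cell.sample(frac=1, random_state=seed)   # randomize order within the cell
            counts, kept = {}, []
            for _, row in shuffled.iterrows():
                g = row['grantee_id']
                if counts.get(g, 0) < max_per_grantee:          # skip grantees already at the cap
                    counts[g] = counts.get(g, 0) + 1
                    kept.append(row)
                if len(kept) == n_target:
                    break
            samples.append(pd.DataFrame(kept))
        # Note: the actual design redistributed any excess selections to other grantees;
        # this simplified sketch just skips locations from grantees that reach the cap.
        return pd.concat(samples, ignore_index=True)

    # Example targets drawn from Table 1, e.g. 42 urban locations in the 1-to-5 stratum.
    targets = {('1 to 5', 'Urban'): 42, ('1 to 5', 'Large Rural City/Town'): 21}
    # sample = draw_stratified_sample(frame, targets)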

All grantees associated with the selected service locations will be surveyed, and up to 5 partners identified through each service location will be surveyed. The grantee and service location samples are expected to be representative of the entire population of grantees and service locations. The sample of partner organizations, however, is not expected to be representative of all existing or potential partner organizations for AmeriCorps grantees: we expect grantee service locations to name the partners that are the strongest or biggest advocates of their programs. We still believe information from these partner organizations to be useful for both instrumental and programmatic purposes. AmeriCorps has never attempted to collect data from community partners in a systematic way, despite this being a stakeholder group that is important to the success of AmeriCorps programs.

In addition to stratifying by grantees' number of service locations, the sampling design will also stratify by rural/urban commuting area (RUCA) code.[1] Eighty-eight percent of AmeriCorps grantee service locations in the sampling frame are located in metropolitan areas with RUCA codes 1 through 3. Using a simple random sample or sampling proportionally by RUCA code would limit the power of the study to provide any analysis of rural grantees. There is strong policy and programmatic interest in rural areas: AmeriCorps grant decision-making often prioritizes services in rural areas, as there is a strongly held belief that organizational and civic capacity is lower in rural areas due to limited resources. In addition, based on program experience, rural programs tend to be more decentralized, to focus more on capacity building, and to provide a broader array of services rather than targeting specific interventions. If this research finds this type of data collection effort useful for measuring community engagement, it is anticipated that it may be most useful to rural grantees, who have trouble measuring their interventions and activities.



Power Analysis

A target sample size of 300 completed responses from randomly selected service locations (Phase 2) was determined to provide estimates with an appropriate level of precision and statistical power for analyses, given available resources. As stated in Part A, most of our statistical tests will be contingency tables using χ2 tests or linear regression models using t-tests. Assuming 80% power and an alpha level of 0.05, with 300 respondents the minimally detectable effect size (Hedges' G[2] for linear regression coefficients, Cohen's w[3] for contingency tables) comparing two groups is 0.15.[4] Because few studies are available in this area, it is difficult to know what effect size to expect in this type of study and whether detecting an effect size of 0.15 is adequate. Cohen's seminal work on power analysis presents 0.10 as small and 0.30 as medium for w in contingency table analysis, and 0.20 as small for Cohen's d (similar to Hedges' G).[5] This guidance would suggest that our study has sufficient power to identify small effect sizes. From an empirical perspective, we were not able to find meta-analyses or other studies compiling research on community engagement or capacity building comparable to our study. In the educational literature, this effect size is near those commonly reported. For example, in a meta-analysis of online learning programs sponsored by the US Department of Education, the average effect size across 23 studies comparing traditional instruction with instruction augmented by online activities was 0.35, and 28 studies comparing traditional with purely online instruction found an average effect size of 0.14 (both used Hedges' G).[6] While our expected sample size will be underpowered for smaller effect sizes, the substantive utility of such effect sizes to program planning and development may be limited.
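For illustration, the simulation described in footnote 4 could be sketched in Python along the following lines. This is a minimal sketch under the assumptions stated in the footnote (continuous outcome, dichotomous group indicator, 4 correlated control variables, pairwise correlation of 0.3, 5,000 replications, n = 300, alpha = 0.05); the function name simulated_power and the use of numpy and statsmodels are our own choices, not a description of the software actually used.

    # Estimate power for detecting a standardized two-group difference via OLS regression.
    import numpy as np
    import statsmodels.api as sm

    def simulated_power(n=300, effect=0.15, n_sims=5000, rho=0.3, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        cov = np.full((5, 5), rho)
        np.fill_diagonal(cov, 1.0)                       # unit variances, pairwise correlation rho
        hits = 0
        for _ in range(n_sims):
            latent = rng.multivariate_normal(np.zeros(5), cov, size=n)
            group = (latent[:, 0] > 0).astype(float)     # dichotomous group indicator
            controls = latent[:, 1:]                     # 4 correlated control variables
            y = effect * group + rng.standard_normal(n)  # outcome with the target effect size
            X = sm.add_constant(np.column_stack([group, controls]))
            fit = sm.OLS(y, X).fit()
            if fit.pvalues[1] < alpha:                   # t-test on the group coefficient
                hits += 1
        return hits / n_sims

    # power_at_015 = simulated_power(effect=0.15)  # share of replications detecting the effect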

Assuming a response rate of around 80 percent, the proposed service location survey will be administered to 394 randomly selected service locations in Phase 2. Primary AmeriCorps grantees directly associated with the random sample of service locations will comprise the sample of respondents for Phase 1. Community partners identified by service locations during Phase 2 will comprise the sample for Phase 3.



Sample

The sample consists of 394 service locations, stratified by number of grantee service locations and rural/urban status. These locations are tied to 304 grantees, which will be surveyed in Phase 1. We will request up to 5 community partners from each service location, which will result in a maximum of 1,970 partner respondents. Assuming each location provides 5 partners, the total size of the sample to be surveyed is 2,668 (394 service locations + 304 grantees + 1,970 partners).

Table 1 reports the breakdown of the frame and the sample by these strata. The sample oversamples service locations belonging to grantees with fewer locations, as well as locations in rural areas. We capped selection at a maximum of 5 service locations from a single grantee; if more than 5 locations from a grantee were initially chosen, the excess selections were replaced by locations randomly drawn from other grantees. We also selected all locations for all Tribal and Territories grantees, given that this population is small (20 locations).

Table 1. Sampling Frame and Sample, by Strata

Number of Grantee Locations | Rural/Urban | Frame N | % of Frame | Sample N | % of Sample
1 to 5 | Small and Isolated Small Rural Town | 35 | 0% | 21 | 5%
1 to 5 | Large Rural City/Town | 28 | 0% | 21 | 5%
1 to 5 | Urban | 420 | 3% | 42 | 11%
6 to 25 | Small and Isolated Small Rural Town | 293 | 2% | 30 | 8%
6 to 25 | Large Rural City/Town | 246 | 2% | 30 | 8%
6 to 25 | Urban | 2,646 | 21% | 60 | 15%
26 to 100 | Small and Isolated Small Rural Town | 204 | 2% | 25 | 6%
26 to 100 | Large Rural City/Town | 215 | 2% | 25 | 6%
26 to 100 | Urban | 2,077 | 16% | 50 | 13%
Over 100 | Small and Isolated Small Rural Town | 197 | 2% | 7 | 2%
Over 100 | Large Rural City/Town | 303 | 2% | 16 | 4%
Over 100 | Urban | 5,905 | 47% | 47 | 12%
Tribes/Territories | Small and Isolated Small Rural Town | 0 | 0% | 0 | 0%
Tribes/Territories | Large Rural City/Town | 14 | 0% | 14 | 4%
Tribes/Territories | Urban | 6 | 0% | 6 | 2%
Total | | 12,589 | | 394 |




Tables 2 and 3 break down the sample by each stratum individually, and show the oversampling outlined above.



Table 2. Sampling Frame and Sample by Number of Grantee Locations

Number of Grantee Locations | Frame N | % of Frame | Sample N | % of Sample
1 to 5 | 483 | 4% | 84 | 21%
6 to 25 | 3,185 | 25% | 120 | 30%
26 to 100 | 2,496 | 20% | 100 | 25%
Over 100 | 6,405 | 51% | 70 | 18%
Tribes/Territories | 20 | 0% | 20 | 5%
Total | 12,589 | | 394 |



Table 3. Sampling Frame and Sample by Rural/Urban

Rural/Urban Status | Frame N | % of Frame | Sample N | % of Sample
Large Rural City/Town | 729 | 6% | 83 | 21%
Small and Isolated Small Rural Town | 806 | 6% | 106 | 27%
Urban | 11,054 | 88% | 205 | 52%
Total | 12,589 | | 394 |



After the sample was drawn, it was compared against the sampling frame on grantee characteristics not included as strata to ensure proportional representation. These characteristics include grant type, the number of years the grantee has received AmeriCorps funding, and service focus area.



B.2 Information Collection Procedures

As described in B.1 above, the survey administration will occur over three phases.

  • Phase 1: Survey of primary AmeriCorps grantees affiliated with randomly selected service locations

  • Phase 2: Survey of randomly selected AmeriCorps grantee service locations

  • Phase 3: Survey of up to five local community partners of randomly selected AmeriCorps grantee service locations

CNCS maintains a database of primary AmeriCorps grantees' contact information. Before initiating contact with grantees in Phase 1, we will notify state service commissions of the data collection effort. In Phase 1, we will e-mail primary AmeriCorps grantees an invitation to participate that describes the study's background and purpose and the survey procedures. Each primary AmeriCorps grantee respondent will be requested to provide contact information for the randomly selected service locations that will be invited in the second phase of the evaluation to complete the service location survey. CNCS has contact information for the majority of these service locations; however, this information is updated only once a year and may not be current. Thus, each primary grantee will be requested to provide contact information (i.e., name, phone number, and e-mail address) for its randomly selected service locations (up to three service locations per AmeriCorps grantee).

In Phase 2, we will e-mail an invitation to participate in the survey to the contact for each service location (based on contact information supplied by primary grantees in Phase 1). The invitation will describe the study's background and purpose and the survey procedures. Each service location respondent will be requested to provide contact information for up to five local community partners.

In Phase 3, we will e-mail an invitation to participate in the survey to the contact for each identified community partner (based on contact information supplied by service locations in Phase 2). The invitation will describe the study's background and purpose and the survey procedures.

The invitations will include a link to the survey website and a timeframe for response. We will send a reminder e-mail to respondents who have not responded within one week of the invitation. If, within two weeks of the initial invitation, we have not received the minimum number of responses from each targeted subsample, we will call potential participants to encourage response and address any potential concerns about participation.

Survey software will save responses in an analytic database.



Estimation and Calculation of Survey Weights

In order to obtain valid survey estimates, estimation will be done using properly weighted survey data. The weight to be applied to each respondent is a function of the overall probability of selection and appropriate non-response and post-stratification ratio adjustments. Base weights are calculated as the inverse of the selection probability under the sample design, d_h = 1/f_h, where f_h is the subsampling rate for stratum h.

There will inevitably be some nonrespondents to the survey, and weighting adjustments will be used to compensate for them. The nonresponse-adjusted weight for weighting class c will be computed as w_c = d_h × (R_c + M_c) / R_c, where R_c is the base-weighted sum of eligible respondents in weighting class c, and M_c is the base-weighted sum of eligible nonrespondents in weighting class c.
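For illustration, the base-weight and nonresponse-adjustment calculations could be carried out as in the following Python sketch. The DataFrame sample and its column names ('stratum_rate' for f_h, 'weight_class' for the propensity quintile described below, and a 0/1 'responded' indicator) are hypothetical; the sketch simply applies the formulas above.

    # Base weights and class-level nonresponse adjustment.
    import pandas as pd

    def add_weights(sample):
        out = sample.copy()
        out['base_weight'] = 1.0 / out['stratum_rate']       # d = 1 / f_h
        def adjustment(group):
            resp = group.loc[group['responded'] == 1, 'base_weight'].sum()
            nonresp = group.loc[group['responded'] == 0, 'base_weight'].sum()
            # (R_c + M_c) / R_c: respondents carry the weight of nonrespondents in class c
            return (resp + nonresp) / resp if resp > 0 else float('nan')
        factors = out.groupby('weight_class').apply(adjustment)
        out['nr_factor'] = out['weight_class'].map(factors)
        out['final_weight'] = out['base_weight'] * out['nr_factor'] * out['responded']
        return out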

The weighting classes will be based on a propensity score model created with the goal of minimizing the bias due to non-response. The propensity score model will estimate the probability of response using logistic regression. The variables in this model are derived from the various data sources available on the sampling frame and are listed in Table 4.



Table 4. Variables for Non-Response Analysis

Urban vs rural service location

CNCS award amount

Number of years receiving CNCS funding

Number of AmeriCorps members

Number of service locations

Service focus area (Education, Healthy Futures, etc.)

Total revenue (from IRS 990 filings)

Grantee type: National Direct, State Competitive, State Formula, Tribal, Territory

Percent of members serving full-time vs part-time

Geographical area (e.g. State, Metropolitan Area)

Performance Measure data from current and previous year (for continuation grantees)



The propensity scores will be grouped into quintiles. Within each quintile class (c = 1, ..., 5), we will ratio adjust the respondents to reflect the nonrespondents, as described above.
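For illustration, forming the propensity-score quintile classes could look like the following Python sketch. The DataFrame frame_vars, the predictor column names, and the helper name propensity_quintiles are hypothetical; in practice the Table 4 variables would be supplied as the predictors, and the resulting quintile labels would serve as the weighting classes used above.

    # Response-propensity model and quintile weighting classes.
    import pandas as pd
    import statsmodels.api as sm

    def propensity_quintiles(frame_vars, predictors):
        # One-hot encode categorical predictors, then fit a logistic regression of response.
        X = sm.add_constant(pd.get_dummies(frame_vars[predictors], drop_first=True).astype(float))
        model = sm.Logit(frame_vars['responded'], X).fit(disp=0)
        scores = model.predict(X)                  # estimated probability of response
        return pd.qcut(scores, q=5, labels=False)  # quintile class 0-4 for each sampled unit

    # frame_vars['weight_class'] = propensity_quintiles(frame_vars, ['urban_rural', 'award_amount'])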



B.3 Maximizing Response Rates

We will employ a number of strategies to maximize response rates, including keeping all survey instruments short and simple. Procedures were designed to minimize respondent burden: participants will determine when and where they respond, based on their own convenience. We will send multiple personalized reminders to participate as necessary and will follow up with phone calls to non-responders. These approaches are expected to maximize response rates.

E-mail requests, which include a hyperlink to the survey, will be sent to survey respondents to complete the Web-based surveys. Strategies that will be used to enhance the response likelihood include the following:

  • E-mail requests will be individualized by respondent name.

  • The request will include friendly and inviting language and include an estimate of the short time required to complete the survey.

  • These e-mail requests will convey the potential value of results to respondents, and will indicate that the participants’ responses will not be identified to any Government Agency.

  • Web-based survey respondents also will be provided with a “resume” capability that allows them to break off the session mid-survey and then return to the survey at a later time to complete it without losing previously entered data.

  • Reminder e-mail notices will be sent to non-responders beginning 3 days after the initial invitation. A total of 3 reminder e-mails (1 week apart), each including a hyperlink to the Web-based survey, will be sent to non-responders.

  • Phone calls will be made to non-responders by contract research staff.



Respondents to a pre-test expressed enthusiasm for participation.

We expect to achieve at least an 80 percent response rate.

Measuring Non-Response Bias

For each of the three instrument samples, we will compare the responses of early or initial survey respondents to those of individuals who respond late (following the last e-mail notification) on key survey items. This will be done using standard regression models in which the independent variable is response timing (early/late), and the model chosen will depend on the scale of the specific survey item (binary, ordinal, interval). Studies[i] have shown that late respondents, or those who respond only after several attempts, tend to have some similarities with individuals who do not respond (non-respondents). Any differences between these subsets of survey respondents will provide a measure of the potential non-response bias.
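For illustration, the early/late comparison on a single survey item could be implemented as in the following Python sketch. The DataFrame responses, the 0/1 'late' indicator, and the item name are hypothetical; in practice the model (linear, logistic, or ordinal) would be chosen to match each item's scale, as described above.

    # Compare early and late respondents on one survey item.
    import statsmodels.formula.api as smf

    def compare_early_late(responses, item, binary=False):
        formula = f"{item} ~ late"   # 'late' = 1 if the unit responded after the final reminder
        if binary:
            model = smf.logit(formula, data=responses).fit(disp=0)
        else:
            model = smf.ols(formula, data=responses).fit()
        # The coefficient on 'late' estimates the early/late difference; a significant
        # difference signals potential non-response bias on this item.
        return model.params['late'], model.pvalues['late']

    # diff, p_value = compare_early_late(responses, item='engagement_score')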

In addition, we will conduct a statistical analysis to identify whether there are statistically significant differences between respondents and non-respondents on characteristics that we have in our databases for all grantees and service locations. To the extent that there are significant differences, we will add non-response weights to the survey weights, as outlined in section B.2, to adjust for differential propensity to respond. Respondents and non-respondents will be compared on the characteristics listed in Table 4 above.


B.4 Tests of Procedures

The survey instruments have been reviewed extensively by the Office of Research & Evaluation, AmeriCorps Program Officers, and the research team at AFYA. AFYA research staff have also tested the data capture procedures to ensure that the Web-enabled surveys capture and render data correctly. Two members of our project team manually completed 10 copies of each survey (on hard copy) in parallel with the online data entry component and compared the outputs to ensure that all data were captured correctly. This process was also used to confirm the proper functioning of the skip logic.

A pilot test was conducted with a sample of 8 grantees, 7 service locations, and 7 partners. Respondents were purposively chosen to reflect different grantee sizes and rural/urban locations. The results from the pilot were assessed to ensure response patterns were as expected. While the sample is too small for even basic statistical tests, the responses allowed us to understand the extent to which respondents used the range of response options, whether responses on similar items were in alignment, and whether responses to certain questions were as expected.

Follow-up interviews were held with 3 grantees, 4 service locations, and 2 partners that responded to the pilot. Semi-structured interviews were held with these respondents, asking the following questions:

  • How likely would you have responded if the interviewer had not scheduled you for the pilot test?

    • What would have made you more likely to respond?

    • What would have made you less likely to respond?

  • Did you have any problems linking to, or navigating the survey?

  • Was the introduction page clear?

  • Were the questions clear?

  • Ask respondents to briefly explain their decisions in responding to a question on each page (except contact info).

  • Do you have any recommendations for making the survey easier to understand?

  • Do you have any recommendations for improving the survey generally?

The pilot also tested the administration procedures, including the full process from contacting grantees, requesting contact information for service locations, contacting service locations, requesting contact information for partners, and then contacting the partners. The results of this stage helped improve our plans for administration.

Overall, the results from the pilot testing pointed to areas where we made substantive changes to the survey and to its administration. The pilot also confirmed many aspects of the survey. Respondents found the scales intuitive and understandable and the instructions clear. Respondents’ understanding of key terms like “community members,” “program model,” “stakeholders,” and “partner organizations” was in line with our intended meaning, although they made some recommendations for adjusting the wording of some survey items to be clearer. Respondents also indicated that they felt the length of the survey was reasonable, and most felt that no key items were missing. One respondent would have liked the opportunity to describe the changes that have resulted from AmeriCorps and community partner efforts, particularly with respect to service capacity. We did not address this comment due to concern that a closed-ended question would carry a high risk of demand characteristics or positive bias based on the response options and the nature of the survey; it could end up yielding limited useful information given the additional burden such an item would create for respondents. An open-ended question could be more useful here, but given our goal of a relatively short survey that can be easily administered, we believe that adding such a question would inordinately increase respondent burden. Finally, respondents selected from a range of responses for most items, indicating that if the full sample shows the same degree of variability, statistical modeling could find interesting and informative results.

The following summarizes key findings related to the administration of the survey:

  • One item on the service location survey addressing the frequency of community needs assessments was changed.

  • One item on the service location survey asking for the number of partner organizations was changed to add response options.

  • Contact information for grantees is not as up to date as we had expected. We need to build in time and resources to contact multiple individuals at each grantee organization as needed.

  • Our proposed method for contacting service locations and partners worked well, and we plan on implementing it with the same fidelity as in the pilot.

  • A template letter with information about the study will be provided to grantees to send to service locations, and to service locations to send to their partners. This letter will validate the survey’s authenticity and provide emphasis that the grantee/service location is participating, thereby increasing the likelihood of response.

  • An initial notice from AmeriCorps advising of the study is necessary to provide assurance of the authenticity of the study, particularly if invitations and reminders will come from the contractor.

  • The term “operating site” should be avoided when referring to the location where AmeriCorps members serve, as its interpretation can be confusing. “Service location” is more appropriate and has replaced “operating site” in all instances.

  • The term “AmeriCorps program” has been adjusted in some places to provide more clarity on certain items, as one respondent found some items using this term confusing as to whether it referred to the members or some other aspect of the program.

  • We removed references to, or wording that could be interpreted as referring to, fundraising activity and displacement of staff by members or volunteers, to avoid concern from grantees that they would be reporting prohibited activities (AmeriCorps members are allowed to engage in only very limited fundraising activities). Although our interest in learning about these activities is programmatic rather than compliance-related, there was concern that the survey could be perceived as a “hidden” mechanism to ensure compliance with regulations.

  • One item referring to the goals of the AmeriCorps program was reworded, to refer instead to the reasons for applying to AmeriCorps.

  • One item referring to “potential beneficiaries” and “potential members” was confusing. This item has been changed to remove these options.

  • An item referring to the growth of the organization as a result of AmeriCorps was split into sub-questions to allow respondents to address growth in different aspects of their program.

  • The invitation and introduction were revised to state more clearly how survey results will be used.

B.5 Names and Telephone Numbers of Individuals Consulted

The organization responsible for data collection activities during the survey administration period will be a contracted research firm. Data analysis will be conducted by CNCS Office of Research and Evaluation, led by Robin Ghertner, MPP.

[1] RUCA codes are defined by the USDA Economic Research Service. For documentation, see: http://www.ers.usda.gov/data-products/rural-urban-commuting-area-codes/documentation.aspx#.VByU3RbmUec

[2] Hedges, L. V. (1981). Distribution theory for Glass's estimator of effect size and related estimators. Journal of Educational Statistics, 6(2), 107–128.

[3] Cohen, J. (2013). Statistical Power Analysis for the Behavioral Sciences. Routledge.

[4] This result was obtained via simulation. We simulated 5,000 datasets with a continuous dependent variable, a dichotomous independent variable, and 4 additional control variables. All independent variables were defined to be correlated with one another, with a correlation of 0.3; this value was chosen because we expect the various grantee characteristics not to be independent of one another. We ran these simulations with varying sample sizes and effect sizes to identify the minimally detectable effect size (Hedges' G).

[5] Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159.

[6] Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies. Report to the US Department of Education.

[i] Lahaut, V. M., Jansen, H. A., van de Mheen, D., Garretsen, H. F., Verdurmen, J. E., & van Dijk, A. (2003). Estimating non-response bias in a survey on alcohol consumption: Comparison of response waves. Alcohol and Alcoholism, 38(2), 128–134.


Bose, J. (2001). Nonresponse bias analyses at the National Center for Education Statistics. Proceedings of Statistics Canada Symposium 2001: Achieving Data Quality in a Statistical Agency: A Methodological Perspective.


