
AmeriCorps Competitive Advantage Survey

OMB: 3045-0162


AMERICORPS COMPETITIVE ADVANTAGE SURVEY


SUPPORTING STATEMENT FOR PAPERWORK REDUCTION ACT SUBMISSIONS

B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

B1. Describe (including numerical estimate) the potential respondent universe and any sampling or other respondent selection method to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.

The potential respondent universe for the AmeriCorps Competitive Advantage Survey consists of all establishments in the United States (defined as the 50 states and the District of Columbia). An establishment is defined as an employer located at a particular address or location. Data will be collected with respect to this location, even if the employer has other locations. This approach recognizes that hiring responsibilities are generally distributed to sites and not the sole function of the organization’s headquarters. One individual will be selected per establishment.

The respondent universe excludes establishments employing fewer than 50 people. The sampling frame will be created from Dun & Bradstreet's Hoover's database, which provides all essential frame information (e.g., employee size, NAICS code, contact information) for 15.2 million establishments. This file is considered the most comprehensive commercially available business list.

Given the focus of the discrete choice items in the questionnaire on an entry-level general office position that does not require highly specialized experience or technical skills but does require a bachelor’s degree, we seek to reach respondents who participate in some way in hiring staff for such positions. Participation could mean that the respondent reviews resumes, screens candidates, conducts interviews, or recruits candidates. Therefore, the survey includes two components. The first is a screening telephone call to sampled establishments to: 1) determine the eligibility of the establishment/business; and 2) identify a person who is appropriate to complete the survey questionnaire. This person is referred to as the “selected respondent.” The second is administration of the questionnaire to the selected respondent. Please see Attachment D for the screener.

Sampled establishments meeting one or more of the following three criteria will be treated as ineligible for the survey: 1) telephone recruitment efforts cannot confirm that the establishment is open/in business during the field period; 2) the establishment does not hire people for entry-level general office positions that require a bachelor’s degree but do not require highly specialized experience or technical skills; and 3) the establishment employs fewer than 50 people.

To efficiently reach the three business sectors of interest (government, nonprofit, and for profit), we will stratify the sample by industry using NAICS codes. Although a company can be not-for-profit in any industry, most nonprofits are located in education, healthcare, social services, religious and civic services, and the arts (Salamon, Sokolowski, and Geller, 2012). The proposed stratification scheme is shown in Exhibit 1 below.

Exhibit 1. Strata for AmeriCorps Competitive Advantage Survey

Stratum     | NAICS code            | Industry name
Government  | 92                    | Public Administration
            | 6111                  | Elementary and Secondary Schools
            | 22132                 | Sewage Treatment Facilities
            | 4911                  | Postal Service
            | 51912                 | Libraries and Archives
            | 71219                 | Nature Parks and Other Similar Institutions
Nonprofit   | 61 (except 6111)      | Educational Services
            | 6214, 6216, 6219      | Outpatient Care Centers; Home Health Care Services; Other Ambulatory Health Care Services
            | 622                   | Hospitals
            | 623                   | Nursing and Residential Care Facilities
            | 624                   | Social Assistance
            | 7111                  | Performing Arts Companies
            | 712                   | Museums, Historical Sites, and Similar Institutions
            | 813                   | Religious, Grantmaking, Civic, Professional, and Similar Organizations
For profit  | All other NAICS codes | All other industries

We discuss the characteristics of the government and nonprofit strata in further detail below.

Government. Generally, Public Administration (NAICS 92) is considered government. However, Public Administration represents only 57% of the government sector. To identify the remaining government agencies, we conducted a review of the Quarterly Census of Employment and Wages (QCEW) data and identified other NAICS classes that include government agencies. We included industry classes in which at least 50% of the establishments are public entities. We included Elementary and Secondary Schools in the government stratum since 78% of them are public. As currently defined, we expect that over 90% of the establishments contacted will be government entities. Further, we estimate that this stratification covers over 80% of the government sector. Government entities included in the for-profit and nonprofit strata will be identified by an item in the survey instrument and correctly classified. Conversely, for-profit and nonprofit entities included in the government stratum will be identified by an item in the survey instrument and correctly classified.

Nonprofit. Salamon et al. (2012) conducted research concerning the reach of nonprofit business (Section 501(c)(3)) across industry class, geography and time, using identified QCEW data matched against the publicly available register of tax exempt entities maintained by the Internal Revenue Service. We leveraged the industry information to identify the NAICS codes that will efficiently reach nonprofit establishments. As currently defined, we expect that 65% of the establishments contacted will be nonprofit. Further, we estimate that this stratification covers nearly 85% of the nonprofit sector. Nonprofit entities included in the for-profit and government strata will be identified by an item in the survey instrument and correctly classified. Conversely, for-profit and government entities included in the nonprofit stratum will be identified by an item in the survey instrument and correctly classified.

We cannot precisely specify the size of the universe due to the nature of the QCEW establishment size estimates, which are only reported for nongovernment establishments. Conversely, QCEW counts of establishments by ownership, which include government establishments, do not separate establishments by size. The size of the universe and sample size for the three strata described above are shown in Exhibit 2.

Exhibit 2. Universe and Sample Sizes for AmeriCorps Competitive Advantage Survey


Stratum        | Universe Size: Private Only | Universe Size: All Govt. | Universe Size: Total | Sample Size
50+ employees  |                             |                          |                      |
Government     | 4,677                       | 235,431                  | 240,108              | 513
Nonprofit      | 53,899                      | 23,882                   | 77,781               | 513
For profit     | 326,332                     | 34,879                   | 361,211              | 514
Total          | 384,908                     | 294,192                  | 679,100              | 1,540
All Sizes      |                             |                          |                      |
Government     | 21,314                      | 235,431                  | 256,745              | 513
Nonprofit      | 1,064,487                   | 23,882                   | 1,088,369            | 513
For profit     | 7,766,219                   | 34,879                   | 7,801,098            | 514
Total          | 8,852,020                   | 294,192                  | 9,146,212            | 1,540

Notes: Universe sizes are from QCEW Quarter 1 2013 counts. Government counts include establishments of all sizes, as a breakdown by government establishment size is not available.

We estimate that a response rate of 7%-9% will be achieved, based on the results of the FMLA Employer Survey conducted by Abt SRBI on behalf of the Department of Labor (DOL) in 2012. This was the response rate achieved for key informants who were contacted by email and were not asked to complete the survey over the phone. Although a $20 conditional incentive has been added to the protocol, as was used in the FMLA Employer Survey, the AmeriCorps Competitive Advantage Survey will not benefit from the name recognition and authority that DOL had among human resources professionals, particularly given that survey’s focus on the Family and Medical Leave Act.

B2. Describe the procedures for the collection of information, including: Statistical methodology for stratification and sample selection; Estimation procedure; Degree of accuracy needed for the purpose described in the justification; Unusual problems requiring specialized sampling procedures; and any use of periodic (less frequent than annual) data collection cycles to reduce burden.


To minimize demand characteristics, all survey communication will be done through Abt SRBI, with no direct communication between respondents and CNCS. Communication will state that the survey is an independent study on hiring, financed by a federal agency, but will not indicate that the agency is CNCS.


The sample will be selected from the D&B Hoover’s database. The nature of the sampling scheme to be employed is driven by the capabilities of the database, which is accessible solely via a Web interface. We will draw a sample of establishments with 50 or more employees within the strata defined above using the random sample function of the Hoover’s database. In order to identify nonprofit establishments included in the sample, establishments with employer identification numbers (EIN) will be run against the Urban Institute’s list of nonprofits. At this stage, Abt SRBI will keep all establishments selected in the for-profit and government strata and all those in the nonprofit stratum that were matched to the Urban Institute list, and will select a subsample of organizations within the nonprofit stratum that were not matched to the Urban Institute list. The goal of this procedure is to increase the effective incidence of nonprofit establishments in the survey sample. The selection of the subsample will take place offline and not in the Hoover’s database; the subsampling fraction will depend on the number of matches. Based on a review of the Hoover’s database, we expect 60% of the sampled records to have an EIN. All retained establishments (i.e., all but those dropped in the nonprofit subsampling) will then be uploaded to the Hoover’s database, and one person with an email address will be selected per organization using the Hoover’s random sample function, with the following Hoover’s functional groups excluded from matching: administration, secretary, and facilities. In the first replicate, all selected establishments without email addresses will be retained for the telephone verification. Likewise, all selected establishments with an email address will be retained for the email sample. The second and third replicates may involve a subsample of those with email addresses if email sample response rates surpass those of the telephone verification sample. Any subsampling will take place offline.
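The offline nonprofit subsampling step could be carried out along the following lines. This is a minimal sketch only: the file names, column names, and the subsampling fraction shown are illustrative assumptions, not the actual D&B or Urban Institute layouts, and the real fraction will be set once the number of EIN matches is known.

import pandas as pd

# Hypothetical inputs: file names and column names are illustrative.
nonprofit_sample = pd.read_csv("nonprofit_stratum_sample.csv")   # sampled nonprofit-stratum establishments
ui_list = pd.read_csv("urban_institute_nonprofit_list.csv")      # Urban Institute list of nonprofit EINs

# Flag establishments whose EIN matches the Urban Institute nonprofit list.
nonprofit_sample["ein_match"] = nonprofit_sample["ein"].isin(ui_list["ein"])

# Keep all matched records; subsample the unmatched records at a placeholder fraction.
subsample_fraction = 0.5  # placeholder; depends on the number of matches
matched = nonprofit_sample[nonprofit_sample["ein_match"]]
unmatched = nonprofit_sample[~nonprofit_sample["ein_match"]].sample(
    frac=subsample_fraction, random_state=1)

retained = pd.concat([matched, unmatched])
# Record each establishment's subsampling rate so it can enter the base weights (see Estimation, below).
retained["subsample_rate"] = retained["ein_match"].map({True: 1.0, False: subsample_fraction})
retained.to_csv("nonprofit_stratum_retained.csv", index=False)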

Exhibit 3. Sample Allocation and Expected Screener Outcomes for AmeriCorps Competitive Advantage Survey






(The Gov., Nonprofit, For-profit, and Total columns show the estimated actual establishment category.)

Stratum        | N       | n*     | Subsample | Screens | Gov. | Nonprofit | For-profit | Total
Government     | 118,208 | 6,090  | 6,090     | 548     | 499  | 29        | 13         | 540
Nonprofit      | 62,384  | 10,730 | 7,444     | 670     | 13   | 479       | 44         | 535
  Match EIN    |         | 4,852  | 4,159     | 374     | 0    | 374       | 0          | 374
  No match EIN |         | 5,878  | 3,286     | 296     | 13   | 105       | 44         | 161
For-profit     | 314,540 | 5,160  | 5,160     | 464     | 2    | 5         | 458        | 464
Target         |         |        |           |         | 514  | 513       | 514        | 1,540

* Based on a 9% response rate.

Estimation

In order to obtain valid survey estimates, estimation will be done using properly weighted survey data. The weight to be applied to each responding business establishment is a function of the overall probability of selection, and appropriate non-response and post-stratification ratio adjustments. Base weights are calculated as the inverse of the selection probability based on the sample design, $w_{\mathrm{base},hj} = 1/(\pi_h \, f_{hj})$, where $\pi_h$ is the initial probability of selection within stratum $h$ and $f_{hj}$ is the subsampling rate for stratum $h$ and segment (government, nonprofit, for-profit based on screening) $j$.

There will inevitably be some nonrespondents to the survey, and weighting adjustments will be used to compensate for them. The nonresponse-adjusted weight for weighting class $c$ will be computed as $w_{NR,c} = w_{\mathrm{base}} \times (R_c + M_c)/R_c$, where $R_c$ is the base-weighted sum of eligible responding establishments in weighting class $c$, and $M_c$ is the base-weighted sum of eligible nonresponding establishments in weighting class $c$.

The weighting classes will be based on a propensity score model created with the goal of minimizing the bias due to non-response. The propensity score model will estimate the probability of response using logistic regression. The dependent variable will be response to the survey; the independent variables will be based on frame data available from the D&B database, such as establishment revenues, industry code, number of employees, geographic location, years in business, and business type (headquarters, branch). The propensity scores will be grouped into quintiles. Within each quintile class ($c = 1, \ldots, 5$) we will ratio adjust the respondents to reflect the nonrespondents as described above.

To help reduce possible under-coverage errors in the sampling frame and reduce possible nonresponse bias, the final estimation weights will also include a post-stratification adjustment to reflect the most recent population information available from the QCEW. The adjustments will be made within broad classes (post-strata) such as establishment type (government, private), region, and size of establishment (50-99 employees, 100 to 249, 250 to 499, 500 to 999, 1,000 or more).
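The weighting steps described above (base weights, propensity-score weighting classes, nonresponse ratio adjustment, and post-stratification) could be assembled roughly as follows. This is a minimal sketch under stated assumptions: the file name, variable names, selection rates, and control totals are placeholders rather than the actual D&B or QCEW values, and production weighting would be carried out in the contractor's statistical software.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical analysis file; column names are placeholders, not the actual frame fields.
df = pd.read_csv("sampled_establishments.csv")
# Assumed columns: stratum, subsample_rate, eligible (0/1), responded (0/1),
# revenue, employees, years_in_business, region, business_type, post_stratum

# 1. Base weights: inverse of the overall selection probability (stratum rate x subsampling rate).
stratum_rate = {"government": 0.025, "nonprofit": 0.138, "for_profit": 0.014}  # placeholder rates
df["w_base"] = 1.0 / (df["stratum"].map(stratum_rate) * df["subsample_rate"])

# 2. Response propensity model: logistic regression of response on frame variables.
X = pd.get_dummies(
    df[["revenue", "employees", "years_in_business", "region", "business_type"]],
    drop_first=True)
propensity = LogisticRegression(max_iter=1000).fit(X, df["responded"]).predict_proba(X)[:, 1]

# Group propensity scores into quintile weighting classes.
df["wt_class"] = pd.qcut(propensity, 5, labels=False)

# 3. Nonresponse ratio adjustment within each quintile class:
#    w_nr = w_base * (R_c + M_c) / R_c, applied to eligible respondents.
elig = df[df["eligible"] == 1]
total_by_class = elig.groupby("wt_class")["w_base"].sum()
resp_by_class = elig[elig["responded"] == 1].groupby("wt_class")["w_base"].sum()
adjustment = total_by_class / resp_by_class
df["w_nr"] = np.where(df["responded"] == 1, df["w_base"] * df["wt_class"].map(adjustment), 0.0)

# 4. Post-stratification: ratio-adjust respondent weights to external (QCEW) control totals.
controls = {"A": 240000, "B": 78000, "C": 360000}  # placeholder post-stratum totals
resp = df[df["responded"] == 1]
ps_factor = {k: v / resp.loc[resp["post_stratum"] == k, "w_nr"].sum() for k, v in controls.items()}
df["w_final"] = df["w_nr"] * df["post_stratum"].map(ps_factor).fillna(1.0)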

Sampling Error

We expect the design effect (DEFF) for the survey to be about 1.5-2.0. The expected standard errors for each establishment type are presented in Exhibit 4. Standard errors will be computed using statistical software that accounts for the complex survey design, such as SUDAAN or the SAS SURVEY procedures.

Exhibit 4. Expected Standard Errors

(Columns 50/50 through 10/90 give the expected standard error for an estimated proportion with the indicated split.)

Stratum    | Sample size | DEFF | 50/50 | 40/60 | 30/70 | 20/80 | 10/90
Government | 513         | 1.50 | 2.7%  | 2.6%  | 2.5%  | 2.2%  | 1.6%
           |             | 2.00 | 5.6%  | 5.5%  | 5.2%  | 4.5%  | 3.4%
Nonprofit  | 513         | 1.50 | 4.9%  | 4.8%  | 4.5%  | 3.9%  | 2.9%
           |             | 2.00 | 5.6%  | 5.5%  | 5.2%  | 4.5%  | 3.4%
For-profit | 514         | 1.50 | 4.9%  | 4.8%  | 4.5%  | 3.9%  | 2.9%
           |             | 2.00 | 5.6%  | 5.5%  | 5.2%  | 4.5%  | 3.4%
Total      | 1,540       | 1.50 | 1.6%  | 1.5%  | 1.4%  | 1.2%  | 0.9%
           |             | 2.00 | 1.8%  | 1.8%  | 1.7%  | 1.4%  | 1.1%
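As background, a standard approximation for the standard error of an estimated proportion under a design effect is $SE(\hat{p}) \approx \sqrt{\mathrm{DEFF}} \times \sqrt{\hat{p}(1-\hat{p})/n}$. For example, with $n = 513$, $\mathrm{DEFF} = 1.50$, and a 50/50 split, this gives $\sqrt{1.5} \times \sqrt{0.25/513} \approx 2.7\%$. (This is an illustration only; the exact computation behind Exhibit 4 is not documented in this statement.)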

B3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.

The AmeriCorps Competitive Advantage Survey employs a number of strategies to maximize response rates while maintaining cost control; these are detailed below. Data collection is exclusively through web surveys, which reduces data collection costs and minimizes respondent burden. Survey administration will proceed according to the steps shown in Exhibit 5.


Exhibit 5. Survey Administration Steps

[Flowchart omitted. The flow is: 1. Preparation of Sample File; if an email address is available on the frame, proceed to 4. Email Invitation, otherwise to 2. Telephone Screener; if the screener yields an email address, 4. Email Invitation, otherwise 3. Mail Invitation; followed as needed by 5. Mail Reminder, 6. Email Reminder, and 7. Telephone Reminder.]





Step 1: Preparation of sample file. The preparation of the sampling file is described in the responses to Questions 1 and 2, above.

Step 2: Telephone screener for establishments for which there are no named individuals with email addresses in the D&B frame (see Attachment D). As mentioned in response to Question 1 of this submission, the survey includes two components, the identification of an employee involved in the hiring process (also referred to as “selected respondent”) and the survey administration to that identified individual. Identification of the selected respondent will be conducted via a telephone call to the selected establishment. A maximum of five attempts will be made to contact establishments.

Step 3: For establishments for which no email address is available on the frame, for which a selected respondent is identified by the telephone screener, and which do not provide an email address for that respondent, a mailed invitation will be sent containing a link to the unique web survey URL for that individual (see Attachment E).

Step 4: For individuals for whom an email address is available in the Hoover’s database or for whom an email was provided during telephone verification, an email invitation will be sent from CNCS.gov using CNCS’s GovDelivery email system (see Attachment A).

Step 5: A reminder letter will be mailed to selected respondents for whom no email address is available and who did not respond to the invitation letter (see Attachment F).

Step 6: Nonrespondents for whom an email address is available will be sent up to five email reminders from CNCS (see Attachment B).

Step 7: Telephone reminder calls will be made to 7% of the sample drawn; the limited use of telephone reminders is based on cost concerns. Calls will be targeted to strata with lower rates of response (see reminder text in Attachment G). Where the selected respondent does not take the call, the interviewer will leave a reminder message on voicemail or with an office manager or other gatekeeper. As with reminder calls to selected respondents with a known nonworking email address, interviewers will be able to update the email address and send a message with the selected respondent’s unique survey URL. In both cases, a message will be left once, minimizing respondent burden; numbers will be redialed only where there is a busy signal, temporary service interruption, or other temporary condition preventing a call. Calls will be made during working hours in the time zone associated with the establishment address; no calls will be placed on weekends or federal holidays.

In addition to the above-mentioned steps, as described in Part A, Question 9, respondents will be offered a $20 incentive paid on completion of the survey.

To assess the impact of nonresponse bias in our study, we will conduct statistical analysis to identify any characteristics of respondents that are correlated with response. Using a logistic regression model, we will create inverse probability weights for each respondent to adjust the results for non-respondents. These nonresponse weights will be combined with sampling weights based on our stratification plan to create the final weights for our analysis.

There is the possibility of demand effects in this study, as AmeriCorps is presented by name throughout the study while candidates from the public and private sectors are not given such recognizable names. The decision was made to use the AmeriCorps and CNCS brand names due to pilot feedback about the perceived legitimacy of the survey, confusion about CNCS’s identity as a federal agency, and the need to describe AmeriCorps experience to those who may not understand what it entails. In our study planning phase, R&E discussed the merits and drawbacks of this approach, but ultimately decided that a more generic term, such as national service or community service, was too vague and invited incorrect interpretations about the nature of that experience. For example, using the term “national service” might imply military service, which is qualitatively quite different from AmeriCorps national service. Likewise, some might interpret “community service” to be court-mandated service or some other form of required volunteering, which is certainly different from AmeriCorps national service. While we realize that private and public sector candidates do not have a similar brand name presented with their work experience, we felt that including more specific descriptors, such as a real or made-up firm name, might convey unwanted and/or undetectable signals to respondents.

To minimize demand characteristics, all communication will be sent through the contractor, with no communication directly from CNCS. We have ensured that the descriptions of AmeriCorps, public sector, and private sector work experience are the same length and contain similar text. We have also included a question administered post-survey that seeks to detect any demand effect communicated, and a non-survey interview administered to a small sample of respondents that asks about respondents’ experience.1 We have also included questions in our second pilot, to be administered before main data collection (see Section B4), that will help identify demand characteristics and provide insight into changes to the instrument to reduce these effects.



B4. Describe any tests of procedures or methods to be undertaken.


The survey instrument included in this justification was reviewed by experts in discrete choice methodology, human resources, and employee recruitment. The instrument and an initial version of administration procedures were piloted in January and February, 2014. That pilot led to important changes to the administration and the instrument, outlined in detail in the subsection below.

To adequately test the current instrument, in agreement with the Office of Information and Regulatory Affairs, we will conduct a pilot on a sample of 30 to 40 respondents drawn from the sampling frame. All respondents to the pilot will receive a $20.00 incentive payment. The pilot instrument has an additional series of questions related to the experience and comprehension of respondents to the survey (included in Appendix I). We will also conduct semi-structured follow-up interviews with a randomly selected group of respondents. The questions used in the interview are found in Exhibit 6.

Exhibit 6. Questions for Pilot Follow-Up Interviews

Did you have any problems linking to the survey, or navigating within the survey?

Was the introduction to the survey clear?

  • Who did you think sponsored the survey?

  • How do you think they wanted you to respond in the survey?

Were the instructions and questions clear?

In your own words, briefly describe AmeriCorps.

Before taking this survey, if we had asked you what you knew about AmeriCorps, what would you have said?

Based on the information provided in the survey on AmeriCorps, plus anything you knew about AmeriCorps previously, what is your view of the organization and its programs?

Did you feel like you had sufficient information about the job candidates to make a realistic decision about them?

Of the 7 different job candidate characteristics in the survey, which were the most important to you? Why?

  • Which were the least important? Why?

Were any job candidate characteristics missing from the survey that you typically use in making hiring decisions?

How similar was your thought process in this survey to how you approach hiring actual employees?

Do you have any recommendations for making the survey easier to understand?


Following the administration of this pilot, we will prepare a brief memorandum highlighting the key findings and any changes made to the survey instrument and administration procedures. We will share this memorandum with OIRA as an addendum to this justification. If the changes are not substantial, OIRA has agreed to provide us with clearance to administer the instrument to the full sample. If there are substantial changes made to the instrument, we will resubmit this justification package as a new 30-day notice; however, the instrument will not need to go through a 60-day public comment period and publication in the Federal Register.


Initial Pilot

As of early January, CNCS did not have an active clearance for general piloting that could be used for this project. Despite that, we felt it necessary to test the instrument and data collection process to identify any areas of confusion or weakness. To get enough information to refine our final instrument while heeding Paperwork Reduction Act regulations, we developed 2 additional instruments that were similar to, but not the same as, the master copy of our draft instrument, with different instructions and significantly modified questions. Respondents would be randomly assigned one of the 3 versions of the instrument to give us maximum feedback. While these 3 instruments would not give us data that we could aggregate and analyze together, we would be able to get feedback on the overall layout of the survey, ease of use of the question matrices, and time to complete. For the pilot, a sample was drawn from the frame, selected companies were verified via telephone, a screener was administered, and an invitation email was sent. Respondents were asked a number of debriefing questions in the web survey and were asked whether they would be willing to be interviewed by CNCS staff. We were not able to connect with any respondents to gather additional feedback.

After completing an initial review of the job candidate factors by experts in discrete choice methodology and in human resources and recruiting, we intended to do only one pilot test, as we believed we would get enough feedback from even a handful of respondents. However, our first pilot test with respondents from the sample yielded only 2 complete responses, one of whom was willing to be contacted to give detailed feedback (she could not be reached for follow-up). We had 5 bounce-backs during this pilot, and thus discovered that the contact information in the Dun & Bradstreet database (our sampling frame) was of lower quality than anticipated. Telephone reminders and follow-ups revealed that many respondents did not get the emailed survey links. Additionally, a number of interviewers reported back that respondents did not want to take the survey because they could not identify CNCS, were unsure of its actual status as a federal agency, or were not able to find sufficient information on the study on CNCS’s website.

Based on these results, we decided to try a second pilot roughly 2 weeks later to see if some modifications to incentive size, inclusion criteria for respondents, and communications protocol would increase response. We created a study information page on CNCS’s website, sent the first email invitation from CNCS’s GovDelivery system, and edited our introductory language to address issues of perceived legitimacy. Again, respondents would be randomly assigned one of the 3 versions of the survey. At the end of the second pilot, we had 2 respondents with complete surveys and one with a partially completed survey. There were 4 bounce-backs.

The results from the pilot revealed that the estimated time to complete the survey was about 5 minutes less than anticipated (10 minutes versus our projected 15). Additionally, despite the small sample size, we felt that larger incentives did not seem to influence response rates, and determined that our initial incentive offering of $20 was sufficient. We also determined that our communications and reminder protocol should be amended to more clearly communicate the legitimacy of CNCS as a federal agency, and made adjustments to anticipate the large number of bounce-backs and bad email addresses. Finally, we broadened the inclusion criteria for respondents to include individuals who are involved in additional aspects of the hiring process, such as recruitment, resume review, or interviewing. While we recognize that the small sample size of the pilot efforts does not allow us to test the data collection process, instrument, and output as rigorously as would be desired, we felt that it was important to gain some feedback, however limited, before proceeding with main data collection. We believe the feedback collected and changes made have improved the quality of both the data collection process and the instrument.

B5. Provide the name and telephone number of individuals consulted on statistical aspects of the design, and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.

Abt SRBI has been contracted to conduct the survey. The individuals at Abt SRBI assigned to this project include:

  • Benjamin Phillips, Ph.D., Vice President, (617) 386-2609

  • Randall ZuWallack, M.S., Senior Sampling Statistician, (617) 386-4068

CNCS will analyze the information itself. The individuals at CNCS assigned to this project include:

  • Adrienne DiTommaso, MPA, Research Assistant, 202-606-3611

  • Robin Ghertner, MPP, Senior Research Analyst, 202-606-6772

In addition, the Project Officer for CNCS is Adrienne DiTommaso, MPA, Research Assistant, 202-606-3611.


References


Salamon, Lester M., S.W. Sokolowski, and Stephanie L. Geller. 2012. “Holding the Fort: Nonprofit Employment During a Decade of Turmoil.” Nonprofit Employment Bulletin No. 37, January. Center for Civil Society Studies, Johns Hopkins University, Baltimore, MD. Available at http://ccss.jhu.edu/wp-content/uploads/downloads/2012/01/NED_National_2012.pdf.


1 These options are based on work by Orne; see: Orne, M.T. (1969). Demand characteristics and the concept of quasi-controls. In R. Rosenthal & R. Rosnow (Eds.), Artifact in Behavioral Research (pp. 143-179). New York: Academic Press.
