AmeriCorps Competitive Advantage Survey

OMB: 3045-0162


AMERICORPS COMPETITIVE ADVANTAGE SURVEY


SUPPORTING STATEMENT FOR PAPERWORK REDUCTION ACT SUBMISSIONS

 

A. Justification

A1.  Need for Information Collection

The primary purpose of the analysis is to identify any competitive advantage in the job market that may be conferred by AmeriCorps experience. The results of the analysis will inform the Corporation for National and Community Service’s (CNCS) policy discussions around training, curricula, resources, and policies regarding member development. This is the first rigorous attempt to answer questions about the employment benefits of service, and as such, it is an initial step in a larger, anticipated exploration of AmeriCorps member experience and employment. This study will not be a definitive statement about the economic benefits of service, but rather an additional piece of information to help triangulate the diverse effects AmeriCorps may have on members.

This study is part of our broader research agenda on the impact of national service on members, and we anticipate combining its results with those of other studies to produce a more comprehensive and defensible outlook on the benefits and costs of service. Other research we are conducting or may conduct includes, for example, member exit surveys, alumni follow-ups, pre- and post-tests of members, and research on grantees and employers of members.

The pool of respondents for this study will be employees involved in the hiring process in the private, public, and non-profit sectors. Response to this survey is not required for obtaining grant funding support from CNCS.

This study will answer the following research questions:

  • Does AmeriCorps experience confer an advantage in a hiring situation?

    • Does the AmeriCorps brand send a signal to employers, relative to experiences in the private and public sectors?

    • Do the skills and experiences AmeriCorps gives members confer an advantage, relative to those members could potentially have received had they worked elsewhere?

In accordance with CNCS’ member theory of change, we hypothesize that the AmeriCorps member experience provides members increased employment opportunities through three mechanisms, shown in Exhibit 1.

Exhibit 1. Mechanisms For Employment Benefits of AmeriCorps Service




These mechanisms are: increased skills (human capital), increased access to resources (social capital or networks), and association with the AmeriCorps brand, which we hypothesize is a signal to employers of the quality of the applicant. This study focuses on the brand effect and the human capital gain provided by AmeriCorps service, and begins to explore the notion that there is an advantage conferred by these two factors in the job market. Limitations to the design and how those limitations affect the interpretation of the results are discussed at the end of this section.

The skills included in the survey are consistent with those cited in CNCS’ member theory of change, and were determined through a review of existing data on members, interviews with program staff and representatives from alumni networks, and consultation with subject matter experts. Additionally, a review of the existing literature on employer preferences (particularly Brown and Campion, 1994) helped narrow the skills to three of the most important: leadership, cross-cultural competency, and project development and implementation (i.e., project management)1.

The literature review on employer preferences and discussions with subject matter experts also identified three additional factors of importance: work experience in a sector related to the potential employer’s sector; possession of technical skills related to those needed to complete the proposed work; and a similar outlook and values, or “fit”, to the potential employer. Our selected methodology makes efficient and effective use of these criteria through an experimental design called a Discrete Choice Experiment (DCE). A DCE assesses the factors influencing choice behavior, and the method is frequently applied by market researchers and economists to determine respondents’ preferences for different characteristics of a product or service, in this case job candidates (see Cole et al. 2007; Mangham, Hanson, and McPake 2009). DCEs rely on Random Utility Theory, a well-established and extensively researched behavioral theory, to explain choice behavior. The theory holds that utility, a latent concept that cannot be directly measured, is composed of a systematic component and a random component. A probabilistic model, usually a logistic regression, is used to estimate these components. Data are collected using a survey format that presents respondents with sets of products varying along a limited number of characteristics (or factors) and asks respondents to choose the product in each set that they prefer. The data are then analyzed using logistic regression to identify the relative importance or value of each characteristic.
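To make the underlying model concrete, the random utility framework can be written in its standard conditional logit form (this is the textbook formulation, not a specification drawn from the survey instrument itself):

U_{ij} = V_{ij} + \varepsilon_{ij} = x_{ij}'\beta + \varepsilon_{ij}, \qquad P(i \text{ chooses } j) = \frac{\exp(x_{ij}'\beta)}{\sum_{k \in C_i} \exp(x_{ik}'\beta)}

Here x_{ij} is the vector of characteristics of candidate j shown to respondent i (for example, type of work experience or a particular skill), \beta captures the weight placed on each characteristic, \varepsilon_{ij} is the random component, and C_i is the set of candidates presented to the respondent. Estimating \beta with the logistic regression described above yields the relative importance of each characteristic.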

In this study, we chose to use a DCE because it allows us to efficiently and effectively explore the importance of the AmeriCorps brand (as a signal of applicant quality) and human capital resources, specifically the skills discussed above. DCEs typically are used when a new or hypothetical circumstance is being explored, or when revealed preference data are hard to obtain. In our case, we lack sufficient baseline information, and a randomized controlled trial that might generate revealed preference data would be extremely burdensome, expensive, and difficult to execute.

DCEs also make it possible to evaluate independently the effects of different attributes of a target object (e.g., a product) on decision making. We will be able to look at both the overall effect of brand and skills, as well as the independent effect of each. DCEs are increasingly used in research outside of marketing and economics, and are proving useful in healthcare policy, transportation, and environmental research (for example, Ryan and Gerard, 2003, and Viney, Lancsar, and Louviere, 2002). Norwood and Henneberry (2006) conducted a DCE on employers’ hiring preferences for recent college graduates that is similar to our study. Wilkin and Connelly (2012) used a related model to compare the effects of volunteer and paid work experience.

The procedure for respondents in our study is as follows. First, respondents are presented with a group of candidates whose characteristics are shown in a simplified format that mimics the content found on a typical resume. Candidates present different types of work experience (AmeriCorps vs. private sector or public/non-profit). The specific characteristics of each candidate are manipulated experimentally to isolate the independent effect of each characteristic, and those not manipulated are held constant. Respondents then rank the candidates in order of preference to hire for a hypothetical job, and indicate which candidates are unacceptable.


Validity and Limitations of Research Design

DCEs are noted for high internal validity, as the experimental design allows systematic variation of each relevant factor while holding others constant. In a full factorial design, all effects in the design are orthogonal, meaning they are measured independently. As a result, estimates are unbiased by unobservable or omitted variables. The design used in this research is a fractional factorial, in which only main effects and certain second-order interactions are orthogonal. This means there is the potential for confounding with higher-order interactions. However, we do not believe this is a high risk, given the complexity of interpreting higher-order interactions and the limited evidence that such interactions would be meaningful.
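As a rough illustration of the orthogonality property described above, the following Python sketch builds a half-fraction of a two-level, four-factor design using the defining relation D = ABC and verifies that the main-effect columns remain orthogonal. The factor names are hypothetical placeholders, not the survey's actual attributes, and the design shown is illustrative rather than the design used in this study.

import itertools
import numpy as np

# Four hypothetical two-level candidate attributes, coded -1/+1.
# The names are illustrative placeholders, not the survey's actual factors.
factors = ["americorps_experience", "related_skills", "leadership", "cross_cultural"]

# Half-fraction of the 2^4 full factorial: enumerate the first three factors,
# then set the fourth using the defining relation D = ABC.
base = np.array(list(itertools.product([-1, 1], repeat=3)))
design = np.column_stack([base, base[:, 0] * base[:, 1] * base[:, 2]])

print(dict(zip(factors, design.T.tolist())))  # 8 candidate profiles instead of 16

# Main effects are orthogonal if every pair of columns has a zero dot product:
# the off-diagonal entries of this matrix are all zero.
print(design.T @ design)

# The cost of the fraction: the main effect of the fourth factor is aliased with
# the three-way interaction of the first three, the kind of higher-order
# confounding discussed above.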

The external validity of a given DCE depends on a variety of factors, including how similar the exercise is to actual practice, the sample used, the strength of the administration, and whether the results predict or are otherwise similar to choices made in practice (i.e., revealed preferences). Studies have shown that revealed and stated preference studies produce comparable results, and that DCEs can reflect actual decision-making behavior, even when the DCE exercise does not closely resemble the behavior it is studying. For example, Train’s 2002 study predicting public transit use in San Francisco compared DCE data with data capturing actual transit use, and found that the two estimates differed by no more than 3 percentage points on average across a number of transit categories. Other studies (Whitehead et al., 2008; Telser and Zweifel, 2007; Wardman, 1988; Scarpa et al., 2003) confirm these findings in their own comparisons.

These results notwithstanding, we do not yet have a mechanism to assess the external validity of our research design in actual hiring situations. Therefore, the results pertain only to the hypothetical situation in which the job is not well defined, no interviews are held, no additional information about candidates is available, and there are no consequences to the respondents’ choices. Not all employers hire employees the same way, and this research’s emphasis on discrete factors may not be valid for employers that rely more on intuition or “feeling” in the hiring decision. Another limitation is that in our experiment we define the attributes of each candidate, whereas in real hiring decisions these factors must be inferred from job candidates’ resumes and interviews.

We have purposely introduced vagueness in the definitions of the factors and of the hypothetical job, in order to allow respondents to apply the details appropriate to their firm. This is necessary because the firms included in the sample are heterogeneous in the types of positions they hire for and in how they would interpret terms like “Related Field” and “Related Skills”. The estimates from our model will represent an average across this heterogeneity, and to the extent that interpretation of the design is similar within strata and firm types, our planned subgroup analysis should shed light on these differences. However, there remains the possibility of unobserved heterogeneous interpretation of the design, which would reduce the precision of the estimates but is not likely to bias the results.

Finally, we are confident that our sampling design will provide a representative sample of the population of firms in the Dun & Bradstreet Hoover Database. However, the generalizability of our findings to all firms in the US will be limited by how representative that database is of all US firms.

As a result of these considerations, the results of this research must be discussed in the context of these experimental conditions until we are able to identify a suitable means to assess the external validity of the results.


Exhibit 2 displays the time schedule for the entire project.

Exhibit 2. Employer Survey Project Time Schedule


Activity: Date

Pilot testing of surveys: January-February 2014
Finalize survey instruments and justification for surveys: February 2014
Office of Management and Budget (OMB) package under review: February-August 2014
Draw sample (in three replicates): Late May/June-August 2014
Final approval by OMB: TBA
Telephone interviews begin: Late August 2014
Telephone interviews end: November 2014
Web survey opens: Late August 2014
Web survey closes: November 2014
Final dataset: Late December 2014
Draft methodology report: January 2015
Final methodology report: February 2015


A2.  Indicate how, by whom, and for what purpose the information is to be used.

According to CNCS’ member theory of change, AmeriCorps experience contributes to the development of members’ life and career skills. CNCS’ Office of Research and Evaluation (R&E) has begun to develop a longer term research agenda to explore the pathways to developing these skills, as well as the pathways that are created as a result of the skills and experiences conferred by the member experience. A key component of this research agenda concerns employment, specifically the mechanisms by which members benefit from or leverage their experience to gain an advantage in the job market. This study will be one of a broader range of studies exploring the relationships between the member experience and employment.

The data gathered in this study will be used to statistically estimate employer preferences for different types of work experience, skills, and personal characteristics. We will develop statistical models to identify the independent relationships between the various factors in our experimental design and test our hypotheses about their relative importance in employers’ hiring decisions. The data collected will be used by research staff at CNCS. The data will be collected during FY 2014. The primary purpose of the analysis is to identify any competitive advantage in the job market that may be conferred by AmeriCorps experience. The results of the analysis will inform discussions regarding CNCS’ member development and experience, particularly policies, programming content, and resources targeting members’ career and professional development. Rather than serving as a definitive study on members’ employability, this study is an initial step in a larger, anticipated exploration of AmeriCorps members and employment.
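To illustrate the kind of statistical model referred to above, the sketch below estimates a conditional (McFadden) logit from choice data by direct maximization of the likelihood. The data are simulated and the attribute names are placeholders; this is a minimal sketch of the general approach, not the analysis code or specification for this survey.

import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(beta, X, chosen, task_ids):
    # Conditional logit: X has one row per candidate per choice task,
    # chosen flags the candidate selected in each task, and task_ids
    # groups the rows belonging to the same task.
    utility = X @ beta
    ll = 0.0
    for task in np.unique(task_ids):
        idx = task_ids == task
        u = utility[idx]
        m = u.max()  # log-sum-exp for numerical stability
        ll += u[chosen[idx] == 1].sum() - (m + np.log(np.exp(u - m).sum()))
    return -ll

# Simulated example: 200 choice tasks of 3 candidates each, with 3 binary
# attributes (placeholders for, e.g., AmeriCorps experience, related skills,
# leadership). The "true" weights are invented for illustration only.
rng = np.random.default_rng(0)
n_tasks, n_alts, n_feat = 200, 3, 3
X = rng.integers(0, 2, size=(n_tasks * n_alts, n_feat)).astype(float)
task_ids = np.repeat(np.arange(n_tasks), n_alts)
true_beta = np.array([1.0, 0.5, 0.25])
chosen = np.zeros(n_tasks * n_alts, dtype=int)
for t in range(n_tasks):
    u = X[t * n_alts:(t + 1) * n_alts] @ true_beta
    p = np.exp(u) / np.exp(u).sum()
    chosen[t * n_alts + rng.choice(n_alts, p=p)] = 1

result = minimize(neg_log_likelihood, x0=np.zeros(n_feat),
                  args=(X, chosen, task_ids), method="BFGS")
print(result.x)  # estimated relative importance of each attribute

In practice, subgroup analyses of the kind planned for this study would amount to fitting such a model within strata or interacting the design factors with firm characteristics.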

 A3.  Minimize Burden: Use of Improved Technology to Reduce Burden

This survey will use Internet administration, with limited telephone interviewing to determine eligibility for establishments for which no employee email address is available and to remind a selected subset of non-respondents to take the survey. Interviewers will call establishments to determine eligibility and identify an appropriate respondent within the organization. Once the appropriate employer respondent has been identified, an email invitation (Attachment A) and up to four email reminders (Attachment B) will be sent. We assume that all employer respondents will have work email addresses and access to the Internet. A reminder call will be made to a subset of non-respondents. We have attempted to minimize the burden on respondents by conducting the survey through the Internet and by minimizing the length of administration. The survey is expected to take 15 minutes per respondent, on average.

A4.  Non-Duplication

We have reviewed previous CNCS data collection efforts and have determined that there are no prior efforts to systematically collect data, representative of employers, on member experience and employment. There are no other surveys of a random sample of businesses that ask questions specifically related to the degree to which managers with hiring responsibilities give preference to job candidates with AmeriCorps experience.

A5.  Minimizing economic burden for small businesses or other small entities.

Approximately 65% of the AmeriCorps Competitive Advantage Survey respondents are expected to represent small businesses. According to the Small Business Administration, the Office of Advocacy defines a small business as an independent enterprise having fewer than 500 employees. (We treat all nongovernment establishments with fewer than 500 employees as small businesses.) The data collection procedures have been designed to minimize the burden on these individuals, as well as on representatives from larger organizations, through: 1) Web-based administration of the survey, which further reduces burden insofar as respondents may end any session and return to their previous answers at their discretion, allowing them to complete the survey at the time and place most convenient to them; and 2) use of a relatively brief instrument with an expected length of 15 minutes.

A6.  Consequences if the collection is not conducted or is conducted less frequently, as well as any technical or legal obstacles to reducing burden.

Under Goal 2 of CNCS’ strategic plan, AmeriCorps experience is intended to expand educational and professional opportunities. Understanding how and why AmeriCorps experience expands these opportunities is a core part of CNCS’ research agenda on member outcomes. Yet currently, CNCS has very little data on the relationship between the member experience and employment, or on employment outcomes of national service members. As an initial step in exploring this part of our research agenda, CNCS seeks to collect data examining the unique contributions AmeriCorps experience may have on members in their future employment. This information is important for CNCS and our grantees in understanding, validating, and improving the member experience.


While AmeriCorps is not a job-training program, if positive effects are found, this study will provide initial evidence that taxpayer investment in AmeriCorps helps create more qualified and desirable candidates in the job market; this evidence will inform the development of more in-depth research on employment outcomes of members. If negative or null effects are found, this will provide concrete direction on adjustments to AmeriCorps programs and policies. In particular, null effects would suggest not only that the AmeriCorps brand fails to communicate valuable information about a job candidate’s employability, but also that the skills we believe AmeriCorps members gain from their experience may not be the most marketable to employers. Without this study, policies for promoting service to employers2 or recruiting future members would not be evidence-informed, and would be less efficient or effective.

The existing evidence related to this study is not definitive, but supports this study as an important step in exploring the relationship between the member experience and employment. A 2006 qualitative study conducted by Abt Associates revealed that national service experience, particularly AmeriCorps, was not well understood, and hence under-valued, by major private sector employers. However, a 2008 longitudinal survey showed that AmeriCorps alumni believed they were more employable as a result of their service experience, and a 2012 member experience survey revealed that well over half of members thought their service was a professionally defining experience. None of these studies compared the effect of AmeriCorps experience to other work or internship experiences on members’ employment outcomes as this study will do. This study will provide concrete, quantitative data on the potential employment benefits that service has for AmeriCorps members (i.e. expanded social network or human capital, and “prestige” conferred by the AmeriCorps brand name), and will allow us to establish baseline knowledge that can help the agency track progress towards meeting an important strategic goal.

A7.  Special circumstances that would cause information to be collected in a manner requiring respondents to report more often than quarterly; report in fewer than 30 days after receipt of the request; submit more than an original and two copies; retain records for more than three years; and other circumstances specified in the instructions regarding statistical methods, confidentiality, and proprietary trade secrets.

The survey will not involve any of these circumstances.

A8.  Provide copy and identify the date and page number of publication in the Federal Register of the Agency’s notice. Summarize comments received and actions taken in response to comments. Specifically address comments received on cost and hour burden.

The 60-day Notice soliciting comments was published on Monday, February 24, 2014, on Regulations.gov (docket ID: CNCS-2014-0003). Two comments were received. Those comments involved suggestions regarding the hiring scenario used in the survey, the candidate experience profiles, and the brand identity of members. Clarifying information was provided to the commenters to further explain the use of a bachelor’s degree as the common level of education across candidates, and the use of a highly generalized office position as the hypothetical position of interest. A technical question about the possible incompatibility of certain combinations of candidate experiences was answered through an example. Finally, the suggestion that we test AmeriCorps brand salience vs. grantee brand salience was addressed by explaining the lack of sufficient space in the experimental design, as well as concerns about confusing and overburdening respondents with too much information.

 A9.  Payment to Respondents

The research literature on incentives—both experiments and meta-analyses—suggests that incentives are associated with increased cooperation rates and response rates across modes (depending on the study, cooperation and/or response rates may be reported; Brick et al. 2005; Church 1993; Edwards et al. 2002, 2005; James and Bolstein 1992; Shettle and Mooney 1999; Singer et al. 1999; Singer, Van Hoewyk, and Maher 2000; Yammarino, Skinner, and Childers 1991). Incentive payments to survey respondents have been used extensively for many years to improve survey response rates. There is considerable research-based evidence supporting the value of compensation for increasing cooperation and improving the speed and quality of response in a broad range of data collection efforts.

There is greater variation in the literature with respect to the comparative effectiveness of prepaid and postpaid (i.e., conditional on survey completion) incentives. Findings are inconsistent across meta-analyses. Singer et al. (1999) find no significant difference between prepaid and postpaid incentives, both of which are more effective than no incentive. Edwards et al. (2002) find prepaid incentives more effective than postpaid incentives, though both pre- and postpaid incentives are more effective than no incentive. Church (1993) finds no significant difference between postpaid and no incentive conditions. Cantor, O’Hare, and O’Connor (2007) draw a valuable distinction between mail surveys, which have generally found positive effects for postpaid incentives, and interviewer-mediated surveys, which have not generally found postpaid incentives to be effective. As the AmeriCorps Competitive Advantage Survey is self-administered, it should be closer to the experiences of mail surveys than interviewer-mediated surveys. Cantor et al. (2007) find that postpaid incentives in the $15 to $35 range increase response rates for mail surveys (cf. Cantor et al. 2003; Strouse and Hall 1997), as would be the case here. In consideration of the research literature on incentives, a $20 incentive will be paid conditional on survey response. Additionally, feedback from pilot testing with a limited number of respondents revealed that larger incentives ($30 and $50 were tested) did not increase response rates. As a result, we decided to retain a $20 incentive.

Despite the research above, it has not been fully established that government survey research benefits from incentives. With this in mind, we will randomly assign incentives to half of the sample and afterwards assess whether response rates were higher for those receiving incentives. This will inform not only this project but also later research on whether incentives should be considered for this type of sample and survey. It will also allow us to assess whether respondents receiving incentives systematically differed in their responses from those not receiving incentives.
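A minimal sketch of how the incentive comparison might be carried out is a standard two-proportion z-test of the response rates in the incentive and no-incentive arms. The counts below are invented placeholders; the actual figures would come from the fielded survey.

import numpy as np
from scipy.stats import norm

# Placeholder counts for illustration only.
responded = np.array([430, 380])     # [incentive arm, no-incentive arm]
invited = np.array([1100, 1100])

rates = responded / invited
p_pool = responded.sum() / invited.sum()
se = np.sqrt(p_pool * (1 - p_pool) * (1 / invited[0] + 1 / invited[1]))
z = (rates[0] - rates[1]) / se
p_value = 2 * norm.sf(abs(z))        # two-sided test of equal response rates

print(f"response rates {rates[0]:.3f} vs {rates[1]:.3f}: z = {z:.2f}, p = {p_value:.3f}")

A comparable approach (e.g., chi-square or regression-based tests) could be used to check whether substantive survey responses differ between the two arms.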


A10.  Assurance of Confidentiality and its basis in statute, regulation, or agency policy.

The Privacy Act of 1974 does not apply to business establishments, and therefore we are unable to legally assure privacy. However, we will make clear that we intend to keep information private and not to share individual responses. Measures will be taken by Abt SRBI to remove key identifiers (e.g., name of company) prior to data analysis, so that individual responses cannot be linked to a specific individual or employer. The basis for the assurance of privacy is the privacy statement and nondisclosure agreement that is part of the project’s contract.

The survey data will be stored on an Abt SRBI computer that is protected by a firewall that monitors and evaluates all attempted connections from the Internet. Private information on each survey respondent (name, email address, and direct telephone number) will be maintained in a separate data file apart from the survey data so that it is not possible to link particular responses to individual respondents. Once the survey is completed, all private data on each respondent will be deleted, though it should be noted that Abt SRBI maintains backup tapes for five years and transaction logs for seven years and these are not amenable to the deletion of particular files. The entire database will be encrypted so that any data stored will be further protected. Finally, access to any data with identifying information will be limited to only contractor staff directly working on the survey.

Participation in the survey is voluntary. All analyses, summaries or briefings will be presented at the aggregate level and it will not be possible to link specific responses to individual respondents in any way. The database delivered to CNCS will not include any identifying information such as names, addresses, telephone numbers, or DUNS numbers that might support reverse identification of respondents.

The exact statement indicating the privacy of respondents’ answers is: “Your responses to this survey will remain private to the extent permitted by law.” It can be found in Attachment C.

A11.  Sensitive Questions 

No questions of a sensitive nature will be asked in the survey.

A12. Hour burden of the collection

Sampled establishments: 23,805
Respondents: 1,540
Frequency of response: Once

Annual hour burden:
Screeners: 1,805 establishments x 2 minutes each = 60 hours
Extended interview: 1,540 respondents x 15 minutes each = 385 hours
Total burden (1,540 respondents): 445 hours

Annualized cost to respondents (445 hours at $52.69* per hour): $23,477


*U.S. Department of Labor, Bureau of Labor Statistics, Occupational Employment Statistics survey, May 2012, national employment and wage data by occupation: hourly wage for Human Resources Managers (accessed from http://www.bls.gov/news.release/ocwage.t01.htm)

Burden estimates are based on pilot survey length.

A13. Cost burden to the respondent

The telecommunications costs of the screener interviews are considered part of customary and usual business practices. The survey will not involve any additional cost burden to respondents or record-keepers, other than that described above. 

A14. Cost to Government

This survey involves a one-time cost to the Federal Government. The cost to the Federal Government for the Employer Survey totals $208,193. This includes revising and fielding the survey, providing a summary report to respondents, analysis, and reporting on the results. See Exhibit 3 for a detailed breakdown of these costs.

Exhibit 3. Breakdown of Costs by Project Tasks

Activity: Approximate cost (percentage of total cost)
Administrative Activities: $34,324 (16%)
Data Collection: $149,237 (72%)
Data Delivery: $9,150 (4%)
Report on Survey Methodology: $11,423 (5%)
Travel: $4,059 (2%)

A15. Reasons for program changes or adjustments in burden or cost.

This is a one-time survey.

 A16.  Publication of results

Results will be compiled into an internal report to CNCS stakeholders, with accompanying briefing and presentation materials. Because this study is some of the first systematic work CNCS has conducted on the member experience and employment, we anticipate that the results of the analysis will find the most use internally; we will work with the necessary stakeholders within CNCS and with key external partners to determine an appropriate dissemination plan, should one be necessary. Depending on the quality of the results, a peer reviewed article may be developed.

A17.  Explain the reason for seeking approval to not display the expiration date for OMB approval of the information collection.

The expiration date will appear on the materials.

 A18.  Exceptions to the certification statement

There are no exceptions.

References

Brick, J. Michael, Jill Montaquila, Mary Collins Hagedorn, Shelly Brock Roth, and Christopher Chapman. 2005. “Implications for RDD Design from an Incentive Experiment.” Journal of Official Statistics 21:571-589.

Bills, D. B. (1990). Employers’ Use of Job History Data for Making Hiring Decisions. The Sociological Quarterly, 31(1), 23-35.

Cantor, David, P. Cunningham, T. Triplett, and R. Steinbach. 2003. “Comparing Incentives at Initial and Refusal Conversion Stages on a Screening Interview for a Random Digit Dial Survey.” Paper presented at the Annual Meeting of the American Association for Public Opinion Research, Nashville, TN.

Cantor, David, Barbara C. O’Hare, and Kathleen S. O’Connor. 2007. “The Use of Monetary Incentives to Reduce Nonresponse in Random Digit Dial Telephone Surveys.” Pp. 471-498 in Advances in Telephone Survey Methodology, edited by James M. Lepkowski, Clyde Tucker, J. Michael Brick, Edith de Leeuw, Lilli Japec, Paul J. Lavrakas, Michael W. Link, and Roberta L. Sangster. New York: Wiley.

Church, Allan H. 1993. “Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis.” Public Opinion Quarterly 57:62-79.

Cole, Michael S., Robert S. Rubin, Hubert S. Feild, and William F. Giles. 2007. “Recruiters’ Perceptions and Use of Applicant Résumé Information: Screening the Recent Graduate.” Applied Psychology 56:319-43.

Dipboye, R. L. (1994). Structured and unstructured selection interviews: Beyond the job-fit model. Research in personnel and human resources management, 12, 79-123.

Drummond, M. F., Sculpher, M. J., Torrance, G. W., O'Brien, B. J., & Stoddart, G. L. (2005). Critical assessment of economic evaluation. Methods for the economic evaluation of health care programmes, 27-51.

Edwards, Phil, Rachel Cooper, Ian Roberts, and Chris Frost. 2005. “Meta-Analysis of Randomised Trials of Monetary Incentives and Response to Mailed questionnaires.” Journal of Epidemiology and Community Health 25:987-99.

Edwards, Phil, Ian Roberts, Mike Clarke, Carolyn DiGuiseppi, Sarah Pratap, Reinhard Wentz, and Irene Kwan. 2002. “Increasing Response Rates to Postal Questionnaires: Systematic Review.” British Medical Journal 324:1183-85.

James, Jeannine M. and Richard Bolstein. 1992. “Large Monetary Incentives and Their Effect on Mail Survey Response Rates.” Public Opinion Quarterly 56:445-53.

Louviere, Jordan J., Terry N. Flynn, and Richard T. Carson. 2010. “Discrete Choice Experiments are Not Conjoint Analysis.” Journal of Choice Modelling 3(3):57-72

Mangham, Lindsay J., Kara Hanson, and Barbara McPake. 2009. “How to Do (or Not to Do)…Designing a Discrete Choice Experiment for Application in a Low Income Country.” Health Policy and Planning 24:151-58.

Norwood, F. B., & Henneberry, S. R. 2006. Show me the money! The value of college graduate attributes as expressed by employers and perceived by students. American Journal of Agricultural Economics, 88(2), 484-498.

Ryan, M., & Gerard, K. 2003. Using discrete choice experiments to value health care programmes: current practice and future research reflections. Applied Health Economics and Health Policy, 2(1), 55-64.

Scarpa, R., Ruto, E. S., Kristjanson, P., Radeny, M., Drucker, A. G., & Rege, J. E. (2003). Valuing indigenous cattle breeds in Kenya: an empirical comparison of stated and revealed preference value estimates. Ecological Economics, 45(3), 409-426.

Shettle, Carolyn and Geraldine Mooney. 1999. “Monetary Incentives in U.S. Government Surveys.” Journal of Official Statistics 15:231-50.

Singer, Eleanor, John Van Hoewyk, Nancy Gebler, Trivellore Raghunathan, and Katherine McGonagle. 1999. “The Effect of Incentives on Response Rates in Interviewer-Mediated Surveys.” Journal of Official Statistics 15:217-30.

Singer, Eleanor, John Van Hoewyk, and Mary P. Maher. 2000. “Experiments with Incentives in Telephone Surveys.” Public Opinion Quarterly 64:171-88.

Strouse, Richard C. and John W. Hall. 1997. “Incentives in Population-Based Health Surveys.” Proceedings of the American Statistical Association, Survey Research Section, 952-957.

Telser, H., & Zweifel, P. (2007). Validity of discrete-choice experiments evidence for health risk reduction. Applied Economics, 39(1), 69-78.

Viney, R., Lancsar, E., & Louviere, J. 2002. Discrete choice experiments to measure consumer preferences for health and healthcare.

Wardman, M. (1988). A comparison of revealed preference and stated preference models of travel behaviour. Journal of Transport Economics and Policy, 71-91.

Wilkin, C. L., & Connelly, C. E. 2012. Do I Look Like Someone Who Cares? Recruiters’ Ratings of Applicants’ Paid and Volunteer Experience. International Journal of Selection and Assessment, 20(3), 308-318.

Whitehead, J. C., Pattanayak, S. K., Van Houtven, G. L., & Gelso, B. R. (2008). Combining revealed and stated preference data to estimate the nonmarket value of ecological services: an assessment of the state of the science. Journal of Economic Surveys, 22(5), 872-908.

Yammarino, Francis J., Steven J. Skinner, and Terry L. Childers. 1991. “Understanding Mail Survey Behavior: A Meta-Analysis.” Public Opinion Quarterly 55:613-39.

1 See also Cole, Rubin, Feild, and Giles (2007); Bills (1990); and Dipboye (1994) for the most relevant academic discussions of criteria and factors used in making hiring decisions.

2 An example of such an effort is CNCS’ new initiative, Employers of National Service (EONS). As part of the work conducted under the President’s Taskforce on Expanding Service, EONS educates employers about the benefits of hiring national service alumni and encourages them to take steps to recruit and hire alumni.
