Evaluation of Registered Apprenticeship

OMB: 1205-0462


SUPPORTING STATEMENT FOR

PAPERWORK REDUCTION ACT 1995 SUBMISSION

INFORMATION COLLECTION PLAN FOR

AN EVALUATION OF REGISTERED APPRENTICESHIP


A. JUSTIFICATION

1.  Explain the circumstances that make the collection of information necessary.  Identify any legal or administrative requirements that necessitate the collection.  Attach a copy of the appropriate section of each statute and regulation mandating or authorizing the collection of information.

This information collection includes a one-time survey of sponsors of registered apprenticeship programs and site visits to five states. The survey is necessary to gain a thorough and systematic understanding of the views of sponsors, who are primarily employers. The information will be critical in refining policies on registered apprenticeship within a demand-driven workforce system. The data collection will fill a gap in knowledge by providing, for the first time, systematic information on the views of sponsors in general and of sponsors in high-growth industries who have recently embraced apprenticeship as a training method.

A random stratified sample of sponsors will be used to ensure broad representation while permitting detailed information on the views of sponsors in groups of industries, including clusters of those identified in the President’s High Growth Job Training (HGJT) Initiative. Examples of industries identified in the high growth initiative include Aerospace, Geospatial, and Health Services, among many others.

Registered apprenticeship is a time-tested training method for in-depth occupational skill development. It allows for hands-on instruction by experienced workers at the jobsite with a mentor and related classroom instruction, all within a framework that identifies needed skills and provides a recognized credential. Apprenticeship programs are financed, sponsored, and implemented primarily by private sector employers and their workers, thus presenting minimal costs to the taxpayer. While an apprenticeship program may be sponsored 1) unilaterally by a single employer, a group of employers, or a trade organization, or 2) jointly by a single employer or a group of employers with a union, employers are involved as sponsors in all cases. Currently, there are about 28,800 registered apprenticeship programs (and a similar number of sponsors) in the U.S., with an estimated 413,000 registered apprentices in a multiplicity of occupations.

The Department of Labor, in accordance with the 1937 National Apprenticeship Act, is responsible for promoting the apprenticeship concept, assisting interested employers in developing and establishing apprenticeship programs, recognizing State Apprenticeship Agencies and Councils, registering apprenticeship programs and agreements, certifying registered apprentices, and monitoring the progress of registered programs and apprentices. While promoting registered apprenticeship is a legislative requirement, expansion of this training approach has long been a goal pursued by DOL. The Department’s efforts in the last five years have focused on expanding the use of apprenticeship to high-growth industries and new occupations. These promotional efforts have been an important element in the broader DOL initiative to create a demand-driven workforce system responsive to employer needs and successful in developing worker talent.

The data collection proposed here will provide a thorough and systematic understanding of the views of sponsors. It will employ a methodology fully consistent with the Administration’s strong commitment to scientifically based information, and will provide accurate, statistically sound, national-level data, rather than piecemeal, anecdotal information.

The proposed data collection will provide a strong sense of what motivates and troubles sponsors; what they value, dislike, or would like changed; what they see as the main benefits and costs of apprenticeship; the types of data they maintain on apprentices; and their contacts with the One-Stop Centers in regard to apprenticeship. The study will also show how, if at all, views differ by type of industry, number of apprentices, by type of program (unilateral or joint), or region.

The survey will be conducted by telephone and Internet (per the respondent’s choice). In addition to the survey, the evaluation involved face-to-face on-site interviews with key stakeholders of registered apprenticeship in five states (Iowa, New Hampshire, North Carolina, Pennsylvania, and Texas). In each state, the contractor talked in depth with apprentices, sponsors, state apprenticeship administrators, One-Stop Career Center directors, and community college officials and instructors involved in providing related instruction to apprentices. Information on burden associated with the site visits is included here, pursuant to an 11/15/2006 notice of action (OMB Control No. 1205-0460) which provided an important clarification of the Paperwork Reduction Act regulations.

Section 172 of the Workforce Investment Act (WIA) (attached) is the authority under which the Employment and Training Administration (ETA) is conducting this evaluation.

2.  Indicate how, by whom, and for what purpose the information is to be used.  Except for a new collection, indicate the actual use the agency has made of the information received from the current collection.

The information to be collected in the survey, combined with the findings from on-site interviews in five states, will be used by administrators and policy makers in the Department of Labor to: (1) determine appropriate measures to improve the operation of the national apprenticeship system, including possible changes in regulation, administrative guidance, and technical assistance; (2) promote further expansion of registered apprenticeship in high growth industries and make the system more employer-responsive; and (3) provide information on the types of data available for possible impact studies and benefit-cost analyses in the future.

Since this is a new collection, there has been no previous use of the information.

3. Describe whether, and to what extent, the collection of information involves use of automated, electronic, mechanical, or other technological collection techniques or other forms of information technology, e.g., permitting electronic submission of responses, and the basis for the decision for adopting this means of collection.  Also, describe any consideration of using information technology to reduce burden.

The survey will be administered through telephone interviews or, at the respondent’s election, by electronic submission over the Internet. Sponsors will be contacted by telephone and asked to complete the survey during the call or, alternatively, via the Internet. The option of completing the survey via the Internet is intended to reduce respondent burden and to increase the efficiency of data collection and processing. It is anticipated that between 50 and 66 percent of the respondents will reply electronically.

Computer Assisted Telephone Interviewing (CATI) will be used to aid in collecting telephone responses. This technology will permit cost savings and improve accuracy, as the CATI program will accept only valid responses and can be programmed to check for logical consistency across answers. Calls can also be made through an auto-dialer linked to the CATI system, which virtually eliminates dialing error. The automated call scheduler will also simplify scheduling and rescheduling of calls to respondents at their convenience and can assign cases to specific interviewers.

4. Describe efforts to identify duplication.  Show specifically why any similar information already available cannot be used or modified for use for the purposes described in item 2 above.

The information to be gathered through this survey of apprenticeship sponsors and from the site visits is not otherwise available. Regarding the survey, there have been no systematic data collection activities on registered apprenticeship from the sponsors’ perspective, either on a national basis or in regard to sponsors in new and emerging high growth industries. While a few studies have considered costs and benefits associated with registered apprenticeship, they were limited to single states (such as Florida and Washington) and did not provide information from the sponsors’ perspective.

Administrative data on registered apprenticeship do not provide information from the sponsors’ perspective on costs, benefits, challenges, and ways to improve the system. Rather, such data, collected primarily for monitoring the apprenticeship system, include basic information on programs, key characteristics of programs, and apprentices.

Regarding information from the process study conducted via site visits to five states: There are no other studies that provide information relating to general issues in administration of registered apprenticeship from the perspective of many different actors in the system. The site visits provided such information as well as a chance to clarify some issues that are addressed in the survey instrument.

5. If the collection of information impacts small businesses or other small entities (Item 5 of OMB Form 83-I), describe any methods used to minimize burden.

The collection of information involves some small businesses or other small entities that will be contacted in this one-time survey and that were contacted as part of the site visits. Keeping the questionnaire for the survey short and using the Internet are the two methods intended to minimize burden on these respondents in the survey. The site visits were to a limited number of small entities and involved face-to-face discussions which were conducted briskly and did not involve any record searches, so burden was minimized.

6. Describe the consequences to Federal program or policy activities if the collection is not conducted or is conducted less frequently, as well as any technical or legal obstacles in reducing burden. 

Without the survey, Federal initiatives to advance a demand-driven system for developing worker skills will not be informed by systematic, high quality data from one of the key providers and consumers of registered apprenticeship, i.e., sponsors. Developing new policy and adjustments to oversight, regulation, guidance, technical assistance, and promotional activities in response to real-world concerns and conditions will suffer from the use of anecdotal and piecemeal information. The policy and programmatic functions depend on DOL’s knowledge of current views and emerging challenges from those who develop and run apprenticeship programs. In addition, planning for possible studies of net impacts or returns on investment would be significantly hampered by lack of an accurate understanding of the types of data tracked by sponsors.

Collecting information via the site visits was necessary in order to refine issues asked in the survey and to gain more in-depth information about issues, concerns and problems as cited by other actors in the apprenticeship system.

The survey and site visits are a one-time data collection, so they cannot be conducted less frequently. There are no technical or legal obstacles in reducing burden.

7. Explain any special circumstances that would cause an information collection to be conducted in a manner:

  • requiring respondents to report information to the agency more often than quarterly;

  • requiring respondents to prepare a written response to a collection of information in fewer than 30 days after receipt of it;

  • requiring respondents to submit more than an original and two copies of any document;

  • requiring respondents to retain records, other than health, medical, government contract, grant-in-aid, or tax records, for more than three years;

  • in connection with a statistical survey, that is not designed to produce valid and reliable results that can be generalized to the universe of study;

  • requiring the use of statistical data classification that has not been reviewed and approved by OMB;

  • that includes a pledge of confidentiality that is not supported by authority established in statute or regulation, that is not supported by disclosure and data security policies that are consistent with the pledge, or which unnecessarily impedes sharing of data with other agencies for compatible confidential use; or

  • requiring respondents to submit proprietary trade secrets or other confidential information unless the agency can demonstrate that it has instituted procedures to protect the information’s confidentiality to the extent permitted by law.

There are no special circumstances that would cause this information collection to be conducted in any manner listed above. 

8. If applicable, provide a copy and identify the date and page number of publication in the Federal Register of the agency’s notice, required by 5 CFR 1320.8(d), soliciting comments on the information collection prior to submission to OMB.  Summarize public comments received in response to that notice and describe actions taken by the agency in response to these comments.  Specifically address comments received on cost and hour burden.

Describe efforts to consult with persons outside the agency to obtain their views on the availability of data, frequency of collection, the clarity of instructions and recordkeeping, disclosure, or reporting format (if any), and on the data elements to be recorded, disclosed, or reported.

Consultation with representatives of those from whom information is to be obtained or those who must compile records should occur at least once every 3 years -- even if the collection of information activity is the same as in prior periods.  There may be circumstances that may preclude consultation in a specific situation.  These circumstances should be explained.

a. Federal Register Notice and Comments

The public was given an opportunity to review and comment (Federal Register, Vol. 71, No. 151, August 7, 2006, pages 44713 through 44714), with comments due October 6, 2006. Comments were received from two organizations. Both expressed their support for the goals of the survey. One of the commenters made numerous suggestions for changing the survey instrument. These proposed changes and DOL’s responses are discussed below:


The commenter suggested that survey respondents be asked if their programs are registered in multiple states, what are the states, and how their experiences differed among the states. Related to this, the commenter also recommended adding questions about sponsors’ attempts to obtain reciprocity in other states and the outcome of such efforts. Response: DOL agrees that it would be useful to understand the extent to which sponsors have tried to register programs in multiple states, and how well they succeeded. Brief questions on these topics will therefore be added to the instrument. However, questions regarding the specific states and asking for a comparison of experiences among these states would likely require complicated and lengthy responses. Questions on these last two topics will thus not be added.


The commenter noted that question 8 on the survey may be confusing to sponsors who do not employ apprentices. The commenter recommended that the questions be rewritten to ask if the sponsor has ever registered apprentices who completed apprenticeships with other programs or employers. Response: DOL agrees that the question could be confusing if asked of those sponsors who manage apprenticeship programs but are not responsible for employing the apprentices. DOL will reword the question to focus it on only sponsors who also are responsible for employing apprentices.


The commenter recommended adding questions (related to questions 11 and 17 in the survey instrument) asking for sponsors’ views on reasons that apprentices drop out before completion. The commenter also recommended that options for responses include: transferring to other programs, gaining their craft license before completion, or becoming employed in a state without reciprocity. Response: DOL is very concerned about apprentices not completing programs and the underlying causes for this. While sponsors may not know fully why individual apprentices drop out of programs, sponsors’ speculations are of interest to DOL; a short question on this will therefore be added to the survey.


The commenter recommended additional response categories for question 12 concerning changes sponsors would like to see in the system. The suggested options include: clear guidance from the apprenticeship system on program requirements, due process when program approval is delayed or rejected, a national system that allows multi-state registration, no variation in standards between joint or unilateral programs, and including interviews with sponsors and apprentices in SAC reviews. Response: Many of the issues raised by the commenter are of interest to DOL. Although adding more specific response categories will lengthen the questionnaire, DOL will add two new short response categories (one on due process and the other on multi-state registration) to test the concerns of sponsors regarding these issues. One of the issues raised by the commenter, concerning clarity of guidance, will be used to adjust a response category for question 15. Regarding variation in standards between joint and unilateral programs, please see the discussion below in regard to differential treatment of programs. Also, DOL will contact the commenter to learn more about concerns and recommendations regarding SAC reviews but will not add a response category on this issue.

The commenter recommended for question 15 (in which sponsors are asked to rate from excellent to poor their state apprenticeship agency on several dimensions) the addition of several new categories including: the processes for considering applications, providing an opportunity to correct problems, innovation in online training and performance-based testing, online and real-time registration, and efforts to allow or advance expansion of the apprenticeship program in the respondent’s industry. The commenter also recommended that sponsors be given an opportunity to provide a simple explanation for their answers to question 15 (regarding their ratings on the service quality factors) and suggested that more quantitative information be requested on the issue of timeliness, in order to provide more comparable responses. Response: DOL formulated question 15, with a limited number of response categories and a simple rating scale, to permit quick responses that would result in general information on a limited number of areas of possible concern. DOL recognizes the importance of the issues cited by the commenter. Several of these are similar to suggestions made in regard to question 12 for which new response categories have already been added. DOL will alter the response categories for question 15 to address two of the recommended items here (on use of on-line registration and promoting expanded use of apprenticeship) and one mentioned in regard to question 12 (on providing clear guidance). DOL will not ask for simple explanations regarding the responses to question 15, however. Such explanations would greatly add to the length of the survey and yield a multitude of responses that would be difficult to interpret, code, and analyze in the aggregate.


Regarding question 23, the commenter recommended that it be reworded to use the word “where” rather than “who” in asking about the type of organization that provides training in order for the question to be clearer. The commenter also suggested revising the response category of “joint apprenticeship training program facility” to be “sponsor-owned or operated training program facility,” since there are some programs that operate their own training facilities but are neither company-sponsored nor joint programs. The commenter further stated that there was no reason to distinguish among joint, company, or other unilateral training facilities as long as they were run by sponsors. Response: In order to avoid any confusion, the question has been reworded to ask “What organization conducts” related instruction. The wording on the item regarding sponsor-operated training facilities has been changed to make it more general.

The commenter also recommended adding questions asking sponsors if they were treated differently than competitor programs in approval, auditing or other regulatory processes, and if so, why they thought they were treated differently. Response: DOL takes seriously any allegations of unjustified differential treatment and will promptly investigate specific problems that are brought to light. However, DOL believes such differential treatment is rare and should be handled through an administrative process rather than in this short survey. Also, asking sponsors about differential treatment presupposes they have sufficient knowledge about state agency treatment of other programs to competently compare it to their own experiences. This level of knowledge is not likely to be widespread, however. DOL will contact the commenter to learn more about specific reports of unfair differential treatment in some states but will not add questions on this to the survey instrument.


The commenter also proposed adding a question on whether sponsors believe that the apprenticeship system serves the apprentices first and foremost and, if not, how the respondent would change the system to serve apprentices’ interests better. The commenter noted that the original 1937 National Apprenticeship Act was intended to set standards that would safeguard apprentices and expressed the hope that the data collection would yield information to support the intent of the legislation. Response: DOL also hopes that the survey will provide information that will help support the central goal of the legislation and ensure that the U.S. apprenticeship system can adapt and prosper in a challenging global economy. The survey questions were formulated under the assumption that sponsors have concerns and interests distinct from those of apprentices, however. Questions asking whether the system is sufficiently focused on the interests of apprentices and how it could be changed to better serve apprentices would likely create confusion among respondents as to the intent of the survey. Such questions would also add to its length. For those reasons, the proposed questions will not be added.


b. Consultations Outside the Agency. Consultations on the research design, sample design, data sources, needs, and study reports have occurred during the study’s design phase and will continue to take place throughout the evaluation study. Senior technical staff members from the contractor, Planmatics, and its subcontractor, Westat, have provided substantial input to DOL for this study. These staff are listed in section B.5 of this statement.

9. Explain any decision to provide any payment or gift to respondents, other than remuneration of contractors or grantees.

This item is not applicable.  No payment or gift to respondents will occur.

10. Describe any assurance of confidentiality provided to respondents and the basis for the assurance in statute, regulation, or agency policy.

Respondents to the survey will be assured that their cooperation is voluntary and that their responses will be held in confidence.  Data will be collected and analyzed by DOL’s contractor (Planmatics) and its subcontractor (Westat). Planmatics/Westat will follow rigorous procedures for assuring and maintaining respondents’ confidentiality and for protecting the data, consistent with provisions of the Privacy Act.  In keeping with this, access to any data with identifying and confidential information will be limited only to contractor and subcontractor staff directly working on this evaluation. Access to any hard-copy documents with identifying information will be strictly limited. Interviewers will be trained in confidentiality procedures and will be prepared to describe these procedures in full detail, if needed, or to answer any related questions raised by respondents. Data with personal and company identification, to be used in selecting the sample, will be destroyed at the conclusion of the research.


Similar safeguards were provided to individuals who were interviewed during the site visits, though the names of the state directors were known to the Department of Labor, as their cooperation was requested.


Information in reports will be in aggregated form only, so that no specific sponsor, program, individual, or company will be identified or identifiable. Study results will appear only as grouped data and without individual identifiers. Findings will not be reported for cell sizes of less than five observations.

In keeping with the Confidential Information Protection and Statistical Efficiency Act of 2002, the following statement will be displayed on each questionnaire:

“The Department of Labor, its employees and agents, will use the information you provide for statistical purposes only and will hold the information in confidence to the full extent permitted by law. In accordance with the Confidential Information Protection and Statistical Efficiency Act of 2002 (title 5 of Public Law 107-347) and other applicable Federal laws, your response will not be disclosed in identifiable form without your informed consent.” A statement assuring confidentiality and the use of the information for statistical purposes will be read to phone respondents.

11. Provide additional justification for any questions of a sensitive nature, such as sexual behavior and attitudes, religious beliefs, and other matters that are commonly considered private. This justification should include the reasons why the agency considers these questions necessary; the specific uses to be made of the information, the explanation to be given to persons from whom the information is requested, and any steps to be taken to obtain their consent.

None of the questionnaire items will involve sensitive content, such as the topics referenced in this question.

12. Provide estimates of the hour burden of the collection of information.  The statement should:

Indicate the number of respondents, frequency of response, annual hour burden, and an explanation of how the burden was estimated.  Unless directed to do so, agencies should not conduct special surveys to obtain information on which to base hour burden estimates.  Consultation with a sample (fewer than 10) of potential respondents is desirable.  If the hour burden on respondents is expected to vary widely because of differences in activity, size, or complexity, show the range of estimated hour burden, and explain the reasons for the variance.  Generally, estimates should not include burden hours for customary and usual business practices.

This section has been revised from the information previously made available to the public during the public comment period ending October 6, 2006. The information has been changed to reflect the inclusion of site visits to five states, as a result of the clarification provided by OMB (please see above).

The total number of respondents for the survey and site visits is 1,338. The frequency of response for the survey and site visits is one time, and the total annual hour burden for the survey and the site visits is 622 hours.

The burden was estimated by determining an average time to administer the survey and using actual average times for individual and group interviews during the site visits (as found in the chart below).

The average hour burden for the survey was calculated by administering the original instrument to three contractor staff familiar with apprenticeship. The highest value, 17 minutes, was used to develop the estimate of burden. However, because of new items added as a result of comments received (see the response on question 8 above), the average time to conduct each survey is estimated to increase to about 18.5 minutes. The expected 1,144 responses are multiplied by 18.5 minutes and then divided by 60 minutes to get an annual hour burden of 353 total hours.

RESPONDENT HOUR BURDEN FOR THE APPRENTICESHIP EVALUATION


Activity | Total Respondents | Frequency | Average Minutes per Response | Burden Hours
Survey of Sponsors | 1,144 | One time | 18.5 | 353
Site Visits: | | | |
  State apprenticeship directors and staff | 19 | One time | 360 | 114
  Providers of related education (community college and training program administrators) | 29 | One time | 60 | 29
  One-Stop Center directors and staff | 14 | One time | 60 | 14
  Sponsors | 37 | One time | 60 | 37
  Other: WIB chairs and staff | 15 | One time | 60 | 15
  Apprentices | 80 | One time | 45 | 60
Totals | 1,338 | | | 622



Provide estimates of annualized cost to respondents for the hour burdens for collections of information, identifying and using appropriate wage and rate categories.  The cost of contracting out or paying outside parties for information collection activities should not be included here.  Instead, this cost should be included in Item 13.

The one-time cost to sponsors who are survey respondents is $14,466, based on the following calculation: (1,144 respondents x 18.5 minutes)/60 minutes = 353 hours; 353 hours x $40.98, the latest per hour estimate for managers as listed in the U.S. DOL/Bureau of Labor Statistics (BLS) National Compensation Survey database. Similarly, the cost to sponsors who responded during the site visits was calculated to be $1,516, based on (37 respondents x 60 minutes)/60 minutes = 37 hours; 37 hours x $40.98.

The cost to public administrators in state apprenticeship offices was $3,596, based on the following: (19 respondents x 360 minutes)/60 minutes = 114 hours; 114 hours x $31.54 (the average per hour rate for public administrators as found in the U.S. DOL/BLS National Compensation Survey database). The cost to administrators in One-Stop Centers and in WIBs was $915, based on (29 respondents x 60 minutes)/60 minutes = 29 hours; 29 hours x $31.54 (the same public administrator rate). The cost to administrators of related education programs was estimated at $1,079, based on: (29 respondents x 60 minutes)/60 minutes = 29 hours; 29 hours x $37.21 (the hourly rate as per the BLS National Compensation Survey).

The cost for apprentices’ time in focus group and individual interviews was estimated to be $809, as follows: 80 apprentices x 45 minutes/60 minutes = 60 hours x $13.48 (the average per hour first quarter wages for apprentices in the latest data available from the RAIS system).

Adding all these costs ($14,466 + $1,516 + $3,596 + $915 + $1,079 + $809) yields a total estimated cost for all respondents’ time of $22,381.
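
For illustration, the short Python sketch below reproduces the hour-burden and cost arithmetic above. The respondent counts, interview lengths, and hourly rates are those stated in the burden table and cost calculations; the grouping of One-Stop and WIB administrators into one line follows the cost calculation, and rounding to whole hours and dollars is an assumption about how the published figures were derived.

```python
# Sketch reproducing the respondent hour-burden and cost figures above.
groups = [
    # (label, respondents, minutes per response, hourly rate)
    ("Survey of sponsors",                        1144, 18.5, 40.98),
    ("Site-visit sponsors",                         37, 60,   40.98),
    ("State apprenticeship directors and staff",    19, 360,  31.54),
    ("One-Stop and WIB administrators",             29, 60,   31.54),
    ("Related-education administrators",            29, 60,   37.21),
    ("Apprentices",                                 80, 45,   13.48),
]

total_hours = 0
total_cost = 0
for label, n, minutes, rate in groups:
    hours = round(n * minutes / 60)   # whole burden hours, as in the table above
    cost = round(hours * rate)        # whole dollars, as in the text above
    total_hours += hours
    total_cost += cost
    print(f"{label:<44} {hours:>4} hours   ${cost:>7,}")

print(f"{'Total':<44} {total_hours:>4} hours   ${total_cost:>7,}")  # 622 hours, $22,381
```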

13. Provide an estimate for the total annual cost burden to respondents or record keepers resulting from the collection of information.  (Do not include the cost of any hour burden shown in Items 12 and 14).

The cost estimate should be split into two components: (a) a total capital and start-up cost component (annualized over its expected useful life) and (b) a total operation and maintenance and purchase of services component.  The estimates should take into account costs associated with generating, maintaining, and disclosing or providing the information.  Include descriptions of methods used to estimate major cost factors including system and technology acquisition, expected useful life of capital equipment, the discount rate(s), and the time period over which costs will be incurred.  Capital and start-up costs include, among other items, preparations for collecting information such as purchasing computers and software; monitoring, sampling, drilling and testing equipment; and record storage facilities.

The proposed information collection plan will not require the respondents to purchase equipment or services or to establish new data retrieval mechanisms. 

(a) We do not expect any total capital and start-up costs.

(b) We do not expect extensive time spent on generating, maintaining, and disclosing or providing the information.

If cost estimates are expected to vary widely, agencies should present ranges of cost burdens and explain the reasons for the variance.  The cost of purchasing or contracting out information collections services should be a part of this cost burden estimate.  In developing cost burden estimates, agencies may consult with a sample of respondents (fewer than 10), utilize the 60-day pre-OMB submission public comment process and use existing economic or regulatory impact analysis associated with the rulemaking containing the information collection, as appropriate.

The proposed information collection plan will not require the respondents to purchase equipment or services or to establish new data retrieval mechanisms.  These costs are not expected to vary.

Generally, estimates should not include purchases of equipment or services, or portions thereof, made: (1) prior to October 1, 1995, (2) to achieve regulatory compliance with requirements not associated with the information collection, (3) for reasons other than to provide information or keep records for the government, or (4) as part of customary and usual business or private practices.

We do not expect respondents to purchase equipment or services in order to respond to this information collection plan effort.

14.  Provide estimates of annualized costs to the Federal government.  Also, provide a description of the method used to estimate cost, which should include quantification of hours, operational expenses (such as equipment, overhead, printing, and support staff), and any other expense that would not have been incurred without this collection of information.  Agencies may also aggregate cost estimates from Items 12, 13, and 14 in a single table.

The total cost to the federal government of carrying out this study is $499,999 to be expended over 36 months ($166,666 averaged per year). Of this amount, data collection for the survey of sponsors will cost approximately $161,000 for senior research staff, programmers, database managers, interviewers, facilities, and indirect costs. The balance of approximately $338,999 is for the cost of developing a research design, carrying out analysis, conducting site visits to five states, preparing reports, and carrying out project management.

15. Explain the reasons for any program changes or adjustments reported in Items 13 or 14 of the OMB Form 83-I.

This evaluation represents a new collection of information and therefore a program change of 622 additional hours in the information collection budget for the Department of Labor.

16. For collections of information whose results will be published, outline plans for tabulation and publication.  Address any complex analytical techniques that will be used.  Provide the time schedule for the entire project, including beginning and end dates of the collection of information, completion of report, publication dates, and other actions.

Tabulation and Publication

The surveys will be administered once over the period covered by this clearance.  Results from the data collection will be tabulated, analyzed, and presented to DOL by the contractor in one report, displaying tabular presentations of results, such as the number and percentage of sponsors or programs that gave each of the various responses. More specifically, the report will provide: 1) an exposition of findings on key issues from site visits and the survey of sponsors, 2) a conclusion summarizing the key findings, areas for possible action, and implications for future research, 3) tables arraying findings from the data analysis, 4) description of the methods used in the data collection activities and analysis, and 5) question lists, protocols, and the survey instrument used in the study.

Analytical Techniques

Two types of analysis are anticipated for the survey data. One is descriptive, including computation of general descriptive statistics of both central tendency and variability. The analysis will primarily make use of frequency distributions (sample means and percentages) and cross-tabulations, which will provide basic information about apprenticeship sponsors’ responses. Categorical data analysis will be used to determine independence between variables of interest (in 2 x 2 or more general contingency tables), using chi-square tests of independence with appropriate adjustments for the sample design (Rao and Scott 1981; Rao and Scott 1984).
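
For illustration only, the sketch below shows the general form of such a design-adjusted test in Python, using the first-order Rao-Scott approach of dividing the Pearson statistic by an average design effect. The contingency table and the design effect shown are hypothetical placeholders, not survey results.

```python
# Illustrative sketch of a design-adjusted chi-square test of independence.
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Hypothetical weighted counts: program type (joint/unilateral) by a yes/no response.
table = np.array([[220.0, 180.0],
                  [150.0, 250.0]])

pearson_chi2, _, dof, _ = chi2_contingency(table, correction=False)

mean_design_effect = 1.3          # placeholder; would be estimated from the weights
rao_scott_chi2 = pearson_chi2 / mean_design_effect   # first-order Rao-Scott correction
p_value = chi2.sf(rao_scott_chi2, dof)

print(f"Pearson chi-square = {pearson_chi2:.2f}, adjusted = {rao_scott_chi2:.2f}, "
      f"df = {dof}, p = {p_value:.4f}")
```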

The second type of analysis is relational, or associative, which shows the extent of relationship among measures of interest. For example, the analysis would be used to explore differences in survey responses as related to the characteristics of sponsors or of the programs. The basic dependent measures will be categorical in nature. Approaches to the relational analysis allow separate terms for covariates and for interactions among variables, and for subsets of variables. Descriptive data will be used to evaluate any assumptions associated with the relational analysis modeling. Logistic regression will be used to measure the strength and direction of dependence relationships.
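
As an illustration of the relational analysis, the sketch below fits a weighted logistic regression in statsmodels. The data file and column names (would_recommend, stratum, program_type, n_apprentices, weight) are hypothetical; treating the survey weights as frequency weights is an approximation, and design-based standard errors would in practice come from the jackknife replicate weights described in section B.2.

```python
# Minimal sketch of a weighted logistic regression relating a binary survey
# response to sponsor characteristics. All file and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("sponsor_survey.csv")   # hypothetical analysis file

model = smf.glm(
    "would_recommend ~ C(stratum) + C(program_type) + n_apprentices",
    data=df,
    family=sm.families.Binomial(),
    freq_weights=df["weight"],           # survey weights treated as frequency weights
)
result = model.fit()
print(result.summary())                  # model-based (not design-based) standard errors
```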

There are a number of instrument items that are ordinal: the instrument requests the respondent to choose one category from five choices, with the categories representing an ordering by strength of opinion (e.g., 1 = poor to 5 = excellent). These items can be analyzed using cumulative logit models (see for example Agresti 1984, Section 6.3).

In all cases a weighted analysis will be done (using the sample weights), and appropriate adjustments made in standard errors and tests for effects of the stratification and of nonresponse adjustments as described in section B.2 below.

Time Schedule

Activity | Start Date | End Date
Draw sample and collect data | January 2006 | February 28, 2007
Edit, clean, weight, impute data and prepare data file | March 1, 2007 | March 16, 2007
Analysis of data | March 19, 2007 | April 13, 2007
Prepare draft final report | April 16, 2007 | May 21, 2007
Prepare final report | June 22, 2007 | June 29, 2007



17.  If seeking approval to not display the expiration date for OMB approval of the information collection, explain the reasons that display would be inappropriate.

ETA will display the OMB control number and expiration date for the survey under this clearance.   

18.     Explain each exception to the certification statement identified in Item 19, Certification for Paperwork Reduction Act Submissions, of OMB Form 83-I.

There are no exceptions to the certification statement.

B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

1.  Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used.  Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample.  Indicate expected response rates for the collection as a whole.  If the collection had been conducted previously, include the actual response rate achieved during the last collection.

The information below refers to the survey activity only, as the site visits did not utilize statistical methods.

Potential Respondent Universe

The potential respondent universe consists of all sponsors of active apprenticeship programs registered either with the state offices of DOL’s Office of Apprenticeship (OA) or with a recognized State Apprenticeship Agency or Council (SAA or SAC). The universe includes approximately 28,800 registered apprenticeship sponsors, as identified in the Registered Apprenticeship Information System (RAIS) database, which is managed by DOL and includes information on the number of programs and apprentices in all state entities.

The sampling frame will include states that fully participate in the RAIS database and seven additional states. States that fully participate in RAIS provide names of individual sponsors and industry codes, as well as information on the type of program (joint or unilateral), the number of active apprentices and completers in a program, and contact information for individual sponsors. Participation in the database is not mandatory for SAC/SAA states, however. The states that do not fully participate in RAIS provide summary information on apprentices and programs to RAIS and maintain their own databases at the state level.

At the time of this writing, 32 states fully participate in RAIS (of which 23 are OA states and nine are SAC/SAA states). In order to assure a sampling frame that is nationally representative of all sponsors, DOL will seek a one-time download of data from the seven states that have the largest numbers of programs among states that do not fully participate in RAIS. With these seven additional states, there will be a total of 39 states in the sampling frame, with an estimated 26,098 sponsors, or 90.6 percent of the universe. The universe and sampling frame are shown in Table 1 below.

Strata in the Sampling Frame

A stratified random sample of registered apprenticeship sponsors will be drawn, with three strata defined by the industrial code of the sponsor’s company. The strata include sponsors in: 1) smaller high growth industries, 2) construction industries, and 3) all other industries. The use of these strata will permit oversampling of sponsors in smaller high growth industries (which tend to have newer apprenticeship programs) and will make it easy to identify differences in sponsors’ views among the strata.

The number of sponsors in each stratum is available only for states that currently fully participate in the RAIS database. These distributions, and their expression as a percentage of the total number of sponsors in fully participating RAIS states, are provided in Table 1 below. Based on these percentages, estimates of the number of sponsors in each stratum, both for the seven additional states and for the total sampling frame, are included in Table 1. The size of each stratum will be adjusted when data from the additional states in the sampling frame are available, as the distribution by industry will likely differ somewhat from that in the 32 states currently fully participating in RAIS.

Table 1. Estimated counts of registered apprenticeship sponsors overall and by stratum in the sampling frame and universe (Data source: Final FY 2005 RAIS data). Italicized values are estimated.

Sponsor Industry Group | Sponsors in 32 fully participating RAIS states (current) | Percentage of sponsors by strata in 32 fully participating RAIS states | Estimated number of sponsors in 7 more states providing state data | Estimated total number of sponsors in sampling frame (39 states) | Total number of sponsors in all states (universe)
Total | 16,127 | 100.0% | 9,971 | 26,098 | 28,804
Stratum 1: Smaller High Growth | 3,289 | 20.4% | 2,034 | 5,323 | 5,874
Stratum 2: Construction | 7,140 | 44.3% | 4,417 | 11,557 | 12,753
Stratum 3: Other | 5,698 | 35.3% | 3,519 | 9,217 | 10,177
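
For illustration, the proration behind the estimated columns of Table 1 can be sketched as follows: the stratum shares observed in the 32 fully participating RAIS states are applied to the estimated 9,971 sponsors in the 7 additional states. Differences of a sponsor or two from the published figures reflect rounding of the percentages.

```python
# Sketch of the proration used for the estimated columns in Table 1.
rais_32_states = {"Smaller High Growth": 3289, "Construction": 7140, "Other": 5698}
additional_states_total = 9971           # estimated sponsors in the 7 additional states

total_32 = sum(rais_32_states.values())  # 16,127
for stratum, count in rais_32_states.items():
    share = round(count / total_32, 3)               # share to one decimal percent
    est_additional = round(share * additional_states_total)
    est_frame = count + est_additional               # estimated sampling-frame count
    print(f"{stratum:<22} share = {share:.1%}  7-state est. = {est_additional:>5,}  "
          f"frame est. = {est_frame:>6,}")
```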



Expected Response Rate for the Collection as a Whole

This data collection has not been previously conducted.  However, we expect to receive a response rate of 80 percent. If the response rate is below the 80 percent level we intend to re-contact non-responders. A nonresponse bias analysis will also be conducted and the sample base weights will be adjusted for nonresponse, as described in the response to B.3 below. 

2. Describe the procedures for the collection of information including: 

        Statistical methodology for stratification and sample selection,

        Estimation procedure,

        Degree of accuracy needed for the purpose described in the justification,

        Unusual problems requiring specialized sampling procedures, and

        Any use of periodic (less frequent than annual) data collection cycles to reduce burden.

Statistical Methodology for Stratification and Sample Selection

The sample design will be a stratified random sample, with the three strata corresponding to the domains of interest described above. The allocation of sample size to the strata will not be proportional, but instead will assign roughly equal sample sizes to each stratum. Equal sample sizes will allow for maximal precision for stratum-level analyses and comparisons, which are a primary focus of the analysis, while still providing adequate precision for overall analyses (combining the strata together).

The desired goals of the stratification and sample design will be to achieve representative samples of the full population of apprenticeship sponsors and of sponsors within each stratum.

In the discussion of “degree of accuracy” below, the power calculations for comparing strata differences show that effective sample sizes (nominal sample sizes divided by finite population corrections) of 400 per stratum provide tests with reasonable power. Table 2 presents an overall sample design using these effective sample sizes. The corresponding standard error for the overall population estimate is 1.5%.



Table 2. Proposed sample design and precision results.

Stratum | Percentage of population | Estimated population size | Assigned sample size | Effective sample size | Expected standard error for domains
High Growth | 20% | 5,323 | 372 | 400 | 2.50%
Construction | 45% | 11,557 | 387 | 400 | 2.50%
Other | 35% | 9,217 | 383 | 400 | 2.50%
Total | 100% | 26,098 | 1,142 | 1,104 | 1.50%



Note that the overall sample size is 1,142, with an effective sample size of 1,104, corresponding to a standard error for the overall population estimator of 1.5%. These sample design calculations will be updated when we receive the updated data files for sampling in August 2006.
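
For illustration, the per-stratum columns of Table 2 can be reproduced with the short calculation below; the population and sample sizes are those shown in the table, and the standard error is for an estimated percentage of 50%.

```python
# Sketch of the Table 2 precision calculations: the effective sample size is the
# nominal sample size divided by the finite population correction (1 - n/N).
import math

strata = {"High Growth": (5323, 372), "Construction": (11557, 387), "Other": (9217, 383)}

for name, (N, n) in strata.items():
    fpc = 1 - n / N                       # finite population correction
    n_eff = n / fpc                       # effective sample size (about 400 per stratum)
    se = math.sqrt(0.5 * 0.5 / n_eff)     # standard error of a 50% estimate
    print(f"{name:<12} n = {n:>3}  N = {N:>6,}  n_eff = {n_eff:5.0f}  SE = {se:.1%}")
```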

Table 3 below presents 95% confidence intervals that would be calculated for sample percentages of 50% for each stratum and for the overall population.

Table 3. 95% confidence intervals for sample percentages equal to 50%.

Stratum | Sample percentage | Standard error for sample percentage | Lower bound 95% confidence interval | Upper bound 95% confidence interval
Smaller High Growth | 50% | 2.5% | 45.1% | 54.9%
Construction | 50% | 2.5% | 45.1% | 54.9%
Other | 50% | 2.5% | 45.1% | 54.9%
All Strata | 50% | 1.5% | 47.1% | 52.9%



We expect a response rate in the 80% range. We also expect some loss due to the presence on the frame of programs that are no longer in operation or active. An 80% eligibility rate is posited for this expected loss. Table 4 presents these calculations for an 80% eligibility rate and an 80% response rate. A reserve sample of 50% more cases will be included, to be utilized if eligibility and/or response rates are lower than expected.





Table 4. Sample sizes for 80% eligibility and 80% response rates: final yield of 1,144 interviews.

Initial Sample | 1,784
Expected eligibility rate | 80%
Sponsors of Active programs | 1,428
Expected response rate | 80%
Completed Interviews | 1,142



Where possible, we will utilize systematic sampling within each state and primary stratum. Using systematic sampling, the frame of sponsors will be ordered (within each stratum) by industry type and/or size of program, where these are available. This will be a form of ‘implicit stratification’: the sample proportions for differing industry types and sizes of programs will have less variability around the expected proportions than would occur under simple random sampling. This lower variability will reduce the sample variance for characteristics which differ by industry type and size of program. All of the above calculations assume simple random sampling within strata, so where systematic sampling can be utilized these calculations can be viewed as conservative (indicating lower precision than will be achieved). Cochran (1977) Chapter 8 discusses systematic sampling and its benefits.
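
To make the systematic selection concrete, the sketch below draws a systematic sample from an ordered stratum frame using a random start and a fixed interval; the frame records and field names are hypothetical.

```python
# Sketch of systematic sampling within a stratum, with the frame first sorted by
# industry type and program size ("implicit stratification").
import random

def systematic_sample(frame, n):
    """Select n units from an ordered frame using a random start and fixed interval."""
    k = len(frame) / n                              # sampling interval
    start = random.uniform(0, k)
    picks = [int(start + i * k) for i in range(n)]
    return [frame[p] for p in picks]

# Hypothetical stratum frame: (sponsor_id, industry_code, active_apprentices)
stratum_frame = [(i, f"NAICS{i % 20:02d}", random.randint(1, 200)) for i in range(5323)]

# Sort by industry, then program size, so the systematic selection spreads the
# sample across these characteristics.
stratum_frame.sort(key=lambda rec: (rec[1], rec[2]))
sample = systematic_sample(stratum_frame, 372)
print(len(sample))                                   # 372
```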

Estimation Procedure

The primary descriptive statistics of interest are percentages of sponsors with particular characteristics. For example, the percentage of sponsors P with characteristic A (some characteristic of interest among sponsors) can be written in terms of the three strata as:

P = (N_1 P_1 + N_2 P_2 + N_3 P_3) / N = Σ_i (N_i / N) P_i,

with N_i the number of sponsors in stratum i, N the total number of sponsors, and P_i the percentage of sponsors with characteristic A in stratum i. If we write p_i as the sample percentage of sponsors with characteristic A in stratum i, then the unbiased estimator of P is as follows (see for example Cochran 1977, Section 5.3):

p_st = Σ_i (N_i / N) p_i, with estimated variance v(p_st) = Σ_i (N_i / N)^2 (1 - f_i) p_i (1 - p_i) / (n_i - 1),

where n_i is the sample size within stratum i and f_i is the sampling fraction n_i/N_i (the quantity 1 - f_i is the finite population correction); n_i / (1 - f_i) is the effective sample size for stratum i. Also of interest are differences between the three strata, which in the population are equal to D_ij = P_j - P_i. The unbiased estimator d_ij of D_ij and its standard error are as follows:

d_ij = p_j - p_i,   se(d_ij) = sqrt[ (1 - f_i) p_i (1 - p_i) / (n_i - 1) + (1 - f_j) p_j (1 - p_j) / (n_j - 1) ].

Note that this assumes that the sample percentages p_i and p_j are independent, which is the case under stratified random sampling.

There are many other parameters of interest, such as descriptive statistics for ordinal and continuous variables, regression coefficients, and correlations, for the full population and for domains. These all can be written as smooth functions (i.e., continuous and differentiable functions such as ratios) of weighted sums, such as the weighted percentages discussed above. For example, a mean for a domain is the weighted sum of the variable over the domain divided by the sum of the weights within the domain. A simple regression coefficient is the weighted sum of products of the x-values (predictor variable) and y-values (dependent variable), divided by the weighted sum of the squared x-values.
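
As a small worked illustration of these estimators, the sketch below computes the stratified percentage estimate, its standard error, and the standard error of a between-stratum difference; the population and sample sizes follow Table 2 and the stratum sample percentages are hypothetical.

```python
# Sketch of the stratified estimator and its standard error, using the formulas above.
import math

# (N_i, n_i, p_i): population size, sample size, hypothetical sample percentage
strata = {"High Growth": (5323, 372, 0.62), "Construction": (11557, 387, 0.55),
          "Other": (9217, 383, 0.48)}

N = sum(Ni for Ni, _, _ in strata.values())
p_st = sum((Ni / N) * pi for Ni, _, pi in strata.values())
var_pst = sum((Ni / N) ** 2 * (1 - ni / Ni) * pi * (1 - pi) / (ni - 1)
              for Ni, ni, pi in strata.values())
print(f"p_st = {p_st:.1%}, se = {math.sqrt(var_pst):.1%}")

# Difference between the High Growth and Construction strata and its standard error.
(N1, n1, p1), (N2, n2, p2) = strata["Construction"], strata["High Growth"]
d = p2 - p1
se_d = math.sqrt((1 - n1 / N1) * p1 * (1 - p1) / (n1 - 1)
                 + (1 - n2 / N2) * p2 * (1 - p2) / (n2 - 1))
print(f"difference = {d:.1%}, se = {se_d:.1%}")
```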


Variance estimation is an important part of the estimation process. We will generate replicate weights that will allow for consistent replicate variance estimates, matching, for percentages, the stratified variance estimator v(p_st) given above. We will use a jackknife methodology. The jackknife methodology gives consistent estimators in general, except for ‘nonsmooth’ quantities such as quantiles, which are not needed here (see for example Shao and Tu 1995, Section 2.3). The variance strata and variance units from the jackknife procedure will be included in the public-use file to facilitate their use in Taylor series linearization variance estimation approaches. The jackknife easily allows one to account for effects of nonresponse adjustments on the variance of estimators (see for example Yung and Rao 2000).
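
A minimal sketch of a stratified delete-one (JKn) jackknife of this kind is shown below; the data values, weights, and strata are simulated placeholders, and in the actual survey each replicate’s weights would also be re-run through the nonresponse adjustment.

```python
# Sketch of stratified delete-one (JKn) jackknife variance estimation.
import numpy as np

def jackknife_variance(y, weights, strata):
    """JKn variance of the weighted mean of y for a stratified sample."""
    full = np.sum(weights * y) / np.sum(weights)
    var = 0.0
    for h in np.unique(strata):
        idx = np.where(strata == h)[0]
        n_h = len(idx)
        for drop in idx:                       # one replicate per dropped unit
            w_rep = weights.copy()
            w_rep[drop] = 0.0                  # drop this unit
            w_rep[idx] *= n_h / (n_h - 1)      # reweight the rest of the stratum
            theta_rep = np.sum(w_rep * y) / np.sum(w_rep)
            var += (n_h - 1) / n_h * (theta_rep - full) ** 2
    return var

# Simulated placeholder data: three strata of 50 units each.
rng = np.random.default_rng(0)
strata = np.repeat([1, 2, 3], 50)
y = rng.binomial(1, 0.5, size=150).astype(float)
weights = rng.uniform(10, 30, size=150)
print(f"jackknife se = {np.sqrt(jackknife_variance(y, weights, strata)):.4f}")
```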

Degree of Accuracy Needed

The targeted precision is to achieve standard 95% confidence intervals with half-widths of no more than 5 percentage points for sample percentages for each stratum, and to allow for sufficient power (80% power) to detect differences of 10 percentage points or more between the strata. These precision requirements can be stated in the traditional framework of null and alternative hypotheses. The null hypothesis is that there is no difference in the percentages of characteristic A across strata i and j, and the alternative hypothesis is that there is a difference in percentages:

Ho: Pi(0)=Pj(0) H1: Pi(1)≠Pj(1)

The optimal test statistic for testing this null against this alternative is dij, with the critical region (the values of dij for which we reject the null and accept the alternative) of the form |dij|>k, for some k>0. This critical region should have probability under the null less than or equal to the significance level (probability of Type I error) which is assigned (we will use 5%). In the discussion below, we show that effective sample sizes of 400 in each stratum will allow us to detect differences in stratum percentages of 10 percentage points or more (e.g., 50% with characteristic A in one stratum and 60% with characteristic A in the other stratum) with 80% power (a 4 in 5 chance of perceiving the alternative when the alternative is true) when using tests with significance level 5% (only a 1 in 20 chance of rejecting the null when the null is in fact true).

The null hypothesis and alternative hypothesis are as follows:

Ho: Pi(0)=Pj(0) H1: Pi(1)≠Pj(1)

The test statistic is d_ij. The expected value of d_ij under the null hypothesis is 0, with estimated standard error

se_0(d_ij) = sqrt[ p̄ (1 - p̄) (1/n_eff,i + 1/n_eff,j) ],

where p̄ is a pooled sample percentage computed over the two strata (putting the samples together) and n_eff,i and n_eff,j are the effective sample sizes. A critical region for a two-sided test of the null hypothesis is |d_ij| > 1.96 se_0(d_ij). This will have an approximate significance level of 5% (the critical region’s approximate probability under the null). Under the alternative, d_ij has expectation Pj(1) - Pi(1), and estimated standard error of

se_1(d_ij) = sqrt[ Pi(1) (1 - Pi(1)) / n_eff,i + Pj(1) (1 - Pj(1)) / n_eff,j ].

The power of the critical region under the alternative is approximately 1 - Φ( [1.96 se_0(d_ij) - (Pj(1) - Pi(1))] / se_1(d_ij) ), where Φ is the standard normal cumulative distribution function. This is the probability of the upper part of the critical region under the alternative (note that the lower part of the critical region has negligible probability under the alternative). The critical region has approximately 80% probability under the alternative.

Table 5 presents three scenarios based on differing null population percentages, with effective sample sizes of 400. In all three cases an alternative with 80% power is presented, with the first stratum having a population percentage equal to the null percentage and the second stratum having a population percentage that is strictly larger. This second percentage is the smallest value for which there is 80% power. Note that the detectable difference (the difference of percentages under the alternative with 80% power) decreases as the common null percentage decreases, reflecting the smaller standard errors as the population percentages decrease towards zero.



Table 5. Power calculations for three scenarios.

Scenario / Stratum | Pooled null hypothesis sample percentage | Estimated standard error | Alpha = 5% two-sided acceptance region upper bound | Alternative hypothesis population percentages | Standard error of difference under alternative
Scenario 1 | | | | |
  Stratum 1 | 50.00% | 2.50% | | 50.00% | 2.50%
  Stratum 2 | 50.00% | 2.50% | | 59.90% | 2.45%
  Difference | 0.00% | 3.54% | 6.93% | 9.90% | 3.50%
Scenario 2 | | | | |
  Stratum 1 | 30.00% | 2.30% | | 30.00% | 2.29%
  Stratum 2 | 30.00% | 2.30% | | 39.20% | 2.44%
  Difference | 0.00% | 3.24% | 6.35% | 9.20% | 3.35%
Scenario 3 | | | | |
  Stratum 1 | 10.00% | 1.50% | | 10.00% | 1.50%
  Stratum 2 | 10.00% | 1.50% | | 16.20% | 1.84%
  Difference | 0.00% | 2.12% | 4.16% | 6.20% | 2.37%
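
As a numerical check on Scenario 1 of Table 5, the short calculation below applies the power formula given above with effective sample sizes of 400, a common null percentage of 50%, and an alternative difference of 9.9 percentage points.

```python
# Numerical check of Table 5, Scenario 1, using the power formula above.
from math import sqrt
from statistics import NormalDist

n_eff = 400
p_null = 0.50
p1_alt, p2_alt = 0.50, 0.599

se_null = sqrt(p_null * (1 - p_null) * (2 / n_eff))           # about 3.54%
critical = 1.96 * se_null                                     # about 6.93%
se_alt = sqrt(p1_alt * (1 - p1_alt) / n_eff
              + p2_alt * (1 - p2_alt) / n_eff)                # about 3.50%
power = 1 - NormalDist().cdf((critical - (p2_alt - p1_alt)) / se_alt)

print(f"se under null = {se_null:.2%}, critical value = {critical:.2%}, "
      f"se under alternative = {se_alt:.2%}, power = {power:.0%}")
```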



Unusual Problems

In the event that data are not available from all seven of the states that will be contacted to provide data, the size of the sampling universe will be decreased and the resulting conclusions will be applicable only to the respondents in the covered states.

Use of Periodic Data Collection Cycles

The survey will be administered only once.

3.   Describe methods to maximize response rates and to deal with issues of non-response.  The accuracy and reliability of information collected must be shown to be adequate for intended uses.  For collections based on sampling, a special justification must be provided for any collection that will not yield reliable data that can be generalized to the universe studied.

Methods to Maximize Response Rates

To maximize response rates, potential sponsor respondents will: 1) receive a letter notifying them of the survey and asking them to participate; 2) be contacted by telephone for recruitment as participants in this short survey; 3) be given the opportunity to respond to the survey during the recruitment call or to schedule another time for the interview; 4) be offered an online version of the survey, administered as a web-based instrument; and 5) receive an e-mail and a telephone follow-up call in the event they do not complete the online version of the survey. In the event that these efforts do not produce an 80% response rate, we will conduct an additional round of telephone follow-up and focus heavily on refusal conversion. As discussed below, a nonresponse bias analysis will be conducted and the sample base weights will be adjusted for nonresponse.

Methods to Deal with Issues of Non-Response

The nonresponse bias analysis will examine response rates by different subgroups of sponsors and comparison of non-respondents and respondents on frame variables. The sampling frame contains variables (such as industry, type of sponsorship, size of program sponsored, and geographic location) that are expected to be correlated with those being estimated in the survey. The difference between statistics from the full sample (using base weights) and statistics from respondents only will be taken as an indicator of nonresponse bias. Non-respondents will be classified by reason for nonresponse, such as no contact or refusal, and statistics will be computed for these classifications to identify sources of bias.

We plan to address the issue of nonresponse bias through the use of nonresponse adjustments in the weights. The theory behind our approach to nonresponse adjustment (and the approach generally used in nationally representative program surveys) is a 'quasi-randomization' paradigm (see, for example, Oh and Scheuren 1983), in which nonresponse is modeled as a pseudo-random process: programs within assigned nonresponse cells are assumed to share a uniform response rate (which then serves as an estimate of the theoretical propensity to respond), so that the responding programs constitute a simple random sample from the full program sample, with a sampling rate equal to the response rate. It is important in practice to find nonresponse cells in which response rates are uniform within each cell and heterogeneous across cells; the nonresponse bias analysis will inform this selection. Within the selected nonresponse cells c = 1, …, C, the nonresponse adjustment is the ratio of summed base weights for the full sample in the cell to summed base weights for the respondents in the cell, A_c = (Σ_{s∈Sc} w_s) / (Σ_{s∈Rc} w_s), where w_s is the base weight (the reciprocal of the probability of selection), Sc is the sample within cell c, and Rc is the set of respondents within cell c.
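
To make the adjustment concrete, the sketch below (Python with pandas; the sample data, column names, and cell labels are hypothetical) forms, within each nonresponse cell, the ratio of summed base weights for the full sample to summed base weights for the respondents, and applies that factor to the respondents' base weights.

    import pandas as pd

    # Hypothetical sample file: base weight, nonresponse cell assignment,
    # and a response indicator for each sampled sponsor.
    sample = pd.DataFrame({
        "base_weight": [10.0, 10.0, 15.0, 15.0, 15.0, 25.0],
        "cell":        ["A",  "A",  "A",  "B",  "B",  "B"],
        "responded":   [1,    0,    1,    1,    1,    0],
    })

    # Within each cell: sum of base weights over the full sample (Sc)
    # divided by the sum over respondents only (Rc).
    sum_all = sample.groupby("cell")["base_weight"].sum()
    sum_resp = sample[sample["responded"] == 1].groupby("cell")["base_weight"].sum()
    adjustment = sum_all / sum_resp

    # Nonresponse-adjusted weights: respondents carry the weight of the
    # nonrespondents in their cell; nonrespondents receive no final weight.
    respondents = sample[sample["responded"] == 1].copy()
    respondents["adjusted_weight"] = (
        respondents["base_weight"] * respondents["cell"].map(adjustment)
    )
    print(respondents[["cell", "base_weight", "adjusted_weight"]])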

4. Describe any tests of procedures or methods to be undertaken.  Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility.  Tests must be approved if they call for answers to identical questions from 10 or more respondents.  A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.

Description of Tests of Procedures or Methods

The phone and internet versions of the survey instrument will be pretested with nine or fewer respondents to assess the clarity, wording, organization, format, and potential sources of response error. The pretest will be used to modify the questionnaire as appropriate.

5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.

Individuals consulted on statistical aspects of the design include:

Louis Rizzo—Westat—301-294-4486

Frank Bennici—Westat—301-738-3608

Lalith deSilva—Planmatics—301-987-7441

Dave Auble—Planmatics—301-987-7441

DOL’s contractor, Planmatics, will be responsible for administering the survey and collecting the information. Both Planmatics and its subcontractor, Westat, will be involved in the analysis of the information.

References

Agresti, A. (1984). Analysis of Ordinal Categorical Data. New York: John Wiley & Sons.

Cochran, W. G. (1977). Sampling Techniques, 3rd Ed. New York: John Wiley & Sons.

Oh, H. L., and Scheuren, F. S. (1983). Weighting adjustments for unit nonresponse. In Incomplete Data in Sample Surveys, Vol. II: Theory and Annotated Bibliography (W. G. Madow, I. Olkin, and D. B. Rubin, eds.). New York: Academic Press.

Rao, J. N. K., and Scott, A. J. (1981). The analysis of categorical data from complex sample surveys: Chi-squared tests for goodness of fit and independence in two-way tables. Journal of the American Statistical Association, 76, 221-230.

Rao, J. N. K., and Scott, A. J. (1984). On chi-squared tests for multiway contingency tables with cell proportions estimated from survey data. Annals of Statistics, 12, 46-60.

Shao, J., and Tu, D. (1995). The Jackknife and Bootstrap. New York: Springer.

Yung, W., and Rao, J. N. K. (2000). Jackknife variance estimation under imputation for estimators using poststratification information. Journal of the American Statistical Association, 95, 903-915.

Statutory Authority for Evaluations:



WORKFORCE INVESTMENT ACT OF 1998, Public Law 105-220--Aug. 7, 1998, 112 Stat. 936


SEC. 172. EVALUATIONS.


(a) Programs and Activities Carried Out Under This Title.--For the purpose of improving the management and effectiveness of programs and activities carried out under this title, the Secretary shall provide for the continuing evaluation of the programs and activities, including those programs and activities carried out under section 171. Such evaluations shall address--

(1) the general effectiveness of such programs and activities in relation to their cost, including the extent to which the programs and activities--

(A) improve the employment competencies of participants in comparison to comparably-situated individuals who did not participate in such programs and activities; and

(B) to the extent feasible, increase the level of total employment over the level that would have existed in the absence of such programs and activities;

(2) the effectiveness of the performance measures relating to such programs and activities;

(3) the effectiveness of the structure and mechanisms for delivery of services through such programs and activities;

(4) the impact of the programs and activities on the community and participants involved;

(5) the impact of such programs and activities on related programs and activities;

(6) the extent to which such programs and activities meet the needs of various demographic groups; and

(7) such other factors as may be appropriate.

(b) Other Programs and Activities.--The Secretary may conduct evaluations of other federally funded employment-related programs and activities under other provisions of law.

(c) Techniques.--Evaluations conducted under this section shall utilize appropriate methodology and research designs, including the use of control groups chosen by scientific random assignment methodologies. The Secretary shall conduct at least 1 multisite control group evaluation under this section by the end of fiscal year 2005.

(d) Reports.--The entity carrying out an evaluation described in subsection (a) or (b) shall prepare and submit to the Secretary a draft report and a final report containing the results of the evaluation.

(e) Reports to Congress.--Not later than 30 days after the completion of such a draft report, the Secretary shall transmit the draft report to the Committee on Education and the Workforce of the House of Representatives and the Committee on Labor and Human Resources of the Senate. Not later than 60 days after the completion of such a final report, the Secretary shall transmit the final report to such committees of the Congress.

(f) Coordination.--The Secretary shall ensure the coordination of evaluations carried out by States pursuant to section 136(e) with the evaluations carried out under this section.



1 To be active, a program must have had an apprentice within the last two years.

2 This plan is slightly revised from that included in the Supporting Statement presented in conjunction with the Federal Register notice of August 7, 2006. At the time when the statement was prepared, DOL was hopeful that its efforts to recruit more states to participate in the RAIS would be successful and that only three – rather than seven – additional states would have to be contacted for this survey. This supporting statement has been changed to reflect in the narrative that seven states are being contacted and to update all numbers in Part B that needed to be changed as a result of this contingency. The overall statistical methodology remains unchanged at the time of this writing.

3 These states are California, Connecticut, Delaware, New York, North Carolina, Virginia and Wisconsin.

4 These industries include: Automotive, Biotechnology, Energy, Financial Services, Geospatial, Health Services, Homeland Security, Hospitality, Information Technology, Retail Trade, and Transportation. Several of these industries are also considered to be a subset of Advanced Manufacturing.

5 These industries include Agriculture, Mining, Communication, Wholesale Trade, and those industries in Advanced Manufacturing, Public Administration, and Service industries not included in stratum 1. Of the 5,698 programs in this stratum, Advanced Manufacturing represents 62 percent (3,558 programs), more than the total number of programs in stratum 1 (3,289).


