ATTACHMENT G: REVISED SUPPORTING STATEMENTS A & B
Supporting Statement A
Supporting Statement B
SUPPORTING STATEMENT A (Revised May 11, 2009)
(OMB File No. 53)
OMB No. 1615-NEW
E-Verify Non-User Survey and Employee-Employer Survey in Arizona
A. Justification
The E-Verify Program is a free employment eligibility confirmation system operated jointly by U.S. Citizenship and Immigration Services (USCIS) and the Social Security Administration (SSA). The E-Verify Program allows participating employers to electronically confirm the employment eligibility of newly hired employees to help maintain a stable, legal workforce. Authorization for this program expires on September 30, 2009, and Congress will consider alternatives for its reauthorization this summer. One of the primary options for reauthorization is to make E-Verify a mandatory program for over 7 million U.S. employers to verify the employment authorization status of all new hires.
USCIS continually evaluates the E-Verify Program to meet the program goals of:
Reducing unauthorized employment,
Reducing verification-related discrimination,
Protecting employee privacy and confidentiality, and
Minimizing employer burden.
Congress has consistently relied on these evaluations as benchmarks for legislative action, and the USCIS Verification Division depends on the survey results to make necessary program improvements. As part of this effort, USCIS plans to conduct two new studies to provide important information for the deliberations on whether to reauthorize and expand the E-Verify Program.
The following provides a brief description of these two new surveys:
E-Verify Non-User Survey – This survey will identify barriers to participation in the E-Verify Program by surveying employers not participating in the E-Verify Program to learn: (1) why they have not chosen to participate, (2) what problems they foresee with participating, and (3) what changes would make participation more attractive to them. This survey is essential because past evaluations have found that employers who are required to participate in the E-Verify Program have a greater tendency to violate provisions designed to protect worker rights and to fail to prevent unauthorized employment.
Employee-Employer Survey in Arizona – This survey will identify strengths and weaknesses of the E-Verify Program in a mandatory setting from both the employer and employee perspectives. This will greatly assist in moving the E-Verify Program from a small percentage of employers to a national mandatory program should Congress take that step in the fall of 2009.
The use of these surveys provides the most efficient means for collecting and processing the required data. USCIS will employ information technology in collecting and processing the information.
USCIS has a central review and approval process for all surveys, which prevents duplication. A review of the USCIS Forms Inventory Report revealed no duplication of effort, and no other similar information is currently available that can be used for these purposes.
The design of the survey will not have a significant impact on small businesses since it will take only a short time to complete. In addition, USCIS is offering an incentive to all respondents to help offset the time required to complete the surveys. (See Item 9 below.)
Consequences of Not Collecting the Information
Without these surveys, decisions about the design of any proposed mandatory or widespread voluntary national employment eligibility verification program will be based on outdated information.
The special circumstances contained in item 7 of the supporting statement are not applicable to this information collection.
USCIS is requesting emergency review for this information collection. Any public comments will be reconciled and addressed in the justification package with the second submission.
Consultants knowledgeable about issues related to immigration, employment, discrimination, and privacy were also employed by the contractors in order to provide advice for the earlier evaluations. They are as follows:
Joseph Drew, Southeastern University, Washington, D.C.
Michael Leeds, Temple University
Alison Konrad, Temple University
Matt Huffman, University of California, Irvine
Janet Spitz, St. Rose College
Barry Chiswick, University of Illinois at Chicago
The literature on the effectiveness of incentives in increasing response rates is extensive. We propose to offer workers $25 to increase the likelihood that they will complete the survey. (See Supporting Statement B for a justification of using incentives for workers.) Neither the employers who complete the web survey of nonusers nor the Arizona employers who participate in the interviews will receive a payment or gift.
Per the language in the contract, the Contractor owns the survey data:
“All identifiable hard copy and automated survey data collected and databases containing such information maintained by the Contractor for sole purpose of organizing and analyzing files/records developed as part of the evaluation will be the property of the Contractor to ensure the confidentiality and anonymity of the respondents.”
The following safeguards will be taken to ensure respondent confidentiality:
The study contractor will maintain the survey instruments and the microdata files and will not share data with the DHS about individually identifiable organizations and individuals, as specified in the contract between DHS and the contractor.
All contractor personnel working on the data collection efforts will sign an Assurance of Confidentiality Statement.
No public use microdata files containing data from this study will be issued.
The instruments in this package include a number of questions about whether employers and employees are engaging in illegal behavior. These questions are necessary because they will provide important information about the effectiveness and costs of the E-Verify Program as well as the implications of the E-Verify Program for discrimination and privacy.
Type of form and type of respondent | Anticipated respondents | Number of responses | Average Burden per Response (in hours) | Total Burden (in hours)
Web survey of nonusers | 2,250 | 1 | 0.333 (20 min.) | 749
AZ interview with employers | 100 | 1 | 2.00 (120 min.) | 200
AZ interview with employees | 450 | 1 | 1.00 (60 min.) | 450
Total | 2,800 | | | 1,399
There are no capital or start-up costs associated with these collections. Any cost burdens to respondents as a result of this collection are identified in question 14. There is no fee associated with this collection of information.
Printing Cost $ 0
Contract Cost $ 3,800,000
Collecting and Processing $ 100,000
Total Cost to Program $ 3,900,000
Fee Charge $ 0
Total Annual Cost to Government $ 3,900,000
The annual cost to the Government is $3,900,000. USCIS is obligated to pay $3.8 million for contractual services. This includes labor costs and operational expenses such as designing the surveys; determining sample design and selection; recruiting participants; printing materials; programming the web survey and Arizona employer and employee interviews; training field interviewers; conducting interviews with employees and employers; coding responses; paying for overhead, support staff, travel for case studies, and costs for data processing; compiling secondary data; performing software tests; interviewing federal, state, and local (Arizona) officials; conducting analysis; and preparing reports. In addition, an estimated cost of $100,000 a year is required for federal salaries and related expenses.
The cost to the public (respondents) associated with this information collection is detailed below.
Collection | Hourly wage | Burden hours | Total Cost | Incentive
Nonuser Survey | $48.00 | 749 | $35,952 | $0
AZ employer interview | $37.18 | 200 | $7,436 | $0

Annualized costs to the public for hour burden: E-Verify AZ worker interview

Collection | Hourly wage | Burden hours | Total Cost | Incentive | Number of Respondents | Offset Cost | Net Cost
AZ employee interview | $18.50 | 450 | $8,325 | $25 | 450 | $11,250 | $0¹

¹ The incentive of $25 × 450 respondents = $11,250 offsets the annualized cost of $8,325 for the worker data collection.
Since this is a new information collection, there is an increase of 1,399 burden hours to the OMB inventory.
The evaluation of E-Verify will consist of two main components: (1) a web data collection from nonusers of E-Verify as of May 1, 2009, and (2) case studies of employers and a sample of their employees. The time schedule for the conduct of the data collection, tabulation, analysis, and preparation of reports on the E-Verify evaluation is shown below:
Activity | Date to start | Date to complete

Data Collection Activities
Collect data for web survey of nonusers | 6/2/09 | 7/17/09
Conduct nonresponse followup | 7/20/09 | 8/21/09
Close data collection for web survey of nonusers | 9/18/09 | 9/18/09
Recruit interviewers for Arizona case studies | 5/1/09 | 6/12/09
Revise & review training materials for field interviewers | 4/17/09 | 5/5/09
Recruit employers for case studies | 5/28/09 | 7/31/09
Train field interviewers to conduct case studies | 7/11/09 | 7/17/09
Conduct case studies in Arizona | 7/21/09 | 10/2/09

Report Writing (Web Nonuser Survey)
Clean and analyze preliminary data | 9/21/09 | 10/16/09
Weight Web survey data | 10/6/09 | 10/20/09
Analyze weighted nonuser survey data | 10/21/09 | 11/25/09
Write first draft (Web survey) for USCIS review | 11/30/09 | 12/18/09
Prepare third & final draft & edit Web survey report | 2/15/10 | 3/9/10
Informal briefing for USCIS | 3/15/10 | 3/26/10

Report Writing (Case Studies)
Clean, organize, and enter qualitative data into software | 10/19/09 | 10/23/09
Analyze data | 10/26/09 | 12/11/09
Write first draft for USCIS review | 12/14/09 | 1/22/10
Prepare third & final draft & edit case studies | 3/18/10 | 4/1/10
Informal briefing for USCIS | 4/14/10 | 4/21/10
The key research topics addressed by the data collection efforts outlined above and the types of analyses required to address them are restated here for completeness:
Has E-Verify in Arizona been properly implemented? This requires descriptive and normative analyses (i.e., a description of the verification process and a comparison to the verification process intended by DHS). This question will be addressed through the case studies.
What are the financial costs and other burdens imposed by E-Verify or by alternatives to E-Verify? This requires both descriptive and causal analyses.
What features are important in employers’ decisions not to use E-Verify? This requires both descriptive and comparative analyses.
What are employers’ perceptions of the value of potential changes in E-Verify? This requires both descriptive and comparative analyses.
How does the program affect levels of discrimination in the workplace? This requires both descriptive and causal analyses.
How does the program affect the privacy and security of information on employees and employers? This requires both descriptive and causal analyses.
The analyses proposed to address these topics are described below.
The descriptive phase of the analysis will consist of descriptive statistics (e.g., percentages, means, medians, and standard deviations, as appropriate), cross-tabulations, and graphical summaries to describe the employee verification process, the characteristics and employment verification experiences of employers in the target population, and the results of the verifications from the DHS and SSA transaction databases. In addition, the descriptive analysis will provide a starting point for subsequent analyses. While these analyses will not establish causality, they will provide preliminary insight on the hypothesized relationships.
Analyses of major data elements of the program implementation will result in an overall picture of how employers that do not participate in E-Verify conduct their work authorizations, their perceptions of E-Verify, and their opinions concerning different features of E-Verify that are being implemented or may be implemented. For example, the survey will help to quantify the percentages of employers that do not use E-Verify because they lack adequate staff skills and fast Internet connections. As a rule, the data to be collected are categorical; however, means and medians may still be used based on scales that combine multiple responses (e.g., the number of tests used as part of the hiring process).
Some types of employers may have different employment practices and perceptions than other employers. For example, smaller businesses may do little hiring and have little expertise or resources to apply to checking on work authorization, and farms employing large numbers of temporary migrant workers may face logistical difficulties in using E-Verify. Comparisons of employers based on such differences will help to identify whether special accommodations would be beneficial for certain types of employers, and whether different types of media/communications should be targeted to particular categories of employers. Depending on the types of statistics being compared, tests of significance may be conducted using statistics such as chi-squared, t-tests, or logistic or multiple regression.
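As a hedged illustration of the significance testing mentioned above, the sketch below runs a chi-squared test of independence on a hypothetical size-class-by-response table; all counts are invented for illustration and do not reflect actual survey data.

```python
# Illustrative sketch only: chi-squared test of independence between employer
# size class and a yes/no survey response. All counts below are hypothetical.
from scipy.stats import chi2_contingency

# Rows: small, medium, large employers; columns: "yes", "no" (invented counts).
observed = [
    [120, 380],
    [150, 350],
    [210, 290],
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
```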
Modeling consists of statistical analysis involving a dependent or outcome variable and two or more independent or explanatory variables. In modeling, statistical control for confounding factors may be achieved by incorporating into the models one or more concomitant variables, in addition to the explanatory variables of interest. Partitioning out the variability in the dependent variable accounted for by the concomitant variables allows a more accurate assessment of the influence of the independent variables of interest.
The general approach to developing multivariate models will involve a series of steps. Preliminary determination of which variables would be of most theoretical interest and practical relevance for modeling will be based on a review of the findings from descriptive and comparative analyses. In addition, pairwise relationships between the independent variables and the dependent variables will be investigated using, as appropriate, chi-square analysis for categorical variables and correlation analysis for ratio and interval-level variables. Each variable of interest in the databases will be reviewed to determine its quality in terms of missing data. As appropriate, we will create composites of several items from the surveys by developing composite scales or combining items into new categorical variables. Scales can be created as weighted or unweighted sums of item scores, or factor analysis can be used to cluster items and develop weights. Examples of items that are suitable for scaling are employers’ perceptions of the program and experiences with the verification process.
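A minimal sketch of the unweighted composite scaling described above, assuming hypothetical Likert-type perception items (the item names are invented, not actual survey variables):

```python
# Illustrative sketch: an unweighted composite scale built as the mean of
# several perception items. Item names and values are hypothetical.
import pandas as pd

items = pd.DataFrame({
    "perception_q1": [4, 2, 5],
    "perception_q2": [3, 2, 4],
    "perception_q3": [5, 1, 4],
})

# Unweighted composite: row mean across items (real work would also need
# rules for missing items, and factor analysis could supply weights instead).
items["perception_scale"] = items.mean(axis=1)
print(items)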
Statistical modeling techniques include logistic regression for categorical dependent variables and linear regression for quantitative dependent variables. For example, we expect to use linear regression to investigate the factors related to employers’ perceptions of E-Verify.
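As a minimal sketch of such a model, assuming the Python statsmodels package and entirely fabricated data (the variable names heard_of_everify, log_employees, and sector are hypothetical placeholders, not study variables), a logistic regression might be fit as follows:

```python
# Illustrative sketch: logistic regression for a categorical outcome on
# employer characteristics. All data below are randomly generated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "heard_of_everify": rng.integers(0, 2, n),         # 0/1 outcome (fake)
    "log_employees": np.log(rng.integers(2, 500, n)),  # size proxy (fake)
    "sector": rng.choice(["agency", "high_undoc", "other"], n),
})

# C(sector) enters the three industry sectors as categorical dummies.
model = smf.logit("heard_of_everify ~ log_employees + C(sector)", data=df)
result = model.fit(disp=False)
print(result.summary())
```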
Information collected from the case studies is not designed to provide statistically valid results, but rather to provide a more in-depth understanding of how the E-Verify program affects employees in Arizona. This information will, therefore, be summarized and presented as illustrative of the types of situations that employers and employees might encounter during the verification process. This information is designed to supplement the information obtained in prior evaluations.
We also anticipate using content analysis to analyze responses to open-ended questions on the employer and employee interview protocols. Content analysis is a general term covering a variety of techniques for making inferences from different textual sources. Done correctly, content analysis produces a series of themes and patterns that can yield an in-depth understanding of complex patterns of interaction and behavior.
USCIS will display the OMB expiration date for this information collection.
USCIS does not request an exception to the certification of this information collection.
B. Collection of Information Employing Statistical Methods.
The agency should be prepared to justify its decision not to use statistical methods in any case where such methods might reduce burden or improve accuracy of results.
See Supporting Statement B
C. Certification and Signatures
PAPERWORK CERTIFICATIONS
In submitting this request for OMB approval, I certify that the requirements of the Privacy Act and OMB directives have been complied with including paperwork regulations, statistical standards or directives, and any other information policy directives promulgated under 5 CFR 1320.
______________________ ___________________
Stephen Tarragon Date
Deputy Chief,
Regulatory Products Division,
U.S. Citizenship and Immigration Services.
SUPPLEMENTAL
SUPPORTING STATEMENT B
OMB No. 1615-NEW
E-VERIFY DATA COLLECTIONS
Revised May 11, 2009
B. Collection of Information Employing Statistical Methods
This section discusses the statistical methods that we will use for both the web survey of nonusers and the Arizona case studies. Section B1 describes the statistical methods that will be used for the web survey of nonusers. Although employers and their employees for the Arizona case studies will be sampled, we will not be able to generalize the results to the population studied. Please see Section B3 for information on the sample design and expected response rates for the interviews with Arizona employers and their employees and justification for the case study data collection.
The target population of this survey includes all employers that are not enrolled in E-Verify. Puerto Rico and other U.S. territories, as well as the State of Arizona, which mandates the use of E-Verify for all employers, are excluded from the study. The domains of interest for the employer population are based on employer size classes within three industry sectors. The three industry sectors of interest are:
Employment agencies, temporary help services, and farm labor contractors;
Industries known to have relatively large percentages of undocumented workers;
All other industries.
Three industry sectors will be defined using the 2007 North American Industry Classification System (NAICS) codes. The size classes will be based on the number of employees (full-time and part-time) working in each company. The three size classes are:
Small (less than 15 employees),
Medium (15–99 employees), and
Large (100 or more employees).
In total, there are nine domains of interest established by three size classes within each of the three industry sectors.
The sampling frame will be MarketPlacePro, formerly known as the Dun’s Market Identifiers (DMI) register, maintained by Dun & Bradstreet (D&B). DMI covers all of the U.S. economy, and its coverage of most industries is quite complete. DMI, the single most comprehensive publicly available database covering business establishments, is updated monthly, and its coverage of the target population is relatively complete. The records contain the following fields: a D-U-N-S number; North American Industry Classification System (NAICS) code or Standard Industrial Classification (SIC) code; Federal Information Processing Standards (FIPS) state code; Standard Metropolitan Statistical Area (SMSA) code; number of employees at the location; total number of employees for the entire organization; a status indicator (single location, headquarters, or branch); a subsidiary indicator; D-U-N-S numbers of the domestic topmost firm, headquarters, and parent (if a subsidiary); and hierarchy and DIAS codes to identify the record’s location within the corporate structure.
DMI provides the option of choosing alternative organizational levels. The DMI list includes both headquarters and branch level records. DMI defines a headquarters as a business establishment that has branches or divisions reporting to it, and is financially responsible for those branches or divisions. We will include only the headquarters record for those employers with multiple branches. Therefore, the sampling units will be the single location companies (a business establishment with no branches or subsidiaries reporting to it) and the headquarters of the companies that have multiple branches. The headquarters record provides the total number of employees for the company, including the employees in the branches. It also provides the number of employees at that location.
Table B-1 shows the number of company records in the sampling frame by industry sectors and company employee size classes. Only the single location companies and headquarters of companies with multiple branches were used in this tabulation. That is, a company with a headquarters and multiple branches in different locations was included as a single unit in the tabulations. The number of employees for the headquarters refers to the total number of employees in the company, including the employees in the branches. The number of employees includes full-time and part-time employees.
Table B-1. Number of company records in the sampling frame, by industry sector and company employee size class

Industry sector | Less than 15 employees¹ | 15–99 employees² | 100 or more employees | Total
1: Employment agencies, temporary help services, and farm labor contractors | 42,983 | 5,230 | 1,766 | 49,979
2: Industries known to have relatively large percentages of undocumented workers | 1,794,604 | 442,002 | 21,958 | 2,258,564
3: Other industries | 6,876,356 | 834,904 | 121,837 | 7,833,097
Total | 8,713,943 | 1,282,136 | 145,561 | 10,141,640
¹ Since the D&B employee size includes owners/proprietors, companies with an employee size of 1 are excluded.
² Employers with unknown employee size are included in size class 15–99.
The sample design will generate a national probability sample of employers that have not enrolled in E-Verify. The survey will utilize a stratified random sample design. The employers will be stratified on the basis of industry and number of employees. The employment agencies, temporary help services, and industries known to have relatively large percentages of undocumented workers will be oversampled. Larger employers will also be oversampled. However, all employers will be selected with equal probability within each industry by size stratum.
In total, a sample of 4,000 company records will be selected from the sampling frame. About 20 percent of the sampled companies are expected to be ineligible. The reasons for ineligibility include being out of business, having no employees, or being enrolled in E-Verify. The expected response rate is 70 percent. Thus, we expect to obtain a total of 2,250 completed surveys. In each industry-by-size domain, the target is, on average, to achieve 250 completed surveys (Table B-2). Note that the sample draw sizes displayed in Table B-2 may be changed after we obtain updated frame counts (including the updated proportion of cases with unknown employee size) before we draw the sample.
Table B-2 shows establishment estimates by industry sector and employment size from the Census Bureau’s 2006 County Business Patterns (CBP). The CBP estimates do not include federal, state, and local government establishments, whereas D&B includes them. In Table B-2, the small size class had to be defined as less than 20 employees instead of less than 15 employees as in Table B-1.
Table B-2. Census Bureau 2006 County Business Patterns: number of establishments, by industry sector and employment size of the enterprise

Industry sector | Less than 20 employees | 20–99 employees | 100 or more employees | Total
1: Employment agencies, temporary help services, and farm labor contractors | 14,732 | 5,805 | 23,469 | 44,006
2: Industries known to have relatively large percentages of undocumented workers | 1,432,322 | 180,168 | 200,168 | 1,812,658
3: Other industries | 3,982,119 | 511,782 | 1,250,595 | 5,744,496
Total | 5,429,173 | 697,755 | 1,474,232 | 7,601,160
The survey of nonusers will be administered via the web to facilitate collection and data analysis processes. As described in Section B.3, we will use a variety of techniques to achieve a 70 percent response rate. The Arizona case studies will be conducted via a computer-assisted personal interviewing (CAPI) application administered by experienced, trained field interviewers. Section A.3 describes the advantages of using CAPI.
The sampling strata will be formed by three employee size classes within three industry sectors as described in Section B1. Three industry sectors will be defined based on the 2007 NAICS codes as shown in Table B-3.
The size classes, based on the total number of employees of each employer, will form a total of nine sampling strata.
Table B-3. Definition of industry sectors by 2007 NAICS code

Industry sector | 2007 NAICS code | Description of the 2007 NAICS code
1: Employment agencies, temporary help services, and farm labor contractors | 56131 | Employment Placement Agencies and Executive Search Services
 | 56132 | Temporary Help Services
 | 56133 | Professional Employer Organizations
 | 115115 | Farm Labor Contractors and Crew Leaders
2: Industries known to have relatively large percentages of undocumented workers | 11 minus 115115 | Agriculture, Forestry, Fishing and Hunting, excluding Farm Labor Contractors and Crew Leaders
 | 21 | Mining
 | 23 | Construction
 | 311 | Food Manufacturing
 | 5617 | Services to Buildings and Dwellings
 | 722 | Food Services and Drinking Places
 | 812 | Personal and Laundry Services
3: Other industries | All other NAICS codes | All other industries
The employers will be selected with equal probability within each size by industry stratum. The selection will be independent across the strata.
As mentioned earlier, the target sample size for the survey of nonusers is a total of 2,250 completed surveys. In each of the nine strata, the target is, on average, to achieve 250 completed surveys. However, the number of completed surveys realized can vary across the strata and thus be lower or higher than 250 in a given stratum.
The overall target response rate for the survey is 70 percent. Therefore, to obtain 2,250 completed surveys, we need to contact about 3,200 eligible employers. We expect about 20 percent of the employers selected from the DMI frame to be ineligible (including companies that are out of business, have no employees, or are already enrolled in E-Verify). Therefore, a sample of 4,000 employers is expected to be sufficient to obtain 2,250 completed surveys.
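The sketch below is a hedged illustration of this design, not the contractor’s actual selection program: it applies the stated eligibility (80 percent) and response (70 percent) assumptions to size the per-stratum draw and then selects an equal-probability sample within each stratum. The frame file and its column names (sector, size_class) are hypothetical.

```python
# Illustrative sketch of a stratified, equal-probability draw sized to yield
# about 250 completes per stratum. Frame file and column names are invented.
import pandas as pd

frame = pd.read_csv("dmi_frame.csv")  # hypothetical extract of the DMI frame
frame["stratum"] = frame["sector"].astype(str) + "_" + frame["size_class"].astype(str)

# Per-stratum draw: 250 completes / (0.80 eligible x 0.70 response) ~= 446,
# close to the text's overall 4,000 / 9 strata ~= 444.
draw_per_stratum = round(250 / (0.80 * 0.70))

sample = (
    frame.groupby("stratum", group_keys=False)
         .apply(lambda s: s.sample(n=min(draw_per_stratum, len(s)), random_state=1))
)
print(sample["stratum"].value_counts())
```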
The population parameters of interest are mostly in the form of totals or proportions. For example, in the survey of nonusers, one such proportion can be the percentage of employers that have heard of E-Verify in a given industry-by-size domain. An estimate of the percentage of nonuser employers who are familiar with E-Verify in industry-by-size stratum h can be obtained as:

\hat{p}_h = 100 \times \frac{\sum_{i \in S_h} w_{hi} \, y_{hi}}{\sum_{i \in S_h} w_{hi}}

where:

S_h is the set of responding nonuser employers in stratum h;

w_hi is the nonresponse-adjusted sampling weight attached to responding nonuser employer i in stratum h; and

y_hi is an indicator equal to 1 if responding nonuser employer i in stratum h reports being familiar with E-Verify, and 0 otherwise.
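A minimal numeric sketch of this estimator for a single stratum, using invented weights and indicators (the real w_hi would come from the weighting steps described below):

```python
# Illustrative only: the weighted percentage estimator above for one stratum.
# Weights and indicators are invented values.
import numpy as np

w = np.array([120.5, 98.0, 143.2, 110.7])  # nonresponse-adjusted weights w_hi
y = np.array([1, 0, 1, 1])                 # familiarity indicators y_hi

p_hat = 100 * np.sum(w * y) / np.sum(w)
print(f"Estimated percentage familiar with E-Verify: {p_hat:.1f}")
```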
Note that we recommend computing the survey estimates using the sampling weights as described in the above example. The sampling weights, if properly adjusted for nonresponse, can reduce potential nonresponse bias in the survey substantially.
A sample size yielding 250 completed surveys in an industry-by-size stratum should be sufficient to provide reasonable precision for estimates of proportions in that stratum. The sampling error for a 50 percent proportion obtained from a sample of 250 employers should not exceed 6.2 percent with a 95 percent confidence interval (the sampling error is obtained by multiplying the expected standard error by 1.96). The percent sampling errors depend on the sample size and the magnitude of the population percentage to be estimated. For a given sample size, the percent error is largest for a 50 percent population proportion and decreases as the proportion moves further away from the 50 percent/50 percent split. For example, for a population proportion of 20 percent (or 80 percent) with a sample size of 250, the sampling error will be less than 5 percent. The sampling errors will be smaller for estimates of proportions produced for overall industry sectors.
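These precision figures can be verified directly; the short check below recomputes them under the normal approximation (margin = 1.96 × standard error of a proportion):

```python
# Worked check of the precision claims above for n = 250.
import math

def margin(p, n, z=1.96):
    """Half-width of a normal-approximation confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"p = 0.50: +/-{margin(0.50, 250):.3f}")  # ~0.062, the 6.2 percent cited
print(f"p = 0.20: +/-{margin(0.20, 250):.3f}")  # ~0.050, under the 5 percent cited
```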
The sampling weights will be attached to every eligible employer record with a completed survey (1) to account for differential probabilities of selection, and (2) to reduce the potential bias resulting from nonresponse. Each sample employer with a completed survey will be assigned a final weight.
Initially, we will assign a base weight to each sample employer record as the reciprocal of its probability of its selection. The base weights will then be adjusted for nonresponse in order to reduce potential biases resulting from not obtaining a completed survey with every employer in the sample. These adjustments will be made by redistributing the weights of nonresponding employers to responding employers with similar propensities for response. A predictive model for response propensity will be developed to identify subgroups of population with differential response rates. These subgroups will then be used as nonresponse adjustment cells and a separate weight adjustment will be applied in each cell. The potential predictors that can be used in this modeling effort have to be known for both respondents and nonrespondents. These include industry sector, employee size, single location or headquarters status, census region, and MSA/non-MSA status.
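As a hedged sketch of the cell-based weight redistribution described above (the actual adjustment cells would come from the CHAID modeling discussed next; the cells and weights here are invented):

```python
# Illustrative sketch: within each nonresponse adjustment cell, respondents'
# base weights are inflated by the ratio of the cell's total base weight to
# its respondents' base weight. All values below are invented.
import pandas as pd

df = pd.DataFrame({
    "cell":      ["A", "A", "A", "B", "B"],
    "base_w":    [100.0, 100.0, 100.0, 250.0, 250.0],
    "responded": [1, 1, 0, 1, 0],
})

cell_total = df.groupby("cell")["base_w"].transform("sum")
resp_total = (
    df.assign(rw=df["base_w"] * df["responded"])
      .groupby("cell")["rw"].transform("sum")
)

# Respondents absorb the nonrespondents' weight; nonrespondents get weight 0.
df["adj_w"] = df["base_w"] * (cell_total / resp_total) * df["responded"]
print(df)
```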
If response propensity is independent of survey estimates within nonresponse adjustment cells, then nonresponse-adjusted weights yield unbiased estimates. There are several alternative methods of forming nonresponse adjustment cells to achieve this result. We plan to use Chi-Square Automatic Interaction Detector (CHAID) software¹ to guide us in forming the cells. CHAID partitions data into homogeneous subsets with respect to response propensity. To accomplish this, it first merges values of the individual predictors that are statistically homogeneous with respect to response propensity and maintains all other heterogeneous values. It then selects the most significant predictor (with the smallest p-value) as the best predictor of response propensity and thus forms the first branch in the decision tree. It continues applying the same process within the subgroups (nodes) defined by the "best" predictor chosen in the preceding step. This process continues until no significant predictor is found or a specified minimum node size (about 20) is reached. The procedure is stepwise and creates a hierarchical tree-like structure.
Although nonresponse adjustment can reduce bias, at the same time it may increase the variance of estimates. Small adjustment cells and/or low response rates (or large nonresponse adjustment factors) may increase the variance and give rise to unstable estimates. In order to prevent an unduly large increase in variance and thereby an adverse effect on the mean square error of the estimates, we plan to limit the size of the smallest cell to a minimum and avoid large adjustment factors.
The estimates of standard errors in the nonuser survey can be obtained using variance estimation software such as SAS-callable SUDAAN or WesVar. SUDAAN provides variance estimation procedures using both the Taylor series linearization method and replication methods. WesVar uses only replication methods. The replication method requires the development of a replication scheme and computation of replicate weights. We propose to use SAS-callable SUDAAN with the Taylor linearization procedure, which requires less effort to obtain the standard errors of the survey estimates. The estimators in this survey are in the form of totals, means, and proportions, and the Taylor linearization approach is appropriate for these types of estimators.
We do not anticipate any unusual problems requiring specialized sampling procedures.
USCIS requires more frequent data collections to evaluate a growing program that has critical implications for immigration policy and reform. However, the last survey of nonusers was conducted in 1999. The last data collection for users of the E-Verify program was conducted in 2007; however, only a few of the respondents resided in Arizona, a state where E-Verify is now mandated.
The techniques that will be used to achieve high response rates for the survey of nonusers are:
Motivational material
Obtain letters of endorsement from one or more national professional employer organizations such as the National Chamber of Commerce, the National Small Business Association, the National Payroll Association, and the National Association of Manufacturers;
Create a professional image for the study through a well designed and user-friendly website for the web survey of nonusers;
Emphasize the importance of participation towards shaping future directions in a mandatory or a continued voluntary Federal immigration policy;
Emphasize the steps that will be taken to ensure respondent confidentiality; and
Use language appropriate for the target population.
Aggressive followup. One of the major factors that increases study response rates is the use of aggressive followup procedures to gain cooperation with the study. The web survey of nonusers therefore includes multiple contacts with selected respondents. More specifically, the data collection procedures for nonusers consist of the following steps:
A personalized letter will be sent to all contact people, followed by any letters of endorsement described above. This packet will be from the contractor for the survey of nonusers, since nonusers would not necessarily be familiar with USCIS or E-Verify. The letter will stress both the importance of participation to future employment verification efforts and the fact that DHS will only use the information for research purposes.
If the mailing results in a response that the address is no longer valid, a letter or email will be sent to the alternate contact person, if any.
If no address or e-mail is provided for the contact person, or if there is no alternate contact person for an invalid e-mail address, phone interviewers will call the company to determine the correct contact person and, if possible, obtain the name and contact information of an alternate person who will be responsible for the study if the primary contact person is not available.
A reminder e-mail or letter will be sent to contact persons approximately one week after the initial contact.
Approximately two weeks after the reminder email, phone interviewers will contact nonrespondents. Reasons for nonresponse will be requested and participation will be encouraged. Information on how to access the web survey site will be provided, if necessary.
A second phone reminder will be made approximately two weeks after the first phone reminder. At that time, the interviewer will offer to complete the survey by phone if the respondent prefers to answer in this fashion.
Approximately four weeks after the second phone reminder, a Federal Express packet will be sent to the remaining contacts who have not responded to any of the previous mail or e-mail correspondence or phone calls and who are not hard refusals.
Training. All individuals who will be contacting potential respondents by phone or email and conducting telephone interviews will be trained in ways to optimize response. In addition to general survey procedures, they will be trained to respond to specific questions that are likely to be raised in this study.
Nonresponse conversion. Experienced interviewers who are particularly skilled in nonresponse conversion will re-contact initial refusals. The major exception to this rule is for hard refusals.
Editing and data cleaning. A number of editing features will be built into the web survey. For example, if the respondent attempts to provide multiple answers to a question requiring a single response, the respondent will be asked to select only one response. Additional editing checks will be done subsequent to survey completion to check for completeness, inter-item consistency, extraneous remarks, and proper adherence to any skip instructions.
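A minimal sketch of such post-collection edit checks, with hypothetical field names and skip logic (none of these are actual instrument variables):

```python
# Illustrative sketch: flag multi-answer violations on a single-response item
# and skip-pattern inconsistencies. Field names and data are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "q1_single": [["a"], ["a", "b"], ["c"]],        # should hold exactly one answer
    "uses_everify": ["no", "yes", "no"],
    "q_everify_detail": [None, "daily", "weekly"],  # should be blank when uses_everify == "no"
})

multi_answer = responses["q1_single"].str.len() != 1
skip_violation = (responses["uses_everify"] == "no") & responses["q_everify_detail"].notna()

print("Multiple answers on single-response item:", list(responses.index[multi_answer]))
print("Skip-pattern violations:", list(responses.index[skip_violation]))
```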
Pretesting. A combination of focus groups and individual interviews has been used to obtain input on what factors are likely to motivate response to the surveys in the target populations. In addition, lessons learned in the earlier data collections will be incorporated in the E-Verify data collections to improve respondent cooperation.
In addition to using the above procedures to increase response rates, for the Arizona case studies, an incentive of $25 will be offered to workers to complete the interviews. Based on our previous data collection experiences with similar workers, we expect a large number of them to be undocumented immigrants, who may fear their identity and status will be disclosed. This could occur especially since their co-workers may share with them that we are asking questions about their work status and experiences in obtaining employment. Since this population is difficult to locate, once they are found, it is especially important to be able to offer them tangible encouragement to participate in the study.
As mentioned above, another one of the most challenging aspects of achieving good response rates for the case studies is locating workers who are no longer employed at the sampled companies. Therefore, we will use the employer’s records and a tracing service (e.g., Peachtree, Accurint) to locate the most recent contact information. Additionally, we learned during the conduct of last year’s case studies that the contractor’s experienced field interviewers and supervisors were resourceful in searching the Internet for contact information and making discreet inquiries of neighbors, friends, etc., about how to reach employees. Finally, as E-Verify users, these employers have signed an MOU with DHS and have agreed to cooperate with DHS and SSA designees’ inquiries about the E-Verify program. Specifically, the MOU states the employer’s responsibilities as follows:
The Employer agrees to cooperate with DHS and SSA in their compliance monitoring and evaluation of E-Verify, including by permitting DHS and SSA, upon reasonable notice, to review Forms I-9 and other employment records and to interview it and its employees regarding the Employer’s use of E-Verify, and to respond in a timely and accurate manner to DHS requests for information relating to their participation in E-Verify.
Please see section B2, Sample Weights and Estimation Procedures for a description of the approach to dealing with nonresponse bias.
For the case study portion of the evaluation, we expect to sample the Arizona employers that had at least three tentative nonconfirmation (TNC) findings within the 3 months prior to sample selection. Based on our experiences in the fiscal year 2008 evaluation, this should yield a completed sample of approximately 100 employer cases. The sample will be stratified based on the number of employees and industry. Interviews and record reviews for employees with tentative nonconfirmation findings within the 3 months prior to sample selection will be conducted for each of the employers selected for the employer sample.
Additionally, a random sample of up to 20 employees will be selected from each sampled employer. For employers with 20 or fewer tentative nonconfirmation employees in the 3 months prior to review, all such workers will be selected for record review and employee interviews. For employers with more than 20 eligible employees, a random sample of 20 employees will be selected, as sketched after Table B-4. We anticipate that approximately 2,000 workers will be sampled and that we will conduct 450 employee interviews. Table B-4 shows the universe, sample, and response rates expected for each of the interviews to be conducted in Arizona.
Table B-4. Expected universe, sample, and response rates for the Arizona interviews

Collection | Universe* | Sample | Responses (number) | Response rate (percent)
Employer interview | 540 | 540 | 100 | 18
Worker interview | 50,000 | 2,000 | 450 | 22

* The universe of employers is defined as Arizona employers that have received at least 3 TNCs in the 3 months prior to sample selection. Data collected from case studies will not be generalized to the universe.
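The within-employer selection rule described above (take all TNC employees when there are 20 or fewer, otherwise draw 20 at random) can be sketched as follows; the function name and data are hypothetical:

```python
# Illustrative sketch of the take-all / sample-20 employee selection rule.
import random

def select_employees(tnc_employees, cap=20, seed=1):
    """Return all employees if there are <= cap, else a simple random sample of cap."""
    if len(tnc_employees) <= cap:
        return list(tnc_employees)
    rng = random.Random(seed)
    return rng.sample(list(tnc_employees), cap)

print(len(select_employees(range(12))))   # 12: all selected
print(len(select_employees(range(85))))   # 20: random subsample
```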
Based on our prior experience in which we used incentives and extensive follow-up procedures, we do not believe that it is feasible to obtain a sufficiently high response rate to permit inferences from the sample to the entire population. In the 2008 evaluation, we achieved an unweighted 37 percent response rate for employees due to the inability to locate the sampled employees. Employee contact information either was missing or incorrect and accurate updated information was unavailable from the employer, the tracing service, or neighbors. In a few cases, interviewers were fairly certain that the person they were trying to interview was the sampled employee, but the person denied that the identification was correct. Finally, a few workers refused to participate because they were afraid of employer retribution (i.e., they would be fired if their employer discovered they participated in the interview).
The purpose of the case studies is to examine in depth the procedures that employers and workers follow in the verification process, not to produce representative statistics. We are using sampling to ensure that a variety of employer/employee situations are examined, but do not require the statistics to be generalized in order to identify problems and potential solutions in the verification process. Also, we do not have a particular interest in providing statistics on the State of Arizona, but rather have chosen Arizona because it is the first state to fully implement a mandate that all employers use E-Verify. In this context, the case studies will help identify the problems and situations that would occur if/when using E-Verify is mandated in other states or in the entire nation. Statistics that are representative of Arizona may not necessarily be representative in other states, and thus our interest is in identifying problems and solutions rather than providing statistics that can be generalized. A nationally representative sample of E-Verify users is planned for next year, and will be discussed in a separate OMB submission at that time.
The web survey and the Arizona case study interview instruments submitted in this request for clearance are largely based on instruments used in last year’s evaluations, though some changes have been made to accommodate the differences in programs and scope of the current studies. Since the instruments were effective last year, we have considerable evidence that the questions will again be effective this year. In addition, Westat conducted focus groups with nine participants from selected employers on the survey of nonusers. Through that pretest, we identified minor issues involving the wording of particular questions and have revised the instruments accordingly. We also conducted a stakeholders conference in Arizona to examine reactions of employers to the new mandate, and have used the information to further improve the case study instruments. After the CAPI programming of the case study instruments is completed, we will pretest the CAPI instruments with an E-Verify employer. The primary focus of that pretest will be on whether there are any difficulties with the CAPI programming, but data from that pretest will also provide one last additional test of the instruments.
Huseyin Goksel
Senior Statistician
Westat
1650 Research Blvd., RE 488
Rockville, MD 20850
301-251-4395
Denise Glover
Senior Study Director
Westat
1650 Research Blvd., TA 2128
Rockville, MD 20850
301-251-2269
Joan Michie
Senior Study Director
Westat
1650 Research Blvd., TA 2102
Rockville, MD 20841
301-294-2014
Bradford Chaney
Senior Study Director
Westat
1650 Research Blvd., TA 2002
Rockville, MD 20850
301-294-3946
Carolyn Shettle
Senior Study Director
Westat
1650 Research Blvd., TA 2058
Rockville, MD 20850
301-251-4324
¹ SPSS for Windows: CHAID, Release 6.0, User’s Guide. Jay Magidson/SPSS Inc., 1993.