Senior Community Service Employment Program (SCSEP) Performance Measurement System

OMB: 1205-0040

Part B

Customer Satisfaction Survey Supplemental Supporting Statement
Title: Using the American Customer Satisfaction Index to Measure Customer
Satisfaction in the Senior Community Service Employment Program
Abstract:
The 2006 amendments to Title V of the Older Americans Act (OAA-2006, Pub. L.109-365) require that
customer satisfaction surveys be conducted for all three customer groups: participants, host agencies,
and employers. The Employment and Training Administration (ETA) is using the American Customer
Satisfaction Index (ACSI) to meet the customer satisfaction measurement needs of several ETA
programs including the Senior Community Service Employment Program (SCSEP). SCSEP has been
conducting these surveys nationwide since 2004. The survey approach allows the program flexibility
and, at the same time, captures common customer satisfaction information that can be aggregated and
compared among national and state grantees. The measure is created with a small set of core
questions that form a customer satisfaction index. The index is created by combining scores from three
specific questions that address different dimensions of customers' experience. Additional questions
that do not affect the assessment of grantee performance are included to allow grantees to effectively
manage the program.
The ACSI is a widely used customer satisfaction measurement approach. It is used extensively in the
business communities in Europe and the United States, including more than 200 companies in 44
industries. In addition, over 100 Federal government agencies have used ACSI to measure citizen
satisfaction with more than 200 services and programs.
The ACSI allows the SCSEP program to not only look at performance within the system, but also to gain
perspective on SCSEP’s performance by benchmarking against organizations and industries outside of
the workforce system. The ACSI also has a history of being useful in tracking change in customer
satisfaction over time, making it an ideal way to gauge grantees’ progress toward continuously
improving performance.
Since the ACSI trademark is the property of the University of Michigan and the Claes Fornell
International Group (CFI), SCSEP has established a license agreement with the University of Michigan
that allows the use of the ACSI for samples of participants, host agencies, and employers at the
nationwide and grantee levels.
The 2000 amendments to the OAA designated customer satisfaction as one of the core SCSEP
measures for which each grantee had negotiated goals and for which sanctions could be applied. In the
first year of the surveys, Program Year (PY) 2004, baseline data were collected. The following year, PY
2005, was the first year when evaluation and sanctions were possible. Because of changes made by
the 2006 amendments to the OAA, starting with PY2007, the customer satisfaction measures have
become additional measures (rather than core measures), for which there are no goals and, hence, no
sanctions.

Compliance with 5 CFR 1320.8 Yes _X_ No ____
Consultation with persons outside the Department of Labor:
Name: Barry A. Goff, Ph.D.
Telephone No.: (860) 659-8743
Agency/Company: The Charter Oak Group, LLC

Pretest Conducted: 18 people in 2003; over 140,000 respondents have completed the surveys
since 2004.


Assurances of Confidentiality:

A statement assuring confidentiality is included as follows:

For employers: “The Older Worker Program, also known as the Senior Community Service
Employment Program (SCSEP), wants to provide the highest quality services to its customers. You can
help us improve our services by answering the following questions. Please be completely honest. Your
answers will be strictly confidential. Unless the question directs you otherwise, please answer each
question on the basis of your most recent experience with the Older Worker Program.”
For participants: “The Older Worker Program, also known as the Senior Community Service
Employment Program (SCSEP), wants to provide the highest quality services to its customers. You can
help us improve our services by answering the following questions. Please be completely honest. Your
answers will be strictly confidential. No one in the agency will see your individual responses.”
For host agencies: “The Older Worker Program, also known as the Senior Community Service
Employment Program (SCSEP), wants to provide the highest quality services to its customers. You can
help improve services by answering the following questions. Please be completely honest. Your
answers are strictly confidential. No one in the agency will see your individual responses. Unless
directed otherwise, please answer based on your most recent experience with the Older Worker
Program.”
In addition, no individual grantee reports are provided where the number of respondents is fewer than
20, and no results are reported for any individual question where there are fewer than 10 respondents.
Given the sample selection criteria, a sample small enough to compromise confidentiality is a potential concern only at the grantee level for the employer survey.

Annual Federal Cost:

Activity Category                                    Cost
Administration, mail house, license, and postage     $425,000
Scanning and processing of completed surveys         $20,000
Analysis and reporting of results                    $40,000
TOTAL                                                $485,000

Burden Estimates:
There are three customer groups being surveyed with separate survey instruments. The surveys vary
slightly in length depending on the customer group. The estimates are based on two different methods
of administration. A central mail house mails surveys to the participant and host agency respondents on
behalf of the grantees. The sub-grantees hand-deliver the employer surveys. Details of these methods
are included in the methodology section.
Participants and Host Agencies
Number of Respondents: 27,000
Frequency: Annually
Average Minutes/Hours per response: 6 minutes
Estimated burden hour costs: 0
Total burden hours: 2,700


Employers
Number of Respondents: 3,800
Frequency: Ongoing throughout the year
Average Minutes/Hours per response: 6 minutes
Estimated burden hour costs: 0
Total burden hours: 440

Requested expiration date: Three years from approval of the ICR package

Statistical Methodology
A. Measuring SCSEP Participant and Host Agency Customer Satisfaction
Participants
The weighted average of participant ratings on each of the three questions regarding overall satisfaction
is reported on a 0-100 point scale. The score is a weighted average, not a percentage.
Host Agencies
The weighted average of host agency ratings on each of the three questions regarding overall
satisfaction is reported on a 0-100 point scale. The score is a weighted average, not a percentage.
1. Who Will Be Surveyed?
Participants
All SCSEP participants who are active at the time of the survey or have been active in the preceding
12 months are eligible to be chosen for inclusion in the random sample of records.
Host Agency Contacts
Host agencies are public agencies, units of government and non-profit agencies that provide
subsidized employment, training, and related services to SCSEP participants. All host agencies that
are active at the time of the survey or that have been active in the preceding 12 months are eligible
for inclusion in the sample of records.
2. How Many (number to be surveyed)?
For each state grantee, two hundred and fifty completed surveys should be obtained each year for
both participants and host agencies. At least 250 completed surveys for both customer groups
should be obtained for each national grantee; the required number may be higher depending on the
number of states in which each national grantee is operating. It is anticipated that a sample of 370 will yield 250 completed
interviews at a 70% response rate. In the event the number eligible for the survey is small (where
250 completed interviews are not attainable), the sample includes all participants or host agencies.
The surveys of participants and host agencies are conducted through a mail house once each
program year.
Design Parameters:
• There are 18 national grantees operating in 49 states and territories.
• There are 56 state/territory grantees.
• There are three customer groups to be surveyed (participants, host agencies, and employers).
• Surveying each of these customer groups should be considered a separate survey effort.
• For the participant survey:
o A point estimate for the ACSI score is required for each national grantee, both in the
aggregate and for each state in which the national grantee is operating.
o A point estimate for the ACSI score is required for each state grantee.
o A sample of 370 participants from each national and state grantee will be drawn from the
pool of participants who are currently active or have exited the program during 12 months
prior to the survey period.
o Some state grantees may not have a total of 370 participants available to be surveyed. In
those cases, all participants who are active or who have exited during the 12 months prior to
the survey will be surveyed.
o As indicated above, 370 participants will be sampled from each national grantee. With an
expected response rate of 70%, this should yield 250 usable responses. However, these
may not be distributed equally across the states in which a national grantee operates.
Where there are fewer than 70 potential respondents in the sample for a state and there are
additional participants who have not been sampled, we will over-sample to bring the potential
responses to at least 50. (A brief sketch of this state-level over-sampling logic follows this list.)
To determine the impact on the standard deviation of the ACSI for differing sample sizes, a
series of samples was drawn from existing participant data. The average standard deviation
for samples of 250 was 18.5 in PY2007; the average standard deviation for samples of 50 was
18.7 for the same year.
• For the host agency survey:
o A point estimate for the ACSI score is required for each national grantee, both in the
aggregate and for each state in which the national grantee is operating.
o A point estimate for the ACSI score is required for each state grantee.
o A sample of 370 host agency contacts from each national and state grantee will be drawn
from the pool of agencies hosting participants during 12 months prior to the survey period.
o Some state grantees may not have a total of 370 host agency contacts available to be
surveyed. In those cases, all agencies hosting participants during the 12 months prior to the
survey will be surveyed.
o As indicated above, 370 host agencies will be sampled from each national grantee. With an
expected response rate of 70%, this should yield 250 usable responses. However, these
may not be distributed equally across the states in which a national grantee operates.
Where there are fewer than 70 potential respondents in the sample and there are additional
host agencies that have not been sampled, we will over-sample to bring the potential
responses to at least 50. To determine the impact of different sample sizes on standard
deviations, a series of samples was drawn from existing host agency data. The average
standard deviation for samples of 250 was 18.8 for PY2007. The average standard
deviation for samples of 50 was 20.7 for the same year.
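
Referenced in the participant bullet above, the following is a minimal Python sketch of the state-level sampling for one national grantee, under one reading of the over-sampling rule (sample enough in a small state that roughly 50 completed surveys can be expected at a 70% response rate). The state names and eligible-participant counts are hypothetical; the 370 sample size and the 70%/70/50 figures come from the text.

    import math

    RESPONSE_RATE = 0.70   # expected response rate cited in the text
    TOTAL_SAMPLE = 370     # sample drawn per grantee

    # Hypothetical counts of eligible participants for one national grantee, by state.
    eligible = {"AL": 900, "GA": 450, "MS": 120, "TN": 60}
    pool_size = sum(eligible.values())

    # Allocate the base sample of 370 proportionally across the grantee's states.
    base = {st: round(TOTAL_SAMPLE * n / pool_size) for st, n in eligible.items()}

    # Where a state's share of the sample falls below 70 potential respondents and
    # unsampled participants remain, over-sample so that about 50 completes can be
    # expected; if the state has too few eligible participants, survey them all.
    sample = {}
    for st, n_sampled in base.items():
        if n_sampled < 70:
            needed = math.ceil(50 / RESPONSE_RATE)   # about 72 sampled to expect 50 completes
            n_sampled = min(max(n_sampled, needed), eligible[st])
        sample[st] = n_sampled

    for st in eligible:
        print(f"{st}: sample {sample[st]:3d} of {eligible[st]:4d} eligible, "
              f"expected completes {sample[st] * RESPONSE_RATE:5.1f}")
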

3. How Will the Data be Collected?
The responses are obtained using a uniform mail methodology. The rationale for using mail surveys
includes: individuals and organizations that have a substantial relationship with program operators,
in this case, with the SCSEP sub-grantees, are highly likely to respond to a mail survey; mail
surveys are less expensive when compared to other approaches; and mail surveys are easily and
reliably administered to potential respondents. The experience in administering the surveys by mail
since 2004 has established the efficacy of this approach.
As with other data collected on the receipt of services, the responses to the customer satisfaction
surveys must be held confidential as required by applicable state law. Before promising
respondents confidentiality of results, grantees must ensure that they have legal authority under
state law for that promise.
To ensure ACSI results are collected in a consistent and uniform manner, the following standard
procedures are used by grantees to obtain participant and host agency customer satisfaction
information:

• ETA’s survey research contractor, The Charter Oak Group, determines the samples based on data in the SCSEP Performance and Results QPR (SPARQ) system. As with WIA, there are smaller grantees where 250 completed surveys will not be achievable. In such cases, no sampling takes place and the entire population is surveyed.

• Grantees are required to ensure that sub-grantees notify customers of the customer satisfaction survey and the potential for being selected for the survey.
o Inform participants at the time of enrollment and exit.
o Inform host agencies at the time of assignment of a participant.
o Inform via mail all chosen participants that they will be receiving a survey in approximately one week.
o When discussing the surveys with participants for any of the above reasons, refresh contact information, including mailing address.

• Grantees are required to ensure that sub-grantees prepare and send pre-survey letters to those participants selected for the survey.
o Grantees provide the participant sample list to sub-grantees about 3 weeks prior to the date of the mailing of the surveys.
o Letters are personalized using a mail merge function and a standard text.
o Each letter is printed on the sub-grantee’s letterhead and signed in blue ink by the sub-grantee’s director.

• Grantees are responsible for the following activities:
1. Provide letterhead, signatures, and correct return address information to DOL for use in the survey cover letters and mailing envelopes.
2. Send participant sample to sub-grantees with instructions on preparing and mailing pre-survey letters.

• Contractors to the Department of Labor are responsible for the following activities:
1. Provide sub-grantees with list of participants to receive pre-survey letters.
2. Print personalized cover letters for first mailing of survey. Each letter is printed on the
grantee’s letterhead and signed in blue ink with the signatory’s electronic signature.
3. Generate mailing envelopes with appropriate grantee return addresses.
4. Generate survey instruments with bar codes and preprinted survey numbers.
5. Enter preprinted survey numbers for each customer into worksheet.
6. Assemble survey mailing packets: cover letter, survey, pre-paid reply envelope, and stamped mailing envelope.
7. Mail surveys on designated day. Enter date of mailing into worksheet.
8. Send survey worksheet to the Charter Oak Group.


9. Using the list of customers who responded to the first mailing, generate the list of nonrespondents for the second mailing.
10. Print second cover letter with standard text (different text from the first letter). Letters are
personalized as in the first mailing.
11. Enter preprinted survey number into worksheet for each customer to receive second
mailing.
12. Assemble second mailing packets: cover letter, survey, pre-paid reply envelope, stamped
mailing envelope.
13. Mail surveys on designated day. Enter date of mailing into worksheet.
14. Send survey worksheet to the Charter Oak Group.
15. Repeat tasks 9-14 if third mailing is required.
4. What are the Core Questions?
The core questions to be included in the mail surveys for participants and host agencies are detailed
below. The other questions may be viewed in the survey forms submitted through ROCIS.
Utilizing the scale of 1 to 10 below, what is your overall satisfaction with the services provided by the
Older Worker Program? (Choose one number.)

Very Dissatisfied                                     Very Satisfied     Didn't receive
      1     2     3     4     5     6     7     8     9     10                 90

Considering all of the expectations you may have had about the services, to what extent have the
services met your expectations? A 1 now means "Falls Short of Your Expectations" and 10 means
"Exceeds Your Expectations."

Falls Short of Expectations                     Exceeds Expectations     Didn't receive
      1     2     3     4     5     6     7     8     9     10                 90

Now think of the ideal program for people in your circumstances. How well do you think the services
you received compare with the ideal set of services? A 1 now means "Not very close to the Ideal"
and 10 means "Very Close to the Ideal."

Not Close to Ideal                               Very Close to Ideal     Didn't receive
      1     2     3     4     5     6     7     8     9     10                 90

B. Measuring SCSEP Employer Customer Satisfaction
The weighted average of employer ratings on each of the three questions regarding overall
satisfaction is reported on a 0-100 point scale. The score is a weighted average, not a percentage.
1. Who Will Be Surveyed?
Employers that hire SCSEP participants and employ them in unsubsidized jobs. To be considered
eligible for the survey, the employer: 1) must not have served as a host agency in the past 12
months; and 2) must have had substantial contact with the sub-grantee in connection with hiring of
the participant; and 3) must not have received another placement from this program during the
current program year.
All employers that meet the criteria in B1 are surveyed at the time the sub-grantee conducts the first
case management follow-up, which typically occurs 30-45 days after the date of placement.
2. How Many (number to be surveyed)?
It is necessary to survey all employers that meet the criteria to ensure an adequate response rate.
No sampling is used.
3. How Will the Data be Collected?
The responses are obtained using a uniform mail methodology. The rationale for using mail surveys
includes: employers that have a substantial relationship with program operators are highly likely to
respond to a mail survey; mail surveys are less expensive when compared to other approaches; and
mail surveys are easily and reliably administered to potential respondents.
As with other data collected on the receipt of services, the responses to the customer satisfaction
surveys must be kept confidential as required by applicable State law. Before promising
respondents confidentiality of results, grantees must ensure that they have legal authority for that
promise. Such authority can be found in State privacy laws, for example.
To ensure ACSI results are collected in a consistent and uniform manner, the following standard
procedures are to be used by grantees to obtain employer customer satisfaction information:
• Grantees are required to ensure that sub-grantees notify employers of the customer satisfaction survey and the potential for being selected for the survey. Employers should be informed at the time of placement of the participant.

• Grantees and sub-grantees are responsible for the following activities:
1. Sub-grantee identifies employer for surveying the first time there is a placement with
that particular employer in the program year. Employer is selected only if it is not also
a host agency and the sub-grantee has had substantial communication with the
employer in connection with the placement. Each employer is surveyed only once
each year.
2. Sub-grantee generates customized cover letter using standard text.
3. Sub-grantee hand delivers survey packet (cover letter, survey, stamped reply
envelope) to employer contact in person at time of first follow-up (Follow-up 1). Mail
may be used if hand delivery is not practical.
4. Sub-grantee enters the pre-printed survey number and the date the packet was delivered into
the database. (Survey instruments with pre-printed bar codes and survey numbers, reply
envelopes, and mailing envelopes are provided to the grantees by DOL.)
5. A contractor, responsible for processing the surveys, sends weekly e-mail to all
grantees and sub-grantees listing the survey numbers of all employer surveys that
have been completed.
6. Sub-grantee reviews e-mails for three weeks following the delivery of the survey to
determine if survey was completed.
7. If the contractor lists a survey number in the weekly e-mail, the sub-grantee updates the
database with the date. If the survey is not received, the sub-grantee calls the employer contact
and uses an appropriate script to either encourage completion of the first survey (if the
employer still has it) or indicate that another survey will be sent for completion.
8. If needed, sub-grantee generates second cover letter using same procedures as for
first cover letter.
9. Sub-grantee follows procedures as for first survey.
10. If survey received, sub-grantee updates database with date.
11. If third mailing needed, sub-grantee repeats steps 2-7.
12. Grantee monitors process to make sure that all appropriate steps have been followed
and to advise sub-grantee if third effort at obtaining completed survey is required.
4. What are the Core Questions?
The core questions to be included in the mail surveys for all three customer groups are detailed
below. The other questions may be viewed in the survey forms submitted through ROCIS.
Utilizing the scale of 1 to 10 below, what is your overall satisfaction with the services provided by the
Older Worker Program? (Choose one number.)

Very Dissatisfied                                     Very Satisfied     Didn't receive
      1     2     3     4     5     6     7     8     9     10                 90

Considering all of the expectations you may have had about the services, to what extent have the
services met your expectations? A 1 now means "Falls Short of Your Expectations" and 10 means
"Exceeds Your Expectations."

Falls Short of Expectations                     Exceeds Expectations     Didn't receive
      1     2     3     4     5     6     7     8     9     10                 90

Now think of the ideal program for people in your circumstances. How well do you think the services
you received compare with the ideal set of services? A 1 now means "Not very close to the Ideal"
and 10 means "Very Close to the Ideal."

Not Close to Ideal                               Very Close to Ideal     Didn't receive
      1     2     3     4     5     6     7     8     9     10                 90

C. Definition of Terms
Sample. A group of cases selected from a population by a random process in which every member of the population has an equal probability of being selected.
Response rate. The number of those who respond to all three of the core questions on the survey divided by the number of people surveyed.
D. Calculation of the ACSI
The ACSI model (including the weighting methodology) is well documented. (See
http://www.theacsi.org/government/govt-model.html.) The ACSI scores represent the weighted sum of
the three ACSI questions' values, which are transformed to a 0 to 100 scale. The weights are applied
to each of the three questions to account for differences in the characteristics of the state's
customer groups.
For example, assume the mean values of the three ACSI questions for a state are:
1. Overall Satisfaction = 8.3
2. Met Expectations = 7.9
3. Compared to Ideal = 7.0

These mean values from the raw data must first be transformed to a 0 to 100 scale. This is done by
subtracting 1 from each mean value, dividing the result by 9 (the range of the 1 to 10 raw data scale),
and multiplying by 100:
1. Overall Satisfaction = (8.3 - 1)/9 x 100 = 81.1
2. Met Expectations = (7.9 - 1)/9 x 100 = 76.7
3. Compared to Ideal = (7.0 - 1)/9 x 100 = 66.7

The ACSI score is calculated as the weighted averages of these values. Assuming the weights for the
example state are 0.3804, 0.3247 and 0.2949 for questions 1, 2 and 3, respectively, the ACSI score for
the state would be calculated as follows:
(0.3804 x 81.1) + (0.3247 x 76.7) + (0.2949 x 66.7) = 75.4
Weights were calculated by a statistical algorithm to minimize measurement error or random survey
noise that exists in all survey data. State-specific weights are calculated using the relative distribution of
ACSI respondent data for non-regulatory Federal agencies previously collected and analyzed by CFI
and the University of Michigan.
Specific weighting factors have been developed for each state. New weighting factors are published
annually. It should be noted that the national grantees have different weights applied depending on the
state in which their sub-grantees’ respondents are located.
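
For reference, the worked example in this section can be reproduced with a few lines of Python. This is only an illustration of the arithmetic; the weights used are the illustrative values from the example above, not any grantee's actual published state weights.

    # Reproduce the ACSI worked example: transform 1-10 question means to a
    # 0-100 scale and combine them with state-specific weights.
    question_means = {"overall_satisfaction": 8.3,
                      "met_expectations": 7.9,
                      "compared_to_ideal": 7.0}
    weights = {"overall_satisfaction": 0.3804,   # illustrative weights from the example
               "met_expectations": 0.3247,
               "compared_to_ideal": 0.2949}

    transformed = {q: (m - 1) / 9 * 100 for q, m in question_means.items()}
    acsi = sum(weights[q] * transformed[q] for q in question_means)

    for q, v in transformed.items():
        print(f"{q}: {v:.1f}")
    print(f"ACSI score: {acsi:.1f}")   # approximately 75.4
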
E. Response Rate Estimate
Response rates achieved for the participant and host agency surveys since 2004 have ranged from
56% to over 70%. Participant and host agency response rates for PY 2008 are slightly over 64%.
Response rates for the employer survey have been difficult to track because of grantee non-compliance
with the requirement to enter the survey number and date of mailing into the SPARQ system. Where
the administrative requirements have been followed, employer response rates have been very high.
Even with such response rates, non-response bias is still a possibility. Survey data will be analyzed and
compared in two “waves” as they arrive. If there is little or no difference in the two waves, this can
indicate that non-response bias is less likely. This assumes that late responders may share
characteristics with non-responders. A second approach to determining non-response bias will be a
comparison of the respondents to non-respondents using the administrative data and the characteristics
of the customer groups contained therein. If differences are evident from either analysis, responses in
the last wave or differences in characteristics can be used in a weighted mean estimate of the nonresponse group. [1]

[1] Henry, page 132.


• B.3. Additional mathematical detail regarding the proposed nonresponse bias analyses. In keeping with Section II.9 of the checklist, the OMB will expect to see clear
justification of the statements on anticipated response rates listed in Section E of Part
B, e.g., citation of the applicable technical reports from the comparable previous surveys. Also, in light of the anticipated 64% response rate cited in Section E of part B,
the OMB will expect to see a detailed statement, including the relevant mathematical
formulas, on the nonresponse bias analysis methods, and related nonresponse adjustments, that will be used. The written materials from Dipak Dey refer to a jackknife
procedure to adjust for bias. If I understand his brief write-up, this would appear to
address order 1/n bias terms that are primarily of interest for samples of relatively
small size. However, the primary interest by the OMB in nonresponse bias analysis
will center on bias terms that are of a larger order O(1), and arise from potential
correlation between response probabilities and the outcome variables of interest. There
is a substantial body of literature on nonresponse bias analyses, generally focused on
(a) comparison of auxiliary-variable parameters for the respondent and nonrespondent
groups; (b) regression and logistic regression modeling related to the response indicators and the survey variables of interest; (c) comparison of response variables for early
and late respondents; and (d) comparison of alternative weighted estimators. The
current OMB package briefly refers to (c) and (d). The OMB will expect to see a
considerably more detailed methodological write-up, including applicable mathematical
formulas, along the lines of (a)-(d).

I. RESPONSE TO THE REVIEW

A. Methodology Overview

In this proposal, the response rate is calculated as the number of respondents with complete customer satisfaction information divided by the total number sampled. If the response rate falls below 70%, the following procedures will be followed.

1. To compensate for item nonresponse, we will use imputation procedures to replace missing values with values that occur in the sample. We will use several approaches and compare them to ensure that bias has been limited. These approaches will include hot deck imputation, fractional hot deck imputation (Kim and Fuller, 2004), and multiple imputation (Little and Rubin, 1987).

Hot deck imputation is a procedure in which the value assigned for a missing item (response or covariate) is taken from respondents in the current sample. We will use the auxiliary variables known for both respondents and nonrespondents to divide the sample into so-called imputation cells. The record providing the value is called the donor, and the record with missing values is called the recipient. We will also consider random hot deck imputation as needed, in which nonrespondents are assigned values drawn at random from respondents in the same imputation cell.

Two major issues arise from imputation procedures: bias in point estimators and inflation of the variances of point estimators. To handle these issues, we will consider two approaches and compare their performance through simulation. The first approach, suggested by Kim and Fuller (2004), is based on fractional hot deck imputation; the second is based on the approach of Oh and Scheuren (1983). The detailed formulas for point estimates, variance estimates, and bias corrections are provided in those papers.

2. In order to handle missing-data mechanisms, we will consider situations where nonresponse occurs for units or for items. We will consider two scenarios: missing completely at random (MCAR) and not missing at random (NMAR). The approach here will not be design-based but model-based, with normality assumptions made directly on the data. However, if the normality assumptions are violated, we will consider an appropriate Box-Cox transformation. The exact methodology and formulas are given in Little and Rubin (1987).

For unit nonresponse, we will use the response propensity approach, regressing the response indicator on the background variables using the combined data for respondents and nonrespondents, with a method such as logistic regression for categorical variables. This is a very effective way of reducing nonresponse bias attributable to the background variables, as suggested in Rosenbaum and Rubin (1983) and Little (1986).

3. In order to compare early and late responses, we will use the panel survey structure. We will develop a logistic regression model of early versus late response using the auxiliary variables, and then use the longitudinal structure to predict the pattern of late responses.
4. In order to compare weighted estimators, we will use the post-stratification method for the general regression model. This weighting technique is often known as raking or iterative proportional fitting. We will further consider linear and multiplicative weighting, which are special cases of calibration estimation. The post-stratification method we adopt will be based on Little (1993). The detailed formulas for post-stratification, linear and multiplicative weighting, and calibration estimation are given in Bethlehem (2002).
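
To make the raking step in item 4 concrete, here is a minimal Python sketch of iterative proportional fitting. The two auxiliary variables, the cell counts, and the population margins are hypothetical and are not drawn from SCSEP data.

    import numpy as np

    def rake(weights, row_targets, col_targets, tol=1e-8, max_iter=100):
        """Iterative proportional fitting (raking) of a 2-D weight table.

        weights      -- initial design weights summed into cells (rows x cols)
        row_targets  -- known population totals for the row variable
        col_targets  -- known population totals for the column variable
        Returns the adjusted cell weights whose margins match the targets.
        """
        w = weights.astype(float).copy()
        for _ in range(max_iter):
            # Scale rows to match the row-variable population totals.
            w *= (row_targets / w.sum(axis=1))[:, None]
            # Scale columns to match the column-variable population totals.
            w *= (col_targets / w.sum(axis=0))[None, :]
            if (np.abs(w.sum(axis=1) - row_targets).max() < tol and
                    np.abs(w.sum(axis=0) - col_targets).max() < tol):
                break
        return w

    # Hypothetical example: respondents cross-classified by two auxiliary
    # variables (e.g., age group x frailty), raked to known population margins.
    respondent_cells = np.array([[120.0, 30.0],
                                 [ 80.0, 20.0]])
    population_rows = np.array([170.0, 130.0])   # age-group totals
    population_cols = np.array([220.0,  80.0])   # frailty totals
    adjusted = rake(respondent_cells, population_rows, population_cols)
    print(adjusted, adjusted.sum())
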

B. Methodological Details

Here we consider the random response model of Oh and Scheuren (1983), in which the model assigns to each element k in the population an unknown probability p_k of response when contacted in the sample. Let

    \bar{y}^{*} = \frac{1}{m} \sum_{i=1}^{m} y_i

denote the mean of the m (m < n) available observations. Then it can be shown that

    E(\bar{y}^{*}) \approx \frac{1}{N} \sum_{k=1}^{N} \frac{p_k y_k}{\bar{p}}

where \bar{p} = \frac{1}{N} \sum_{k=1}^{N} p_k.

Further, the population covariance between the response probabilities and the values of the target variable can be shown to be

    C(P, Y) = \frac{1}{N} \sum_{k=1}^{N} (p_k - \bar{p})(y_k - \bar{y}).
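
The following short simulation, a sketch using made-up satisfaction scores and response probabilities rather than SCSEP data, illustrates the expectation formula above: when response probability is positively correlated with the score, the respondent mean \bar{y}^{*} drifts above the population mean.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical population of N satisfaction scores on the 0-100 ACSI scale.
    N = 10_000
    y = np.clip(rng.normal(75, 18, size=N), 0, 100)

    # Response probabilities positively correlated with satisfaction (assumption).
    p = np.clip(0.4 + 0.004 * (y - y.mean()), 0.05, 0.95)

    # Simulate which population members would respond if contacted.
    responded = rng.random(N) < p

    pop_mean = y.mean()
    respondent_mean = y[responded].mean()   # empirical counterpart of E(y-bar*)
    approx = (p * y).sum() / p.sum()        # (1/N) sum_k p_k y_k / p-bar
    print(f"population mean      : {pop_mean:6.2f}")
    print(f"respondent mean      : {respondent_mean:6.2f}")
    print(f"formula approximation: {approx:6.2f}")
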

Due to the presence of auxiliary variables, which we have for both host agency and participant satisfaction, we can propose a general regression estimator as given below. Suppose there are p auxiliary variables (regression factors), and we represent the values of these variables for element k by the vector (X_{k1}, X_{k2}, \ldots, X_{kp})'. If the auxiliary variables are correlated with the target variable, then for a suitably chosen vector \beta = (\beta_1, \ldots, \beta_p)' of regression coefficients, we obtain a best fit of Y on X. For full response, \beta can be estimated, asymptotically without bias, by

    b = (X'X)^{-1} X'y,

which is the solution of the normal equations. The general regression estimator can then be written as

    \bar{y}_{Reg} = \bar{x}' b^{*}     (1)

in which b^{*} is the analogue of b based on the available (respondent) data.

The form of the general regression estimator with qualitative variables can be obtained using the approach of Bethlehem (1987). We will then compare the modified regression estimator for the respondent and nonrespondent groups, which can be done with a Student's t-test. In our survey, the auxiliary variables for the participant fraction are: X_1 = education, X_2 = age (those 75 and over compared with those under 75), X_3 = literacy level (those with or without low literacy), X_4 = barriers/disability (those with a higher number of barriers versus those with a lower number of barriers), and X_5 = frailty (those who are frail versus those who are not). For host agency satisfaction scores, the auxiliary variable is categorical with 5 levels, where the levels indicate how the quality of services to the community has been affected by virtue of its participation in the Older Worker Program. The levels are "significantly decreased", "somewhat decreased", "neither decreased nor increased", "somewhat increased", and "significantly increased".
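
A minimal numerical sketch of the general regression estimator in equation (1) is shown below, using synthetic data and a single illustrative auxiliary variable; all values are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical full sample: auxiliary variable x (with intercept) known for
    # everyone; outcome y (ACSI-style score) observed only for respondents.
    n = 500
    x = rng.integers(0, 2, size=n).astype(float)      # e.g., an age-75-and-over indicator
    y = 70 + 8 * x + rng.normal(0, 15, size=n)        # synthetic scores
    respond = rng.random(n) < (0.55 + 0.2 * x)        # response depends on x

    X = np.column_stack([np.ones(n), x])              # design matrix with intercept
    X_resp, y_resp = X[respond], y[respond]

    # b* = (X'X)^{-1} X'y computed from the available (respondent) data.
    b_star = np.linalg.solve(X_resp.T @ X_resp, X_resp.T @ y_resp)

    # General regression estimator: x-bar' b*, with x-bar taken over the full sample.
    y_bar_reg = X.mean(axis=0) @ b_star
    print(f"respondent mean     : {y_resp.mean():6.2f}")
    print(f"regression estimator: {y_bar_reg:6.2f}")
    print(f"full-sample mean    : {y.mean():6.2f}")
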

Now we consider the case where the response probabilities p_i are unknown. Then, from the observed response indicators r_i and the available element-level auxiliary variables x_i, we consider the logistic regression model

    \ln\left(\frac{p_i}{1 - p_i}\right) = x_i' \beta     (2)

and compute the regression estimator \hat{\beta} of \beta by standard methods such as pseudo-likelihood or Bayesian estimation, and then compute \hat{p}_i. After obtaining such regression estimates, we will compare response variables for early and late respondents using Student's t-test to determine whether there are any differences. Since such comparisons involve multiple Student's t-tests, we will also use Tukey's multiple comparison procedure to control the overall significance level.
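
The sketch below illustrates the two steps just described, fitting the response propensity model (2) and comparing early and late respondents with a t-test, using scikit-learn and SciPy on synthetic data. The variable names and the 21-day cut point for "early" versus "late" are illustrative assumptions, not values specified in this statement.

    import numpy as np
    from scipy import stats
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Synthetic sample: auxiliary variables, response indicator, days to return.
    n = 600
    X = np.column_stack([rng.integers(0, 2, n),        # e.g., low-literacy flag
                         rng.integers(0, 2, n)])       # e.g., age-75-and-over flag
    responded = rng.random(n) < 0.65
    days_to_return = rng.integers(3, 60, n)            # meaningful only if responded
    score = np.clip(rng.normal(76, 17, n), 0, 100)     # ACSI-style score

    # Step 1: estimate response propensities p-hat_i from the logistic model (2).
    propensity_model = LogisticRegression().fit(X, responded)
    p_hat = propensity_model.predict_proba(X)[:, 1]

    # Step 2: compare early vs. late respondents (21-day cut point is illustrative).
    early = responded & (days_to_return <= 21)
    late = responded & (days_to_return > 21)
    t_stat, p_value = stats.ttest_ind(score[early], score[late], equal_var=False)
    print(f"mean propensity {p_hat.mean():.2f}; early vs late t = {t_stat:.2f}, p = {p_value:.3f}")
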

Our next approach to nonresponse adjustment will be based on imputation. First, we restrict attention to a simple form of hot deck single imputation. Specifically, we assume that our population is partitioned into C cells. Suppose S is the set of indices of the n sample units selected through the design. Then, given a nonresponding sample unit i contained in cell c, with c = 1, 2, \ldots, C, we randomly select one unit j from among the responding units in cell c to serve as the "donor" for the missing unit i, with selection probabilities proportional to the initial sample selection probabilities p_j, and we define the imputed value Y_i^{*} = Y_j. In addition, for all responding sample units i, we define Y_i^{*} = Y_i. Then an imputed-data estimator of the population mean \mu is

    \hat{\mu} = \frac{\sum_{i \in S} w_i Y_i^{*}}{\sum_{i \in S} w_i}     (3)

where the w_i are weights that can be obtained from any of the methods above. It can be shown that \hat{\mu} is approximately unbiased for \mu. Again, to explore all aspects of the design for nonresponse adjustments, we will develop the Student's t-test and obtain the power curves. The test statistic for such a test has the form

    t_0 = \frac{\hat{\mu} - \mu_0}{\sqrt{\hat{v}_{mis}}}

where \mu_0 is a prespecified value and \hat{v}_{mis} is a conservative variance estimator. Finally, we will use the imputation model of Robins and Wang (2000) to obtain estimates of the mean and variance of the regression coefficient \beta. We will compare the performance of all of the above-mentioned estimators on our survey data and choose the best one for our final report.
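
To illustrate the cell-based hot deck imputation and the imputed-data estimator (3), here is a short Python sketch on synthetic data. The cell labels, selection probabilities, weights, and nonresponse rate are hypothetical and are chosen only to show the mechanics.

    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic sample: imputation cell, hypothetical selection probability,
    # design weight, and outcome (NaN marks item nonresponse).
    n = 400
    cell = rng.integers(0, 4, n)                        # C = 4 imputation cells
    p_sel = rng.uniform(0.1, 0.9, n)                    # initial selection probabilities (hypothetical)
    w = 1.0 / p_sel                                     # design weights
    y = np.clip(rng.normal(74 + 3 * cell, 16), 0, 100)  # ACSI-style scores
    y[rng.random(n) < 0.3] = np.nan                     # roughly 30% nonresponse

    y_imp = y.copy()
    for c in np.unique(cell):
        donors = np.where((cell == c) & ~np.isnan(y))[0]
        recipients = np.where((cell == c) & np.isnan(y))[0]
        if donors.size == 0 or recipients.size == 0:
            continue
        # Donor drawn with probability proportional to its selection probability p_j.
        probs = p_sel[donors] / p_sel[donors].sum()
        chosen = rng.choice(donors, size=recipients.size, replace=True, p=probs)
        y_imp[recipients] = y[chosen]

    # Imputed-data estimator (3): ratio of weighted sums over the whole sample S.
    mu_hat = np.sum(w * y_imp) / np.sum(w)
    print(f"imputed-data estimate of the mean: {mu_hat:.2f}")
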

II. REFERENCES

J.G. Bethlehem (2002), "Weighting Nonresponse Adjustments Based on Auxiliary Information," in Survey Nonresponse, eds. D.A. Dillman, J.L. Eltinge, R. Groves, and R.J.A. Little, Wiley-Interscience, New York, 275-288.

D.A. Dillman, J.L. Eltinge, R. Groves, and R.J.A. Little, eds. (2002), Survey Nonresponse, Wiley-Interscience, New York.

J.K. Kim and W.A. Fuller (2004), "Fractional Hot Deck Imputation," Biometrika 91, 559-578.

R.J.A. Little (1986), "Survey Nonresponse Adjustments for Estimates of Means," International Statistical Review 54, 139-157.

R.J.A. Little (1993), "Pattern-Mixture Models for Multivariate Incomplete Data," Journal of the American Statistical Association 88, 125-134.

R.J.A. Little and D.B. Rubin (1987), Statistical Analysis with Missing Data, Wiley, New York.

H.L. Oh and F.J. Scheuren (1983), "Weighting Adjustment for Unit Nonresponse," in W.G. Madow et al., eds., Incomplete Data in Sample Surveys, Vol. 2, Academic Press, New York, 143-182.

J.M. Robins and N. Wang (2000), "Inference for Imputation Estimators," Biometrika 87, 113-124.

P.R. Rosenbaum and D.B. Rubin (1983), "The Central Role of the Propensity Score in Observational Studies for Causal Effects," Biometrika 70, 41-55.

