SUPPORTING STATEMENT
U.S. Department of Commerce
U.S. Census Bureau
2010 Census Coverage Measurement Person Interview, Person Interview Reinterview Operations and Recall Bias Study
B. Collections of Information Employing Statistical Methods
1. Sample Design
The 2010 Census Coverage Measurement (CCM) sample design is a general purpose sample designed to support the various objectives of the program, which include the new objective of estimating components of census coverage (including erroneous enumerations and omissions). The CCM will continue to estimate net error for the 2010 Census. The CCM is designed to measure the coverage of housing units and persons, excluding group quarters and persons residing in group quarters. Remote areas of Alaska are out of scope for the CCM.
The CCM sample consists of two parts. The Population Sample (P sample) and the Enumeration Sample (E sample) have traditionally defined the samples for dual system estimation. Both the P sample and the E sample measure the same housing unit and household population. However, the P-sample operations are conducted independently of the census. The E sample consists of census enumerations in the same sample areas as the P sample.
The CCM P-sample housing unit sample size is 170,000 for the nation and 7,500 for Puerto Rico. These sample sizes are reduced from earlier design plans as part of efforts to reduce nonsampling error in the CCM. The sample allocation retains the original housing unit sample sizes of 4,500 for the state of Hawaii and 10,500 for the American Indian Reservations. The remainder of the national sample is distributed to the other states roughly proportional to size, with smaller states getting about 1,000 housing units each.
The CCM is a multi-phase sample designed to measure the net coverage and components of coverage for the household population and housing units in the 2010 Census. The CCM sample design comprises a number of distinct processes: forming block clusters, creating the sampling frame, selecting sample block clusters, and selecting addresses for the P and E samples. After the CCM block clusters are selected, an address list is created independently of the census for each CCM sample block cluster. The approximate CCM listing workload was 11,835 block clusters for the nation and 529 for Puerto Rico; overall, the expected listing workload was approximately one million housing units. Following the independent listing, the block cluster sample is reduced to 6,148 clusters for the nation and 268 for Puerto Rico. Then, housing units are selected for the Person Interview sample.
Table 1 summarizes the National and Puerto Rico universe size from Census 2000, along with the CCM expected listing workloads and the P-sample size. The E-sample size is expected to be the same as the P sample.
Table 1: 2010 CCM Universe and Sample Housing Unit Summary
| Geography   | 2000 Census Housing Units | Expected Listing Sample Size | Expected P-sample Size |
| U.S.        | 115,904,641               | 950,000                      | 170,000                |
| Puerto Rico | 1,418,476                 | 50,000                       | 7,500                  |
| Total       | 117,323,117               | 1,000,000                    | 177,500                |
The CCM sample design has several phases of sampling. In the first phase, we form block clusters from contiguous collection blocks. The block clusters in each state are classified by size into mutually exclusive and relatively homogeneous groups known as sampling strata. These strata are based on the block cluster size and whether the block cluster is located on an American Indian Reservation. The four major strata, which are the same as those used for the first phase of sampling in the 2000 Accuracy and Coverage Evaluation (A.C.E.), are (1) block clusters with 0 to 2 housing units (small stratum), (2) block clusters with 3 to 79 housing units (medium stratum), (3) block clusters with 80 or more housing units (large stratum), and (4) block clusters on American Indian Reservations with three or more housing units (American Indian Reservation stratum). Using 2000 Census data, the medium and large strata are further split into renter and owner block clusters, resulting in up to six sampling strata in each state and Puerto Rico.
Block clusters with 80 or more housing units are selected with higher probability than medium block clusters in this phase because housing units in large block clusters will be subsampled in a later operation, bringing the overall probability of selection – the inverse of the sampling weight – for housing units in these block clusters more in line with the overall selection probabilities of housing units in medium block clusters. Block clusters from the renter strata are selected at a higher rate than block clusters from the owner strata. Within each sampling stratum, block clusters are sorted and a systematic sample is selected with equal probability.
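The within-stratum selection is an equal-probability systematic sample from a sorted frame. A minimal sketch of that step follows; the frame, identifiers, and sample size are hypothetical and do not reflect the actual CCM sampling rates or sort order.

```python
import random

def systematic_sample(frame, n):
    """Equal-probability systematic sample of n units from a sorted frame:
    pick a random start within the sampling interval k = N/n, then take
    every k-th unit thereafter."""
    k = len(frame) / n                    # sampling interval
    start = random.uniform(0, k)          # random start within the first interval
    return [frame[int(start + i * k)] for i in range(n)]

# Hypothetical stratum frame of sorted block-cluster IDs; a renter stratum
# would simply use a larger n to reflect its higher sampling rate.
stratum_frame = [f"cluster_{i:04d}" for i in range(2000)]
selected_clusters = systematic_sample(stratum_frame, n=120)
```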
Next, the number of block clusters is reduced, using different methods for different strata. The medium and large block clusters are randomly reduced to coincide with the reduced P-sample size. The small block clusters are reduced as originally planned to improve operational efficiency and reduce costs while attempting to minimize the impact on variance; conducting interviewing and followup operations in block clusters of this size is more costly per housing unit than in medium or large block clusters. Using housing unit counts from the independent list and the updated census address list, we re-stratify the small block clusters selected in the first phase within each state by size and select systematic samples from each stratum with equal probability. All block clusters from the small sampling stratum with 10 or more housing units based on the updated information are retained. All block clusters from the small sampling stratum that are in American Indian country are also retained. (American Indian country includes American Indian Reservations and associated trust lands, as well as the American Indian statistical areas.)
In the third phase of CCM sampling (the sample used for Person Interview), we select a subsample of independent housing units within large block clusters to be in the P sample. If a block cluster contains 79 or fewer housing units, all the housing units are included in the CCM sample. For block clusters with 80 or more housing units, a subsample of these housing units is selected to facilitate data collection in the field and to reduce the impact of intraclass correlation on the variance. This phase of sampling results in more similar overall selection probabilities for housing units because the large block clusters have a higher probability of selection at the first phase. This subsampling is done by forming groups of adjacent housing units, called segments. A systematic sample of segments within each block cluster is selected. All housing units in the selected segments are included in the CCM sample.
For the third phase of CCM sampling, the sampling frame for the P-sample housing units is the result of the CCM initial housing unit matching and followup operation. The intent of this housing unit operation is to resolve differences between the independent housing unit list and an early census housing unit list; this resolution can result in housing units being removed from the independent listing, but no units can be added to it. In addition to sending the P sample to the Person Interview, a sample of census units that were missed during the independent listing operation will be sent to the Person Interview. If there are 79 or fewer of this type of census unit in the block cluster or selected segments, all of these census units are selected for interview. If there are 80 or more of these census units, then a systematic sample is selected for interview. The estimated workload is 27,000 units. While not part of the P sample, these census units are likely to be in the E sample. The P-sample persons result from the person interviewing in the P-sample housing units.
The sampling frame for the E-sample housing units consists of the housing units in CCM sample areas from the Census Unedited File (CUF), which is available after the P sample is selected. While these two samples are selected at different points in time, we attempt to have them overlap geographically to the extent possible. The E-sample persons are the census enumerations in the E-sample housing units that have at least two characteristics, of which name can be one; these are referred to as census-defined enumerations. The Census Bureau expects a response rate of 90 to 95 percent for the PI and PIRI operations.
Person Interview Reinterview - Quality Control
The person interview reinterview universe will consist of a sample that is approximately 15 percent of the original PI enumerator workload (approximately 30,750 cases). This sample includes random, supplemental, and outlier reinterview cases.
Recall Bias Study
The recall bias study has two universes and, as such, is really two studies of the potential effect of recall bias on responses to residence questions and questions about moves around Census Day. The two studies planned are as follows: (1) a random digit dialing (RDD) study split between a landline frame and a cell phone frame, and (2) a targeted mover study using the Master Address File with indicators of movers from the National Change of Address (NCOA) file. The plan is to conduct four panels of 10,000 sample cases each: May 2010, June 2010, September 2010, and February 2011. The May 2010 RDD panel will consist of a sample from the general population designed to estimate the proportion of the population that moved in and around April 2010. The true proportion of the population that moved in April 2010 is a fixed quantity. If there were no recall bias, the proportion of the population interviewed in June 2010 that claims to have moved in April 2010 would be the same as the proportion interviewed in, for example, September 2010 that claims to have moved in April 2010.
The targeted mover study will contain a sample of households on the Master Address File (MAF) that also filled out a U.S. Postal Service change of address (COA) form indicating they moved in April or May 2010. Group quarters on the MAF will be excluded. In each panel, the sampled households will be asked about any moves they have made in the last 12 months. In the RDD sample, the proportions of reported movers will be compared to the May (control) panel. In the mover study, the responses will be compared to the NCOA to determine the proportion that reports the move in the matching time frame. The first panel in May 2010 will be allocated entirely to the RDD study due to operational concerns in preparing the NCOA sampling frame. The two studies' universes will be kept completely separate, and each will be weighted and analyzed separately.
An inmover is defined as a resident of the P-sample address on P-sample interview day who lived at another address on Census Day. An outmover is defined as a person who lived at the P-sample address on Census Day but has moved out by interview day. If the move is prior to Census Day or no move is reported, the person is a non-mover.
An important component of Dual System Estimation is the P-sample match rate. People who move around April 1 (Census Day) can cause difficulties in estimating the proportion of P-sample persons who match to the Census. For the 2010 CCM, we will handle movers such that inmovers (interview day residents of a P-sample address) are in the P sample, while Census Day residents who have moved out by P-sample interview day are not in the P sample. There is an exception for outmovers who have moved to a Group Quarters address: because Group Quarters addresses are out of scope for the P sample, these outmovers would otherwise have no chance of selection, so they are retained in the P sample. For this discussion, let us assume there are no outmovers into a Group Quarters address.
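As a minimal illustration of these rules (ignoring moves into Group Quarters addresses, per the assumption above), the classification might be sketched as follows. The function and its inputs are illustrative and are not the CCM processing specification.

```python
from datetime import date

CENSUS_DAY = date(2010, 4, 1)

def p_sample_status(resident_on_interview_day, move_in_date=None, move_out_date=None):
    """Classify a person under the simplified mover rules described above.
    Dates are reported move dates, or None if no move was reported."""
    if resident_on_interview_day:
        if move_in_date is not None and move_in_date > CENSUS_DAY:
            # Inmover: in the P sample; matching is attempted at the
            # reported previous (Census Day) address.
            return "inmover - in P sample, match at previous address"
        # Moved in before Census Day or reported no move: non-mover;
        # matching is attempted at the P-sample address.
        return "non-mover - in P sample, match at P-sample address"
    if move_out_date is not None and move_out_date > CENSUS_DAY:
        # Outmover after Census Day: not in the P sample (the Group
        # Quarters exception is ignored in this sketch).
        return "outmover - not in P sample"
    return "not a Census Day or interview day resident - not in P sample"

# Example: a household member who reports moving in on May 15, 2010
print(p_sample_status(True, move_in_date=date(2010, 5, 15)))
```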
Several errors are possible. The error of classifying an inmover as an outmover or an outmover as an inmover does not seem likely to occur often; for this discussion we will ignore this error. A false inmover will have matching attempted at their reported previous address. They could actually be an inmover prior to Census Day, in which case they should have been classified as a non-mover and matching should be attempted at the P-sample address since they were living there on Census Day. They could also actually have never moved at all, in which case matching should also be at the P-sample address.
A false non-mover will have matching attempted at their P-sample address. They could actually be an inmover after Census Day, in which case matching should be attempted at their previous address. They could also be an outmover prior to Census Day or an outmover after Census Day, in which case they should not be in the P sample.
A false outmover after Census Day will be classified as not in the P sample, so no matching will be attempted. Recall that we are ignoring here any possibility of moving into a Group Quarters address. They could actually be a non-mover, in which case they should be in the P sample and matching attempted at the P-sample address.
Thus, if the move date is reported as after April 1 when the actual move was prior to April 1, or the move date is reported as prior to April 1 when the actual move was after April 1, the matching results may have errors. These errors will cause bias in dual system estimates. Other responses are also important for estimation; for example, alternative address information is collected to aid in establishing residence status. Clearly, problems with recall can affect a respondent's reports of alternative addresses, which can bias the estimates. For this sample design analysis, we will consider a statistic based on error in the reported move date.
Allocation of Sample to RDD versus Targeted Mover Study
For the RDD study, assume each panel is a sample of households from the general population. We want to test the null hypothesis that the proportion of the general population reporting a move in April 2010 is the same for any two panels. We expect that recall bias, perhaps through telescoping, may produce a lower proportion of reported April 2010 moves in later panels; this would cause a rejection of the null hypothesis. For a 10% type I error rate, the detectable difference is 1.64 times the standard error of the difference between the two estimated proportions. Under the null hypothesis, the true population proportion is assumed to be 1% for each panel. The panel samples are independent, and we assume equal respondent sample sizes, r, for each panel. Thus the detectable difference DD is as follows:

DD = 1.64 * sqrt(2 * p * (1 - p) / r), where p = 0.01 is the assumed proportion under the null hypothesis.
For the mover study, assume each panel is a sample of households that filled out a change of address form indicating a move in March or April 2010. We want to test the null hypothesis that the proportion of this population that reports a move in the correct time frame during the Recall Bias Study is the same for any two panels. We expect that recall bias may produce a lower proportion of correct reporting in later panels than in earlier panels; this would cause a rejection of the null hypothesis. As for the RDD study, for a 10% type I error rate, the detectable difference is 1.64 times the standard error of the difference between the two estimated proportions. Under the null hypothesis, the true population proportion of correct reporting is assumed to be 50% for each panel. This is a conservative assumption producing the largest possible detectable differences. The panel samples are independent, and we assume equal respondent sample sizes, r, for each panel. Thus the detectable difference DD is as follows:

DD = 1.64 * sqrt(2 * p * (1 - p) / r), where p = 0.50 is the assumed proportion under the null hypothesis.
Table 1 shows the detectable difference (DD) and required split of the 10,000 panel sample to the RDD Study and mover study. Here a response rate of 40% is assumed for the RDD Study (allocated to landline and cell phones as discussed in the section below, but treated as one sample here). A response rate of 50% is assumed for the mover study.
For example, for a panel sample of 10,000 with 5,500 allocated to the RDD Study and 4,500 allocated to the COA Study, we would expect 2,200 RDD respondents (ignoring overlap between the landline and cell phone samples) and 2,250 mover respondents. The detectable difference for the RDD study is 0.49% and the detectable difference for the mover study is 2.44%. For the RDD study this means, for example, that if one panel reported that 1% moved in April 2010, the other panel would need to report that more than 1.49% or less than 0.51% moved in April 2010 in order to reject the null hypothesis of no difference. For the mover study this means, for example, that if one panel had 50% reporting their move in the correct time frame, the other panel would need more than 52.44% or less than 47.56% reporting their move in the correct time frame in order to reject the null hypothesis of no difference.
Given the 10,000 total sample size restriction for each panel (due to operational constraints), a sample split of 5,500 to the RDD Study and 4,500 to the mover study was selected in order to keep the detectable difference under 0.5% for the RDD study.
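A short sketch, under the stated assumptions (equal respondent sample sizes per panel, a 40% RDD response rate, and a 50% mover study response rate), reproduces the detectable differences cited above and shown in Table 1 below:

```python
from math import sqrt

def detectable_difference(p, r, z=1.64):
    """Detectable difference between two independent panel estimates of a
    proportion p, each based on r respondents (10% type I error rate)."""
    return z * sqrt(2.0 * p * (1.0 - p) / r)

rdd_respondents = 5500 * 0.40    # 40% response rate -> 2,200 respondents
coa_respondents = 4500 * 0.50    # 50% response rate -> 2,250 respondents
print(round(100 * detectable_difference(0.01, rdd_respondents), 2))  # 0.49 percent
print(round(100 * detectable_difference(0.50, coa_respondents), 2))  # 2.44 percent
```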
Table 1
Detectable Differences for CCM Response Bias Study

| RDD Study (40% response rate) |        |             | COA Study (50% response rate) |        |             |
| DD (%)                        | Sample | Respondents | DD (%)                        | Sample | Respondents |
| 1.63                          | 500    | 200         | 1.68                          | 9,500  | 4,750       |
| 1.15                          | 1,000  | 400         | 1.73                          | 9,000  | 4,500       |
| 0.82                          | 2,000  | 800         | 1.83                          | 8,000  | 4,000       |
| 0.73                          | 2,500  | 1,000       | 1.89                          | 7,500  | 3,750       |
| 0.69                          | 2,800  | 1,120       | 1.93                          | 7,200  | 3,600       |
| 0.67                          | 3,000  | 1,200       | 1.96                          | 7,000  | 3,500       |
| 0.62                          | 3,500  | 1,400       | 2.03                          | 6,500  | 3,250       |
| 0.58                          | 4,000  | 1,600       | 2.12                          | 6,000  | 3,000       |
| 0.54                          | 4,500  | 1,800       | 2.21                          | 5,500  | 2,750       |
| 0.52                          | 5,000  | 2,000       | 2.32                          | 5,000  | 2,500       |
| 0.49                          | 5,500  | 2,200       | 2.44                          | 4,500  | 2,250       |
| 0.47                          | 6,000  | 2,400       | 2.59                          | 4,000  | 2,000       |
| 0.45                          | 6,500  | 2,600       | 2.77                          | 3,500  | 1,750       |
| 0.44                          | 7,000  | 2,800       | 2.99                          | 3,000  | 1,500       |
| 0.42                          | 7,500  | 3,000       | 3.28                          | 2,500  | 1,250       |
| 0.41                          | 8,000  | 3,200       | 3.67                          | 2,000  | 1,000       |
| 0.40                          | 8,500  | 3,400       | 4.23                          | 1,500  | 750         |
| 0.38                          | 9,000  | 3,600       | 5.19                          | 1,000  | 500         |
| 0.37                          | 9,500  | 3,800       | 7.33                          | 500    | 250         |
The landline sample is a list-assisted selection. The list is built from working banks, where a working bank is the set of 100 telephone numbers sharing the same first eight digits. Working banks are historically the unit that telephone companies use to manage telephone numbers. We complete the sampling using the GENESYS sample selection software created by the Marketing Systems Group (MSG). We use the GENESYS software for a variety of reasons, including its quarterly updating, cost effectiveness, and flexibility. Once the initial sample is selected, the final sample is selected; this sample is a single-stage equal probability selection of telephone numbers. After selecting the sample of telephone numbers, we will send the sample to MSG for additional screening to remove business and unused telephone numbers and increase the efficiency of the sample.
The cell phone sample is selected in the same fashion, except that the banks are limited to numbers allocated to cell phone use and the sample will not be sent to MSG for additional screening.
The target population is all households in the general population. Many households have a landline phone and one or more individual cell phones for household residents. Some households have either a landline phone or individual cell phones but not both. Some households have no phone at all, but this group is considered negligible. A Google search found an estimate that in 2008 there were 233 million cell phones and 146 million landline phones in the U.S.; however, we have no available information on how these are distributed across households. Here, we assumed that, with no overlap, 50% of the target population is represented in stratum 1 (landline phone only) and 50% in stratum 2 (cell phone only).

There are several options available for appropriately weighting the observed sample. We will ask respondents whether they have both a cell phone and a landline. A simple estimate can then be calculated by weighting each respondent based on their approximate probability of selection. Assuming 50% of the sample is allocated to landlines, 50% to cell phones, and that the cell phone and landline universes are about the same size (both are extremely large compared with the panel sample sizes), respondents who have both a cell phone and a landline have approximately twice as large a probability of selection as respondents who do not. Thus, for estimated proportions, respondents who have both a cell phone and a landline can be given a weight of 1/2, while respondents who do not can be given a weight of 1. Note that the two samples are independent and the sampling fractions are very small; although the frame overlap is expected to be substantial, so that a sizeable proportion of the overall sample will have both a cell phone and a landline, the number of respondents selected in both frames is expected to be very small. Other weighting possibilities that may produce more reliable estimates than this simple approach will be examined.
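A minimal sketch of this simple weighting approach follows; the respondent records below are hypothetical, and in practice the both-phones indicator comes from the interview.

```python
def weighted_proportion(respondents):
    """Estimate the proportion reporting an April move from the combined
    landline and cell phone samples.  Each record is (moved_in_april,
    has_both_phones); respondents covered by both frames get weight 1/2,
    all others weight 1."""
    total = movers = 0.0
    for moved_in_april, has_both_phones in respondents:
        w = 0.5 if has_both_phones else 1.0
        total += w
        movers += w * moved_in_april
    return movers / total

# Hypothetical respondents: 1 = reported a move in April 2010, 0 = did not
sample = [(0, True), (1, False), (0, False), (0, True), (1, True)]
print(weighted_proportion(sample))
```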
To determine the allocation of the RDD sample in each panel to cell phones and landline phones, we minimize the variance of the estimated proportion who moved in April subject to the cost constraint C1n1 + C2n2 + n1 + n2 - (C + n) = 0, where, for stratum j, Cj is the cost per sample piece and nj is the designated total sample size; C is the total cost and n is the total sample size in a panel. The minimization is done using the Lagrange multiplier method. For simplicity, we assume no frame overlap and make no weight adjustment for the increased probability of selection of those represented by both a cell phone and a landline, so a simple two-stratum approach can be applied.
The sample allocation to each stratum is proportional to the stratum standard deviation, the square root of pq, and inversely proportional to the square root of the response rate times (cost + 1):

Stratum 1 (Landline): n1 proportional to sqrt(p1 * (1 - p1)) / sqrt(RR1 * (C1 + 1))
Stratum 2 (Cell Phone): n2 proportional to sqrt(p2 * (1 - p2)) / sqrt(RR2 * (C2 + 1))

where pj is the proportion of stratum j that moved in April and RRj is the response rate for stratum j.
We assume a 50% response rate for Stratum 1 and 30% for Stratum 2.
For several combinations of the proportion of a stratum that moved in April and the cost per sample piece, the optimal allocation was computed. Results are in Table 2 below. Although we might expect a higher mover rate among cell phone users, we have no data on this. Even assuming a 50% higher move rate (1% versus 1.5%), the allocations do not change much. Given the limitations of these untested assumptions, this analysis suggests allocating one half of the RDD sample to the landline stratum and one half to the cell phone stratum.
Table 2
Allocation of RDD Sample to Stratum 1 (Landline) and Stratum 2 (Cell Phone)

| P1   | P2    | C1 | C2 | n1/n  | n2/n  |
| 0.01 | 0.01  | 20 | 20 | 0.436 | 0.564 |
| 0.01 | 0.015 | 20 | 20 | 0.388 | 0.612 |
| 0.01 | 0.01  | 20 | 40 | 0.520 | 0.480 |
| 0.01 | 0.015 | 20 | 40 | 0.470 | 0.530 |
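The allocation fractions in Table 2 follow directly from the allocation rule above; a minimal sketch, assuming the stated 50% landline and 30% cell phone response rates, is:

```python
from math import sqrt

def rdd_allocation(p1, p2, c1, c2, rr1=0.50, rr2=0.30):
    """Share of the RDD panel sample allocated to the landline stratum (1)
    and the cell phone stratum (2): n_j proportional to sqrt(p_j * q_j)
    and inversely proportional to sqrt(response rate * (cost + 1))."""
    a1 = sqrt(p1 * (1 - p1)) / sqrt(rr1 * (c1 + 1))
    a2 = sqrt(p2 * (1 - p2)) / sqrt(rr2 * (c2 + 1))
    return a1 / (a1 + a2), a2 / (a1 + a2)

print(rdd_allocation(0.01, 0.01, 20, 20))   # about (0.436, 0.564), first row of Table 2
print(rdd_allocation(0.01, 0.01, 20, 40))   # about (0.520, 0.480), third row of Table 2
```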
Overall, for the Recall Bias Study we plan on a sample size of 10,000 for each of the four panels. For the first panel (May 2010), the entire sample will be allocated to the RDD Study. For the other three panels (June 2010, September 2010, and February 2011), 5,500 of the sample will be allocated to the RDD Study and 4,500 to the mover study. The detectable difference when comparing two panels is 0.49% for the RDD study (estimated proportions around 1%) and 2.44% for the mover study (estimated proportions around 50%). For the RDD study the detectable difference will be somewhat lower when one of the panels being compared is Panel 1 (May 2010), since the entire sample of 10,000 is allocated to the RDD Study for that panel.
Alaska, Hawaii, and Puerto Rico will be excluded from both studies due to operational limitations on collecting data in those areas.
2. Data Collection
During PI, interviewers use a computer-assisted data collection instrument to obtain information about the current residents of the sample housing unit and certain persons who moved out of the sample housing unit between Census Day and the time of the CCM interview. The instrument collects names, addresses, where household members lived on Census Day, and where they lived at the time of the interview. Demographic information (as in the census) is also collected. The expected workload for the operation is about 205,000 housing units, including the 170,000 P-sample cases for the nation, the 7,500 P-sample cases in Puerto Rico, and the estimated 27,000 census-only cases.
Person Interview Reinterview (PIRI)
For the Person Interview Reinterview (PIRI) operation, cases will be selected from the Person Interview work as it is returned. All cases are eligible for reinterview except those where the outcome of the PI was a refusal, a language problem, or no knowledgeable respondent, or where more than one enumerator completed the original PI case.
We will select cases systematically within each enumerator's workload, starting with one of the first three eligible cases he or she completes and selecting every nth eligible case thereafter, to obtain approximately a 12 percent random sample. We will allow the Regional Census Center to mark enumerators for supplemental reinterview or to select specific cases for supplemental reinterview. We will also select cases based on outlier characteristics. For example, an enumerator with a significantly high number of vacant cases compared to his or her crew leader district average will have two cases selected for reinterview, preferably vacant, if available. We expect the outlier reinterview to consist of about 2 percent of the PI workload, and we allow supplemental reinterview cases to make up about 1 percent. As a result, on average, each enumerator will have about 15 percent of his or her workload reinterviewed.
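A minimal sketch of the random portion of this selection follows; case IDs and workload size are hypothetical, and supplemental and outlier cases are selected separately, as described above.

```python
import random

def random_reinterview_sample(eligible_cases, rate=0.12):
    """Systematic selection within one enumerator's stream of completed,
    eligible PI cases: start with one of the first three cases and take
    every n-th case thereafter (n chosen to approximate the target rate)."""
    n = max(1, round(1 / rate))                 # about every 8th case for ~12%
    start = random.randint(0, min(2, len(eligible_cases) - 1))
    return eligible_cases[start::n]

workload = [f"case_{i:03d}" for i in range(1, 61)]   # hypothetical workload
print(random_reinterview_sample(workload))
```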
After PIRI cases are selected, each case is made available to the Regional Census Center for assignment to a reinterviewer. The reinterviewer will complete the PIRI case with the original PI household respondent, the original proxy respondent, or a new respondent, depending on the unit status, respondent type, and the availability of contact information from the PI case. Reinterviews can be done by telephone or personal visit; based on testing, we expect 60 percent of cases to be done by telephone and 40 percent by personal visit. The reinterview instrument will collect information about the unit status on the PI date, whether the original respondent was contacted, and the household roster for cases determined to be occupied on the PI date.
Data collected from the original interview and the reinterview are evaluated to determine whether the original interviewer falsified data. The Census Bureau uses a three-stage approach to match the original interview and reinterview and evaluate the cases. The first stage uses an automated system to determine whether the original interview and reinterview information matches. If it does not match, the case is forwarded to the National Processing Center (NPC). In this second stage, clerks in the NPC review the case information and try to determine whether the discrepancies can be explained (such as discrepancies created by the transposition of letters or misspellings). If the NPC clerks cannot explain the differences by reviewing the information alone, the cases are forwarded to the Regional Census Centers (RCCs). In the third stage, clerks in the RCCs will follow up with respondents, if necessary, to determine the reasons for the discrepancies on those cases. The clerks will use a brief telephone script to corroborate the original interview information and determine whether the original enumerator falsified the data. Based on the followup with respondents, the clerks will code these cases accordingly.
Recall Bias Study
During the Recall Bias Study, interviewers use a computer-assisted telephone collection instrument to obtain information about the current residents of the sample housing unit and certain persons who moved out of the sample housing unit between the beginning of 2010 and the time of the CCM interview. While the data collected are similar to the Person Interview, the information collected about possible movers is more open-ended to better allow probing for details on the situation. (See Attachment E.)
The study is aimed at measuring respondents' ability to recall alternative addresses, moves, and residence as of Census Day.
Professional interviewers who are experienced in conducting telephone interviews will carry out the interviews.
One eligible adult (18 years or older) will be interviewed per household.
The interview will take approximately 10 minutes to complete.
The interview will collect names, demographic data, possible alternate address information, and dates of moves.
Data collection will be for 2-3 weeks during the months of May, June, and September 2010 and February 2011. The May panel will be used as the control; June was the timeframe of the Person Interview (PI) in the 2000 Accuracy and Coverage Evaluation Survey; September is the midpoint of 2010 CCM PI; and February is the midpoint of the CCM Person Followup (PFU).
3. Methods to Maximize Response
The PI and PIRI questionnaires contain the minimum number of questions necessary to obtain the data required for the evaluations, and the interviewer will make multiple contact attempts in order to obtain an interview (six attempts for personal visit interviews unless a telephone interview is requested). The interviewer will explain why the Census Bureau is conducting this operation, and respondents will be informed of their legal responsibility to answer the questions. In addition, respondents will be assured that their answers are confidential. If a respondent refuses to answer the questions, the interviewer's Crew Leader may attempt to obtain an interview or may assign the case to another interviewer (refusal conversion, personal visit interviews only).
For the Recall Bias Study, phone numbers will be contacted a number of times within the 3-week panel time frame. No advance letters will be sent. The number of contact attempts is based on the results of previous contacts. Every number will be contacted a minimum of two times if a complete interview is not obtained on the first try. After two attempts, noninterviews with a small chance of being completed (such as language barriers or bad phone numbers) are dropped.
Other types of noninterviews are divided into contact and no-contact cases. If no contact has been made (such as no answer, busy signal, or fax), then calls are limited to 5 attempts. If contact has been made (such as hang-ups, answering machines, or a promise to call back), then the average number of attempts will be 10, with a maximum of 20 attempts allowed. We will attempt a one-time refusal conversion, and some Spanish-speaking interviewers will be available. When making contact, the interviewer will explain why the Census Bureau is conducting this operation. In addition, respondents will be assured that their answers are confidential. The telephone interviewers will be able to schedule appointments to complete the interview later if requested. Interviewers will also be trained on how to encourage reluctant respondents to respond and how to handle cell phone specific issues.
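A simplified sketch of these contact-attempt rules follows; the outcome codes are illustrative, and the average-of-10-attempts target for contacted cases is managed by the call scheduler rather than modeled here.

```python
def keep_calling(outcome_history):
    """Decide whether another call attempt should be made for a case,
    given the outcomes of prior attempts (e.g., 'no_answer', 'busy',
    'fax', 'answering_machine', 'hang_up', 'will_call_back',
    'language_barrier', 'bad_number', 'complete')."""
    attempts = len(outcome_history)
    if "complete" in outcome_history:
        return False
    if attempts < 2:
        return True                                # minimum of two attempts
    if any(o in ("language_barrier", "bad_number") for o in outcome_history):
        return False                               # little chance of completion: drop
    contact_made = any(o in ("hang_up", "answering_machine", "will_call_back")
                       for o in outcome_history)
    limit = 20 if contact_made else 5              # 20-attempt maximum; 5 if never contacted
    return attempts < limit

print(keep_calling(["no_answer", "busy", "no_answer", "no_answer", "no_answer"]))  # False
```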
4. Testing of Procedures
We conducted cognitive testing on the 2006 Person Interview questionnaire (English in 2006 and Spanish in 2007) to help us improve the efficacy of the 2009 instrument and questions. An operational test of the CCM Person Interview and Person Interview Reinterview operations was held in 2009 and served as the main basis for the 2010 operation. For the Recall Bias Study, a questionnaire that was used successfully in 2006 to collect the same type of information will be used with only minor modifications. To ensure the updates were correct, the instrument was tested separately as well as in a full systems test that included the call center.
5. Contacts for Statistical Aspects and Data Collection
Gia Donnalley
Coverage Measurement Design for Data Collection Operations Branch Chief
Decennial Statistical Studies Division
(301) 763-4370
Definition of Terms
Alternate Addresses
These are respondent provided addresses obtained during the CCM PI for other places where household members may have been counted on Census Day.
Components of Census Coverage
The four components of census coverage are census omissions (missed persons or housing units), erroneous inclusions (persons or housing units enumerated in the census that should not have been), correct enumerations, and whole-person imputations (census person enumerations for which we did not collect sufficient information). Examples of erroneous inclusions are housing units built after Census Day and persons or housing units enumerated more than once (duplicates).
Net Coverage Error
This is the difference between the estimate of the true population count and the actual census count. A positive net error indicates an undercount, while a negative net error indicates an overcount.
Attachments
A. Introductory Letter (Privacy Act Notice)
B. Computer Specification for the Person Interview Instrument
C. Computer Specification for the Person Interview Reinterview Instrument
E. Computer Specification for the Recall Bias Study Instrument
F. Comments from Brookings Institution on Supporting Statement, December 24, 2009
Attachment A – Introductory Letter
See Separate File ATTACHA – PI Letter
Attachment B – Computer Specification for the Person Interview Instrument
See Separate File ATTACHB – 2010 PI Instrument Spec.doc
Attachment C - Computer Specification for the Person Interview Reinterview Instrument
See Separate File ATTACHC – 2010 PIRI Instrument Specification.doc
See separate file ATTACHD - Brookings FR Response 2010 CCM PI RI 08-03-09.pdf
Attachment E – Computer Specification for the Recall Bias Study Instrument
See separate file ATTACHE-RBS Specification.doc
Attachment F – Brookings Institution Comments on the Supporting Statement, December 24, 2009
See separate file ATTACHF-Brookings FR Response 2010 CCM PI RI 12-24-09.pdf