NSV Supporting Statement Part B


Pilot Study for the National Survey of Veterans (NSV), Active Duty Service Members, Activated National Guard and Reserve Members, Family Members and Survivors

OMB: 2900-0726










Supporting Statement




Collections of Information



Employing Statistical Methods


Part B









March 20, 2009



TABLE OF CONTENTS





1. Potential Respondent Universe

2. Procedures for Collection of Information

3. Methods to Maximize Response Rates and Deal with Non-response

4. Tests of Procedures and Methods

5. Names of Individuals Consulted on Design



Tables

Table 1. Relative Ranking of Operational and Data Quality for Address and RDD Designs for the National Survey of Veterans

Table 2. Expected Allocation and Sample Sizes for the NSV Pilot Study

Table 3. Expected Standard Errors for Estimates of Response Rates

Table 4. Standard Errors and Power of Tests Comparing Response and Effective Coverage Rates Under Two Experimental Treatments

Table 5. Individuals Consulted






Supporting Statement B - Collections of Information Employing Statistical Methods


1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate expected response rates for the collection as a whole. If the collection had been conducted previously, include the actual response rate achieved during the last collection.


The target population for this pilot study is the population of Veterans living in households in the U.S. The survey population excludes Veterans in institutional or non-institutional group quarters. Based on the 2006 results from the American Community Survey, we expect this population to be about 23.4 million persons living in about 22.5 million households.


The pilot will be based on a random sample of households. The sample households will be contacted first by mail, and a household respondent will be asked to complete the short screening questionnaire, either on paper or on the Internet, for all household members. All Veterans identified by the screening questionnaire will then receive a short version of the extended questionnaire. In the full 2009 NSV, the extended questionnaire will collect most of the variables of interest to the survey.


Justification of Design


The sample for NSV 2001 used a dual frame approach. One frame was the VA Health Care file and the Pension and Compensation beneficiaries file. The other frame was a random digit dial (RDD) process. The VA files permitted oversampling certain groups that could not be interviewed in large numbers from the RDD. The RDD provided a general population frame that included all veterans. The two frames were combined by attempting to estimate the overlap between the two.


For several reasons, it was decided not to repeat this design in 2009. Instead, the VA is proposing to use an address frame to conduct a mail survey in two stages. The first stage is a short screening instrument which asks if there are any Veterans, spouses of Veterans, or surviving spouses of Veterans living in the household. Based on the returned screening survey for the household, a second, longer survey is sent to the eligible populations. This longer survey contains the primary substantive questions of interest (e.g., awareness of benefits; knowledge of how to access benefits).




Comparison between RDD and Address-based approaches


One reason to move away from the 2001 approach is the growing set of problems associated with the RDD frame. One issue is declining response rates. This decline began in the late 1990s and has continued to the present (Curtin, et al., 2005; Battaglia, et al., 2007). For example, the Behavioral Risk Factor Surveillance System (BRFSS) is a major national RDD survey conducted annually. In 2007, the median overall response rate across all states was 33.5%; in 2001, it was 51.1%, a drop of nearly 18 percentage points. One would expect this decline to continue through 2009, and more recent RDD studies are now routinely getting response rates below 30%, some even lower than 20%. As a point of comparison, in 2001 the NSV achieved an RDD response rate of 51%. We expect that this rate would follow the same trend, dropping significantly to 30% or below.

A second issue with the frame is coverage of individuals who have a cell phone but no landline. The standard RDD frame does not include these individuals, and this population is growing as cell phones become more widespread. While for the general population this percentage is still not thought to be large enough to significantly bias many types of estimates,[1] this is not true for certain subpopulations such as young persons, a group of interest to the NSV. The most recent estimates are that between January and June of 2008, approximately 35% of persons ages 18-29 had a cell phone and lived in households without a landline. Perhaps just as importantly, these "cell-only" individuals differ significantly from the landline population along a number of important indicators, such as health, access to health care, and risk behaviors (Blumberg and Luke, 2008).


Tied to the above trends is the growing cost associated with telephone surveys. As response rates decline, greater effort is needed to complete interviews. The NSV 2001 had to screen approximately 300,000 households in order to complete 13,000 interviews; we believe proportionately more households would have to be screened in 2009 using an RDD survey. Conducting surveys of cell phone users is also more expensive than conducting surveys of landline users, and it introduces an additional population that has to be screened. A cell phone survey that screened on both the absence of a landline and the presence of a Veteran would require approximately 20 completed screening interviews for each Veteran located.[2]


The use of an address-based approach offers several advantages over an RDD frame. If the address frame is comprehensive, it should cover all individuals in the population, including those who do not have access to a landline. With the availability of the USPS list, it is now possible to conduct a general population survey by mail. Early work by Link and colleagues (2006) found that this approach provides comparable, or even better, response rates than RDD. For example, this research found that a mail version of the BRFSS achieved comparable or higher response rates than a parallel RDD survey in six states. More recently, Westat has conducted mail surveys as part of two RDD efforts. One, conducted in the state of Minnesota, had a response rate slightly lower than that of a parallel RDD survey; a second, national survey had a higher response rate for the mail survey (Westat, 2009).


Based on this experience, as well as the response rate from the 2001 NSV, we have assumed that the approach discussed below can achieve an overall response rate of approximately 30% to 35% for the targeted populations (i.e., Veterans, spouses of Veterans, and survivors of Veterans). A metric associated with the Pilot study is the extent to which this goal is achieved.


A second issue associated with an RDD approach is the use of VA and DoD administrative information. The use of these lists offers significant advantages with respect to data collection efficiencies. The 2001 NSV used a dual frame estimator to combine the two frames. However, this assumed it would be relatively easy to resolve the overlap between the frames, which proved problematic for several reasons. One was the fact that many individuals could not be located: 20% of the Veterans could not be located at the screening stage, and 34% could not be located for the extended interview. This made it difficult to combine the RDD and VA lists into a single estimate.


The Pilot design will still allow for use of the VA/DoD lists. The VA and DoD lists will be used to stratify the sample by matching addresses from the USPS list to the VA/DoD files. This provides efficiencies when screening households for Veterans, spouses, or survivors. However, it does not require that the procedures locate the specific Veterans identified on the lists, which should allow for calculating the correct selection probabilities. An important metric of the Pilot study will be to assess the effectiveness of this sampling approach.


One stage vs. two stage approaches


Two different methods for the mail survey were considered. One was the proposed two-stage approach. The first stage is a short survey which asks whether there are any Veterans in the household. If there is at least one Veteran, the questionnaire asks for basic information on the period of service and the personal characteristics needed to identify subpopulations of interest (e.g., age, race, gender). The survey also asks for contact information for the Veterans (e.g., e-mail). The second stage asks the identified Veteran(s) to fill out the full NSV survey.


The second approach considered would be to ask respondents to complete the survey in one stage. The full NSV survey would be sent to all sampled households. The beginning of the survey would resemble the screening survey sent in the first stage above. If no veterans (or other subpopulations) are identified, the respondent would be asked to send the survey back. If a veteran (or other subpopulation) is identified, the respondent would be asked to pass the survey along to the Veteran, spouse or survivor, who would fill it out and send it back.


We are recommending the two-stage approach for three reasons. One significant problem with the one-stage design is the need to send full questionnaires to all households in the sample. Given our current estimates, this would mean that approximately 130,000 20-page questionnaires would be sent at the first mailing, and this would carry through to subsequent follow-up mailings to non-respondents. Since Veterans live in only approximately 20% of households, a large number of ineligible households would be sent the full survey. Compounding this problem is the need to include additional surveys in the package for spouses of Veterans and surviving spouses of Veterans.


A second issue with the one-stage approach is that it relies on whoever opens the mail to decide who should fill out the questionnaire. For the CDC and Westat mail surveys described above, there were issues related to how respondents are selected within the household; respondents to a mail survey do not generally follow instructions very closely (e.g., Battaglia, et al., 2008). The two-stage approach is more straightforward: anyone in the household can be asked whether there are any Veterans, spouses, or survivors in the household, and if there are, the appropriate respondent can then be assigned.


A third advantage of the two-stage approach is that it provides a way to subsample the subpopulations of interest. The first stage survey (or screener) will provide information that identifies key subgroups, as well as Spouses and Survivors. Once identified on the screener, a follow-up survey can be sent to the individual.


The disadvantage of the two-stage approach is the need to send two separate requests to the household to complete the survey. This may depress the overall response rate relative to a one-stage approach. However, given the issues described above, the one-stage methodology did not seem practical for the NSV. The two-stage design will be evaluated as part of the Pilot study (see response to question 16 in Part A and questions 3 and 4 in Part B).



Summary of RDD and Address-based Designs


Table 1 provides a summary of the relative advantages of an address-based design that uses a mail/web survey vs. an RDD design that relies on the telephone as the primary mode. These rankings represent our assumptions about the performance of designs using each of the two methodologies. With respect to response rates, the mail surveys and the RDD without a cell phone component are very close. Our assumption is that an RDD survey would yield a response rate for Veterans somewhat below 30%, based on observed drops in response rates since 2001, when the last NSV was conducted using RDD. We are assuming that Veterans will respond to a mail survey more positively. An RDD survey with a cell phone component would achieve the lowest response rate because it is harder to get cooperation from cell phone users.



Table 1. Relative Ranking of Operational and Data Quality for Address+ and RDD Designs for the National Survey of Veterans

Mode   Approach       Response Rate   Coverage   Respondent Selection   Use of VA Files   Cost
Mail   1-Stage        1               1          3                      1                 >1
       2-Stage        1               1          2                      1                 1
RDD    Without Cell   2               3          1                      2                 2
       With Cell      2               2          2                      2                 3

+ Address design includes the use of the mail and Web as the primary modes. RDD includes the use of the telephone as the primary mode.



The coverage of the mail survey is better than that of either RDD alternative, since neither RDD frame includes households without any telephone. With respect to respondent selection, there are clear difficulties with the 1-stage mail survey approach: multiple questionnaires would have to be included in the package, and household members would have to decide who should fill out each survey. RDD is probably the best in this category because the interviewer has direct control over the process; however, the 2-stage approach also provides control over this process. Respondent selection is a bit more complicated for cell phone users than for a landline RDD because of the uncertainty about who else is attached to a cell phone (e.g., is it shared by others in the household?). As noted above, the mail surveys are best both for the use of the VA administrative data and for overall cost; the lowest cost option is the 2-stage survey.



Expected Response Rates


Because a primary purpose of the pilot is to test assumptions for the design of the main study, the pilot will provide measures including response rates for three proposed sampling strata in addition to the overall rates. The three sampling strata are further described in response to Question 2. Based on our preparation of the sampling frame for this study, we now estimate that the strata sizes are 5.8 million, 0.3 million, and 114 million households, respectively.


Table 2 provides our planned allocation of the sample to the sampling strata. We have assumed that veteran households will respond to the screener at 50% and non-veteran households at 30%. We have also assumed that stratum 1, stratum 2, and stratum 3 contain 80%, 80%, and 15.8% veteran households, respectively. Under these assumptions, Table 2 gives the total number of households expected to respond to the screener, the number of veteran households expected to respond to the screener (the households eligible for the pilot version of the extended questionnaire), and the expected respondents to the extended questionnaire. The Pilot results will permit us to refine these estimates to guide the design of the 2009 study.




Table 2. Expected Allocation and Sample Sizes for the NSV Pilot Study

Sampling    Sampled      Expected Screening   Expected Screening    Expected Vet HH       Expected Veteran
Strata      Households   Response Rate        Response (# of HHs)   Screening Response    Extended Response
                                                                    (# of HHs)            (# of Veterans)
Stratum 1        2,160        46.0%                   994                  864                  630
Stratum 2          255        46.0%                   117                  102                   74
Stratum 3        7,585        33.2%                 2,516                  600                  438
Total           10,000        36.3%                 3,627                1,566                1,142
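
The expected counts in Table 2 follow directly from the stated assumptions. The short Python sketch below reproduces the table to within rounding; note that the multiplier of roughly 1.042 Veterans per responding veteran household is our inference from the table, not a figure stated in the text.

# Sketch reproducing the Table 2 expectations from the stated assumptions:
# 50% screener response for veteran households, 30% for non-veteran
# households, and a 70% extended response rate. VETS_PER_HH is an assumed
# multiplier (Veterans per responding veteran household) inferred from
# the table itself.
strata = {  # stratum: (sampled households, proportion veteran households)
    1: (2160, 0.80),
    2: (255, 0.80),
    3: (7585, 0.158),
}
VET_RR, NONVET_RR, EXT_RR, VETS_PER_HH = 0.50, 0.30, 0.70, 1.042

for s, (n, p_vet) in strata.items():
    vet_hh = n * p_vet
    nonvet_hh = n - vet_hh
    screen_resp = vet_hh * VET_RR + nonvet_hh * NONVET_RR
    vet_resp = vet_hh * VET_RR
    ext_resp = vet_resp * VETS_PER_HH * EXT_RR
    print(f"Stratum {s}: rate {screen_resp / n:.1%}, responses {screen_resp:.0f}, "
          f"veteran HHs {vet_resp:.0f}, extended {ext_resp:.0f}")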


A previous cycle of the NSV was conducted in 2001. The initial sampling was based on overlapping RDD and list frames. The RDD component had a response rate of 67.6% to the screener, which identified eligible or potentially eligible Veterans. For Veterans determined to be eligible in the RDD component, the response rate to the extended interview was 76.4%. For the list sample, 54.0% were eligible Veterans who completed the interview, 8.8% were determined to be ineligible, and eligibility could not be determined for 34.0%. We have based our estimate of a 70% response rate to the extended interview in part on the 2001 RDD results.



2. Describe the procedures for the collection of information including:


  • Statistical methodology for stratification and sample selection,


  • Estimation procedure,


  • Degree of accuracy needed for the purpose described in the justification,


  • Unusual problems requiring specialized sampling procedures, and


  • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.


The primary frame for the sample will be the Delivery Sequence File (DSF) of the U.S. Postal Service. We will obtain a probability sample of addresses from the DSF from a commercial vendor.


For purposes of stratification, we will also assemble a separate list of addresses based on administrative records from the VA and DoD. VA will help with the initial processing of the VA and DoD/DMDC files by encrypting the Social Security Number across the files into a unique ID. VA will then provide the VA and DoD files containing the unique ID to Westat for additional processing. For purposes of this study, we are using only the Health Care Enrollment file from the VA and the Prior Service Military Address File (PSMAF) from DoD. We will merge the DoD and VA files on the basis of the unique ID to create a file of unique individuals, setting flags to indicate which files contained records for each individual. Finally, we will convert the merged file of individuals into a file of unique addresses where Veterans may live.
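
As an illustration of this list-assembly step, the sketch below (Python with pandas) merges the two files on the encrypted ID, flags the source files, and reduces the result to a file of unique addresses. The file and column names (unique_id, address) are hypothetical placeholders, not the actual file layouts.

import pandas as pd

# Hypothetical file and column names for illustration; the actual VA and
# DoD file layouts will differ.
va = pd.read_csv("va_health_enrollment.csv")   # columns: unique_id, address
dod = pd.read_csv("dod_psmaf.csv")             # columns: unique_id, address

# Merge the two files on the encrypted unique ID, flagging the source
# file(s) that contained a record for each individual.
merged = va.merge(dod, on="unique_id", how="outer",
                  suffixes=("_va", "_dod"), indicator=True)
merged["on_va"] = merged["_merge"].isin(["left_only", "both"])
merged["on_dod"] = merged["_merge"].isin(["right_only", "both"])

# Prefer the VA address when both files carry one, then reduce the file of
# individuals to a file of unique addresses where Veterans may live.
merged["address"] = merged["address_va"].fillna(merged["address_dod"])
addresses = merged.groupby("address", as_index=False)[["on_va", "on_dod"]].any()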


As noted in the answer to Question 1, a goal of the pilot survey will be to evaluate response rates and the relative proportions of Veterans in three primary sampling strata planned for the full 2009 NSV. The three strata are the following:


  1. DSF addresses that can be matched to addresses identified on the VA file;

  2. DSF addresses that can be matched to addresses on the DoD file, but not the VA file;

  3. DSF addresses that cannot be matched to addresses on either the VA or DoD files.


These sampling strata are of interest for the main study because (1) stratum 1 represents a high-yield stratum of households likely to include Veterans who are users of VA services, and (2) stratum 2 should include a number of households with Veterans, particularly young Veterans, who may not yet use VA services. Determining the characteristics and needs of non-users is the primary rationale for the 2009 NSV.


Using an address-matching approach, we will merge the DSF sample obtained from the vendor with the merged VA/DoD files to differentiate strata 1, 2, and 3 on the DSF sample file. We will then subsample the resulting file at varying rates by stratum to yield the designated sample sizes in Table 2. Technically, the design is a two-phase sample: the selection of a sample from the DSF by the vendor constitutes the first phase, and the merging of stratification information from the VA/DoD files and subsequent subsampling constitutes the second phase.


We will be able to compute initial weights based on the inverse probability of selection, and use of these weights will inform us about the effective coverage of the survey, including the impact of non-response. These weights will also be useful in creating weighted non-response rates for the separate strata. We will also implement standard non-response adjustments to the weights for other parts of the analysis.
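
A minimal sketch of the second-phase subsampling and the resulting base weights follows. The first-phase rate and the stratum subsampling rates shown are hypothetical placeholders, not the study's actual design parameters.

import random

# Hypothetical design parameters, for illustration only.
PHASE1_RATE = 1 / 5000                        # vendor's sampling rate from the DSF
PHASE2_RATES = {1: 0.40, 2: 0.90, 3: 0.05}    # second-phase rates, by stratum

def subsample(dsf_sample, seed=2009):
    """Second-phase subsampling at stratum-specific rates."""
    rng = random.Random(seed)
    return [a for a in dsf_sample if rng.random() < PHASE2_RATES[a["stratum"]]]

def base_weight(address):
    """Inverse of the overall (two-phase) probability of selection."""
    return 1.0 / (PHASE1_RATE * PHASE2_RATES[address["stratum"]])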


The sampling methods are standard. Our response to question 16 of Part A describes how the pilot-survey data will be used to estimate the sample design parameters of the 2009 NSV. The pilot will be a one-time survey intended to yield results to improve the quality of the 2009 NSV.


3. Describe methods to maximize response rates and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield "reliable" data that can be generalized to the universe studied.


The purpose of the data collection is to refine the methodology for the National Survey of Veterans. The data collection will implement a two-phase sampling approach. The first phase will send a short (screening) survey to each sampled address. This survey will ask a household respondent to report whether there are any Veterans, spouses of Veterans, or survivors of Veterans living in the household. As noted in response to Question 4 below, we are proposing an experiment that compares two variations of the screener (see Attachments 1 and 2 for the two versions).


Once this first-phase survey is returned, a second (extended) survey will be sent to those households where a Veteran was identified. One questionnaire will be sent for each person identified as a Veteran. This questionnaire will contain questions about the Veteran's knowledge of VA benefits, as well as their use of services (Attachment 3).


Procedures and Methods


The pilot study will use and/or test a number of methods to maximize the response rate and data quality. To maximize the response rate, the survey will make multiple contacts with the household, following the Tailored Design Method recommended by Dillman (2000). The process to be followed is shown in Exhibit 1. The first mailing will be a pre-notification letter that alerts members of the household to the survey (Attachment 4). Approximately 4-5 days later, the survey instrument will be mailed, with a cover letter providing the details about the study (Attachment 5). As noted in response to Question 4 below, the study will experiment with two methods at this initial contact. One method will encourage the use of the web at the initial contact; if a respondent in this experimental group does not have access to the Internet, he/she can call a toll-free number to get a paper version of the instrument. The second method will ask respondents to fill out the paper version of the survey. Attachment 5 includes two versions of the letter, one for each of these approaches. In addition, Attachment 5 provides the insert that will be included in half of the survey requests (see discussion of the experiment in response to Question 4 below).


One week later, a postcard will be sent that thanks those who have returned the questionnaire and reminds those who have not to do so (Attachment 6). Attachment 6 provides two postcards, one for the group asked to respond on the web and one for the group asked to respond by paper survey. If the questionnaire is not returned within two weeks, a third mailing will be sent to the address (Attachment 7). Attachment 7 includes two versions of the cover letter. For the group assigned to the website experimental condition, the mailing will contain a cover letter that provides the URL and password for the web survey, along with a paper version of the instrument. The mailing to the paper survey group will contain a cover letter and a paper questionnaire.


Once a screener is returned, it will be scanned and entered into a database. If a Veteran is identified, the mode preference of the household will be used to decide on the type of request to send to the sampled household. The screener contains a question asking the respondent whether they would prefer to respond by web or by mail. Those who choose the web are asked to provide an e-mail address; those who choose mail will be sent a paper copy of the survey. See Exhibit 2 for the flow of contacts for the extended interview.
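
The routing rule for a returned screener can be summarized in a few lines. The sketch below is illustrative only; the field names are hypothetical, not the actual database schema.

def route_extended_survey(screener):
    """Decide the second-phase contact for a returned screener (sketch;
    field names are hypothetical)."""
    if not screener["veterans_identified"]:
        return None                     # no extended survey for this household
    if screener["mode_preference"] == "web" and screener.get("email"):
        return "email_and_letter"       # web-preference sequence (Attachments 8-10)
    return "paper_questionnaire"        # mail-preference sequence (Attachments 11-13)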


Web Preference. For those choosing the web, an e-mail and a letter will be sent to the respondent. Attachment 8 provides this e-mail and letter. The letter will also contain an insert that encourages the use of the Internet to complete the survey (also in Attachment 8). One week after these materials are sent, a reminder postcard will be sent to the respondent (Attachment 9). Two weeks after the postcard, a second letter will be sent requesting that the respondent fill out the survey (Attachment 10). This request will include a paper survey, while also giving the respondent the option to answer on the web.


Mail Preference. The sequence of mailings for this group will be very similar to the web preference group. The only difference is that this group will not be given an option to complete the survey on the Internet. They will be sent a paper questionnaire at each of the mailings. Attachments 11 – 13 provide the letters for this group of respondents.


Reliability


The proposed data collection is being done to address two questions. The first is to assess the response rates and the overall proportion of Veterans responding to the survey under the approach outlined in responses to Questions 2 and 3 above. To evaluate this question, two different analyses will be done. One will be to estimate the overall response rate to the screening interview, as well as the response rates for the different sampling strata. The response rates observed in the pilot will serve as estimates of the future response rates for the full 2009 study. To evaluate the reliability of this prediction, the response rates observed in the pilot can be viewed as random variables and their standard errors estimated. The resulting standard errors measure the potential variation in the response rates over repetitions of the pilot sample design.


Under the sampling plan and assumed response rates discussed in response to Question 1, the standard errors for different estimates of the response rates are shown in Table 3. The precision of the response rates is well within what is needed to make decisions about the main survey.




Table 3. Expected Standard Errors for Estimates of Response Rates

                          Expected Screening    Standard
                          Response Rate         Error
Overall Response Rate          36.3%              0.5%
By Stratum
  Stratum 1                    46.0%              1.1%
  Stratum 2                    46.0%              3.1%
  Stratum 3                    33.2%              0.5%
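
These values are consistent with treating each stratum's screening response as binomial. A minimal sketch, assuming simple random sampling within stratum and ignoring any design effects, reproduces Table 3 from the sample sizes and expected rates in Table 2.

from math import sqrt

# Sample sizes and expected screening response rates, from Table 2.
strata = {1: (2160, 0.460), 2: (255, 0.460), 3: (7585, 0.332)}

var_total, n_total = 0.0, 0
for s, (n, p) in strata.items():
    se = sqrt(p * (1 - p) / n)
    print(f"Stratum {s}: SE = {se:.1%}")   # prints 1.1%, 3.1%, 0.5%
    var_total += n * p * (1 - p)           # stratum contribution to the total
    n_total += n

overall_se = sqrt(var_total) / n_total     # SE of the pooled response rate
print(f"Overall: SE = {overall_se:.1%}")   # prints 0.5%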



Because Veterans are the target population for the pilot study and the primary target population for the 2009 NSV, a key purpose of the pilot is to measure how well the sampling and survey strategy represents the veteran population. As previously noted, the design anticipates a higher response rate by households with Veterans than by those without. To measure the degree of success in covering the veteran population, the identified Veterans in responding households will be weighted by their inverse probabilities of selection, without further adjustment for non-response. The weighted estimate of the number of Veterans in responding households will fall short of the estimated 23.5 million Veterans in the civilian non-institutional population. The shortfall will reflect three primary limitations: (1) any shortfall in estimating the number of households in the U.S. due to coverage errors in the frame, (2) the effect of non-response by veteran households, and (3) any net effect of reporting error by households that might cause some Veterans to be missed. The ratio of the weighted estimate of Veterans from the pilot to the 23.5 million will be termed the effective coverage rate, and it assesses the combined effect of the three limitations. If frame coverage errors and errors in reporting are both minimal, and if the assumption of 50% response by veteran households is correct, then the pilot is expected to produce an estimate of approximately 11.75 million Veterans. But the observed effective coverage rate (a number close to 50%) will be a sample-based estimate and therefore subject to sampling error. Under the design assumptions, the standard error of the observed effective coverage rate is 1.6%, which suggests that the sample size is quite adequate in this respect.
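
In code, the effective coverage rate is simply the weighted count of identified Veterans divided by the external benchmark. A minimal sketch follows; the respondents structure is a hypothetical stand-in for the pilot data file.

BENCHMARK = 23.5e6   # external estimate of Veterans in the civilian
                     # non-institutional population

def effective_coverage_rate(respondents):
    """respondents: iterable of (n_veterans_in_household, base_weight)
    pairs, a hypothetical stand-in for the pilot data file."""
    weighted_veterans = sum(n * w for n, w in respondents)
    return weighted_veterans / BENCHMARK

# Under the design assumptions (50% response by veteran households and
# minimal frame and reporting error), this should come out near 0.50.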


In addition, the effective coverage rate for some subgroups, such as female Veterans, will also be of interest. A similar calculation can be performed for subgroups of the veteran population that can be identified both from the screening questionnaire and from external sources, such as the number of female Veterans or Veterans by age. For subgroups that are 20% of the total veteran population, the standard error of the effective coverage rate will be 3.6%; for a 10% subgroup, it will be 5.2%.


The second question of interest to the study is to assess the different methods for contacting and interviewing Veterans. The design involves three experimental factors, each with two treatments: (1) whether Internet or mail response is initially promoted as the primary response option, (2) whether or not an insert is included in the initial survey request, and (3) which of two questionnaire variants is used to collect veteran status. These factors are described further in response to Question 4 below. The experimental design will permit measurement of the main effect of each factor and provide additional information on their interactions. Table 4 provides the standard errors and expected power for testing the main effect of each factor. For example, when comparing the response option conditions, the standard error of the difference in response rates will be 1.0%, and the standard error of the difference in effective coverage rates for Veterans will be 3.1%. The results can also be expressed as the power of the experiment to detect a statistically significant difference between the treatments. If the true difference in expected response rates (over repetitions of the pilot design) is 1.9%, the experiment will yield a statistically significant difference at the .05 level 50% of the time; if the true difference is 2.7%, the rejection rate (power) rises to 80%. The precision is less for the effective coverage rate of Veterans, where the standard error of the difference will be 3.1%; we will be able to detect real differences of 6.2% and 8.8% with 50% and 80% power, respectively.


Table 4. Standard Errors and Power of Tests Comparing Response and Effective Coverage Rates Under Two Experimental Treatments

Standard Errors and                      Response   Effective Coverage
Detectable Differences                   Rate       Rate of Veterans
Standard Error of Difference             1.0%       3.1%
Detectable Difference with 50% Power     1.9%       6.2%
Detectable Difference with 80% Power     2.7%       8.8%
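
The detectable differences follow from a two-sided test at the .05 level: the difference detectable with 50% power is z(.975) times the standard error of the difference, and with 80% power it is (z(.975) + z(.80)) times that standard error. The sketch below uses the rounded standard errors from Table 4, so it reproduces the table values only to within about 0.1 percentage point; the table itself was presumably computed from unrounded standard errors.

# z-values for a two-sided .05 test and for 80% power.
Z_ALPHA, Z_80 = 1.96, 0.8416

for outcome, se_diff in [("response rate", 0.010),
                         ("effective coverage rate", 0.031)]:
    d50 = Z_ALPHA * se_diff            # difference detectable with 50% power
    d80 = (Z_ALPHA + Z_80) * se_diff   # difference detectable with 80% power
    print(f"{outcome}: SE = {se_diff:.1%}, "
          f"50% power = {d50:.1%}, 80% power = {d80:.1%}")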


Data Quality


Data quality will be assessed in several different ways. One will be to review the completed screeners, checking for missing and unusable information. A second method will be to conduct 100 debriefing interviews with respondents and 20 debriefing interviews with non-respondents. The interviews with respondents will ask how the survey came to their attention, how they determined who should fill out the survey, and whether the correct individuals were selected for and actually filled out the second questionnaire. The non-respondents will be asked about the reasons they did not respond.


4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.


The purpose of the Pilot is to test two main questions:


1. What are the response rates for the screening interview?

2. What are the best methods to screen Veterans?


Question 1: What are the response rates for the screening interview?


This question concerns refining estimates of the productivity of the screening survey in identifying Veterans. The assumption is that the initial screening can be completed through a mail contact, with the screening survey itself completed either on the web or on paper. The pilot will test our assumptions about what response rates to expect from this design. Specifically, at least four questions are of interest:


1. What will be the overall response rate to the screening interview?

2. How will the response rate vary by households that are in the different sampling strata?

3. What is the return rate from Veterans?

4. How many surveys are returned for important subgroups of Veterans?


Each of these questions relates to the ability of the two-phase sampling approach to generate national estimates for the veteran population. The pilot will provide specific answers for planning the full survey. For example, the pilot will provide the response rate for addresses in the different sampling strata (i.e., addresses that matched to the VA file, addresses matched to the DoD files and addresses where there were no matches). In addition, the pilot will provide information on how many Veterans respond within each of these strata. This provides information on how the study will need to allocate the sample in each of the strata for the national survey.


In addition to testing the response rates, the pilot will also yield baseline estimates of how well the survey represents the veteran population. Because the pilot sample will be a nationally representative sample, estimates from the pilot can be compared to national estimates of the number of Veterans in the country.


The reliability of the estimates from the pilot for each of these questions is discussed in answer to Question 3 above.


Question 2: What are the best methods to screen Veterans?


The pilot will be used to test, experimentally, different methods to screen Veterans. The screening methods affect both the number of Veterans identified and the number who may return a full NSV survey when sent a request during the second phase of the interviewing process.


The pilot will experiment with three different aspects of the screening methodology. These factors include:


Factor 1: Mode to use when conducting the screening. This factor will have two conditions:


1. Contact the household by mail and ask a household respondent to fill out the screener on the web. If the respondent cannot (or does not want to) use the web, he/she can call to receive a paper questionnaire. A non-response mailing would provide the respondent with a paper version of the questionnaire.


2. Contact the household by mail and ask a household respondent to fill out a paper screener.


This approach is based on prior research that has found that giving respondents the choice between two modes does not increase response rates, and may in fact decrease them (Schneider, et al., 2005; Griffith, et al., 2001; Gentry and Good, 2008). Condition 1 above is designed to maximize the number of responses that would be done on the web. It also uses the least expensive method first, potentially saving money. However, it may still be less efficient than asking for a response to the paper questionnaire. Some respondents may get frustrated if they cannot, or do not want to, fill out the survey on the web.


Factor 2: Inclusion of an insert to encourage returning the screener. This factor will have two conditions:


1. Inclusion of an insert. To encourage response to the screener, an insert will be attached to the cover letter of the survey (Attachment 5). The insert highlights the time needed to complete the survey and the purpose of the survey. The intent is to pique the respondent's interest in the study without requiring them to read the advance letter.


2. Do not include an insert.


Factor 3: Questionnaire design to measure veteran status. This factor will have two conditions:



1. ACS (American Community Survey) military status and era questions. These items classify individuals as "military", "veteran", or "no military experience". Collecting information on active duty status is similar to the approach taken by the American Community Survey, which collects information on all three of these groups.


2. Abbreviated questions restricted to veteran status. This version would ask directly whether the individual is a Veteran (yes or no); it would not ask about active duty status. By asking more directly about veteran status, respondents may be less likely to be confused about the goal and purpose of the survey.


The full experimental design will have 8 cells (2 x 2 x 2). With a sample of 10,000, approximately 1,250 screeners will be mailed for each combination of conditions in the experiment. The reliability and power of this design are discussed in response to Question 3. Our response to question 16 of Part A provides additional details about the planned analysis of the screener data collection experiments.
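
A minimal sketch of the random assignment of the 10,000 sampled addresses to the eight cells follows; the factor labels are our own shorthand, not the study's terminology.

import itertools
import random

# The three experimental factors, each with two conditions (labels are
# our own shorthand for the factors described above).
FACTORS = {
    "mode": ("web_first", "paper"),
    "insert": ("insert", "no_insert"),
    "veteran_items": ("acs", "abbreviated"),
}
CELLS = list(itertools.product(*FACTORS.values()))   # 8 combinations

def assign_cells(addresses, seed=2009):
    """Randomly assign sampled addresses to the 8 cells, ~n/8 per cell."""
    rng = random.Random(seed)
    shuffled = list(addresses)
    rng.shuffle(shuffled)
    # Deal the shuffled addresses round-robin across the cells, so each
    # cell receives about 1,250 of the 10,000 addresses.
    return {addr: CELLS[i % len(CELLS)] for i, addr in enumerate(shuffled)}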



5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Table 5. Individuals Consulted

Agency   Name                Title                                           Phone Number   Statistical or Analytical
Westat   John Helmick        NSV Project Director                            301-294-2010   Analytical
Westat   Kimya Lee           Senior Study Director                           301-610-5522   Analytical
Westat   Wayne Hintze        NSV Associate Project Director                  301-517-4022   Analytical
Westat   Robert Fay          Senior Statistician                             240-314-2318   Statistical/Analytical
Westat   Richard Sigman      Senior Statistician                             240-453-2783   Statistical/Analytical
Westat   David Cantor        Senior Methodologist                            301-294-2080   Analytical
Westat   Pamela Giambo       Senior Study Director                           240-453-2981   Analytical
Westat   Michele Harmon      Senior Study Director                           301-294-3814   Analytical
Westat   Marianne Winglee    Senior Statistician                             301-517-4169   Analytical
Westat   David Morganstein   Vice President, Director of Statistical Staff   301-251-8215   Statistical
Westat   J. Michael Brick    Vice President, Statistical Group               301-294-2004   Statistical
Westat   Wendy Hicks         Senior Research Associate                       301-251-2299   Statistical
Westat   Douglas Williams    Research Associate                              240-453-2934   Statistical
Westat   Brett McBride       Research Analyst                                301-517-8068   Statistical


References

Battaglia, M.P., Link, M.W., Frankel, M.R., Osborn, L., and Mokdad, A.H. (2008). "An Evaluation of Respondent Selection Methods for Household Mail Surveys." Public Opinion Quarterly 72: 459-469.


Battaglia, M.P., Khare, M., Frankel, M.R., Murray, M.C., Buckley, P., and Peritz, S. (2007). "Response Rates: How Have They Changed and Where Are They Headed?" Pp. 529-560 in J.M. Lepkowski, C. Tucker, J.M. Brick, E. de Leeuw, L. Japec, P.J. Lavrakas, M.W. Link, and R.L. Sangster (Eds.), Advances in Telephone Survey Methodology. New York: John Wiley and Sons, Inc.


Blumberg, S.J., and Luke, J.V. (December 17, 2008). Wireless Substitution: Early Release of Estimates Based on Data from the National Health Interview Survey, January-June 2008. National Center for Health Statistics. Available from http://www.cdc.gov/nchs/nhis.htm.


Curtin, R., Presser, S., and Singer, E. (2005). "Changes in Telephone Survey Nonresponse over the Past Quarter Century." Public Opinion Quarterly 69: 87-98.


Link, M., Battaglia, M.P., Frankel, M.R., Osborn, L., and Mokdad, A.H. (2006). "Address-Based Versus Random-Digit-Dial Surveys: Comparison of Key Health and Risk Indicators." American Journal of Epidemiology 164: 1019-1025.


Westat (2009). Health Information National Trends Survey, Final Report. Prepared for the National Cancer Institute, Bethesda, MD. Accessed March 13, 2009, at http://hints.cancer.gov/hints/docs/HINTS2007FinalReport.pdf.



[1] Approximately 20% of the population has a cell phone and lives in a household without a landline.

[2] It is possible to reduce this screening burden if all of the interviews with cell phone users could be used in the survey estimates. However, to date, the procedures for weighting such a sample have not been resolved.

