Rev SF83iSupporting Statement PartB 6-29-07 (2)

2007 Veteran Burial Benefits Survey

OMB: 2900-0705


Supporting Statement For Paperwork Reduction Act Submission

Veterans Burial Benefits Program Evaluation Survey

Department Of Veterans Affairs Office Of Policy And Planning

B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

1. Provide a numerical estimate of the potential respondent universe and describe any sampling or other respondent selection method to be used. Data on the number of entities (e.g., households or persons) in the universe and the corresponding sample are to be provided in tabular format for the universe as a whole and for each stratum. Indicate expected response rates. If this has been conducted previously include actual response rates achieved.

We will draw a large, representative sample of veterans from all major Service periods back to World War II, to include those who intend to participate in the VA Burial Program and those who do not. We will use VA’s Beneficiary Identification and Records Locator Subsystem, or BIRLS, for the initial sample, and validate the addresses through a data match with databases containing 19 billion records available through ChoicePoint. The sampling plan will employ two major strata:

  • The five geographically based Memorial Service Networks (MSNs)

  • Periods of service, divided into five categories:

    • World War II

    • After World War II and including Korean War

    • Peacetime between the Korean War and Vietnam conflict

    • The Vietnam conflict

    • After Vietnam conflict to current Gulf War

The following discussion addresses the issues of stratification, selection, and statistical power. These assumptions are applicable to all research questions that will use data from the survey.

The populations of veterans in each “cell” of these two strata are shown in Exhibit 10:

Exhibit 10: Veteran Populations by MSN and Period of Service (Sampling Universe)

| MSN | After Vietnam to Present Gulf War | Vietnam War | Peace Time between Korea and Vietnam War | After WWII and Including Korea | World War II | MSN Totals |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 830,419 | 2,533,962 | 668,477 | 802,188 | 931,348 | 5,766,393 |
| 2 | 1,058,705 | 2,562,412 | 571,382 | 740,443 | 770,444 | 5,703,386 |
| 3 | 660,276 | 1,564,500 | 324,357 | 387,417 | 400,806 | 3,337,355 |
| 4 | 791,266 | 2,474,339 | 627,559 | 731,183 | 793,318 | 5,417,665 |
| 5 | 764,133 | 2,126,880 | 476,901 | 577,922 | 622,700 | 4,568,536 |
| Period of Service Totals | 4,104,799 | 11,262,092 | 2,668,677 | 3,239,153 | 3,518,615 | 24,793,336 |

The requisite sample sizes are determined by two factors: (1) the desired size of the confidence interval, and (2) the Standard Error (SE). The formula for calculating the sample size (Exhibit 11) is:

Exhibit 11: Formula for Calculating Sample Size

n = [N × P(y) × P(1 − y)] / [(N − 1) × SE² + P(y) × P(1 − y)]

Where N is the population size, n is the sample size, P(y) is the probability of a dichotomous variable, P(1 − y) is the complementary probability such that P(y) + P(1 − y) = 1.00 (we used P(y) = 0.50), and SE² is the Standard Error squared.

For the purposes of calculating sample size, we use 0.50 for P(y) and P(1-y), a very conservative estimate.

The formula for the standard error squared is:

SE² = (acceptable error / Z)²

We are choosing a 95% Confidence Interval, a customary band in applied survey research. The Z value for a 95% confidence level is 1.96, and the acceptable level of error is plus or minus 5 percentage points. Therefore, the standard error is calculated as:

SE = 0.05 / 1.96 ≈ 0.0255, so SE² ≈ 0.00065

Once the sample size needed to reach the required confidence level is computed, the number of surveys to be distributed is calculated by dividing the sample size by the estimated percentage of surveys that would be returned, or the effective response rate. The formula is:

Surveys to distribute = n / effective response rate

Exhibit 12 (below) presents an example of the sample size calculation for a particular stratum from Exhibit 10, as well as the calculation of the number of surveys that must be distributed in order to achieve the final sample size.

Exhibit 12: Example of Sample Size Calculation

The number of Vietnam Era veterans in MSN 1 is 2,533,962. Applying the formula for the calculation of sample size, we get:

n = (2,533,962 × 0.50 × 0.50) / [(2,533,962 − 1) × 0.00065 + 0.25] ≈ 385

Applying the formula to calculate the number of surveys to be distributed (assuming a 40% effective response rate):

Surveys to distribute = 385 / 0.40 ≈ 963
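The calculation above can be sketched in a few lines of Python. The function names are our own; the formula is the finite-population expression from Exhibit 11:

```python
import math

def sample_size(N, p=0.50, error=0.05, z=1.96):
    """Finite-population sample size for a dichotomous variable.

    N: population size; p: assumed proportion (0.50 is most conservative);
    error: acceptable margin (plus or minus 5 points); z: 95% confidence value.
    """
    se2 = (error / z) ** 2                      # standard error squared
    q = 1.0 - p                                 # complementary probability
    n = (N * p * q) / ((N - 1) * se2 + p * q)
    return math.ceil(n)

def surveys_to_distribute(n, response_rate=0.40):
    """Surveys mailed = required sample / expected response rate."""
    return math.ceil(n / response_rate)

n = sample_size(2_533_962)                      # Vietnam Era veterans in MSN 1
print(n, surveys_to_distribute(n))              # 385 963
```

The result reproduces the per-cell figures used in Exhibits 13 and 14.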



Our target population includes veterans living in private households in the US and Puerto Rico whose service separation took place between World War II and the present day. The sampling frame is the BIRLS database of all veteran beneficiary records available to us having an SSN. The sampling frame will be stratified first by period of service (World War II; after World War II and including the Korean War; peacetime between the Korean War and the Vietnam conflict; the Vietnam conflict; after Vietnam to the current Gulf War), and then by MSN (the five geographically based Memorial Service Networks).

Our sample design calls for 95 percent confidence intervals (plus or minus 5 percentage points) for proportion estimates for each of the veteran population subgroups. This results in the table shown in Exhibit 13 below.

Exhibit 13: Respondent Sample Sizes Needed for 95% Confidence Interval by MSN and Period of Service

| MSN | After Vietnam to Present Gulf War | Vietnam War | Peace Time between Korea and Vietnam War | After WWII and Including Korea | World War II | MSN Totals |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 385 | 385 | 385 | 385 | 385 | 1,925 |
| 2 | 385 | 385 | 385 | 385 | 385 | 1,925 |
| 3 | 385 | 385 | 385 | 385 | 385 | 1,925 |
| 4 | 385 | 385 | 385 | 385 | 385 | 1,925 |
| 5 | 385 | 385 | 385 | 385 | 385 | 1,925 |
| Totals | 1,925 | 1,925 | 1,925 | 1,925 | 1,925 | 9,625 |


We are estimating a response rate of 40% given the general reluctance that many people have in considering burial issues, and have increased the target sample proportionately. This is depicted in Exhibit 14 below.

Exhibit 14: Sample Sizes of Target Sample Assuming 40% Response Rate by MSN and Period of Service

| MSN | After Vietnam to Present Gulf War | Vietnam War | Peace Time between Korea and Vietnam War | After WWII and Including Korea | World War II | MSN Totals |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 963 | 963 | 963 | 963 | 963 | 4,815 |
| 2 | 963 | 963 | 963 | 963 | 963 | 4,815 |
| 3 | 963 | 963 | 963 | 963 | 963 | 4,815 |
| 4 | 963 | 963 | 963 | 963 | 963 | 4,815 |
| 5 | 963 | 963 | 963 | 963 | 963 | 4,815 |
| Totals | 4,813 | 4,813 | 4,813 | 4,813 | 4,813 | 24,065 |



We considered three approaches to allocate the total sample across the cells: equal allocation, proportional allocation, and compromise allocation (Kish, 1981).

  1. Equal allocation. Under this approach, the sample is allocated equally to each of the twenty-five cells. The equal allocation approach achieves roughly the same reliability for the different sample cells. This allocation would not capture the population variation between the cells, which would result in large variances for the national-level estimates. We believe this allocation would not be very efficient for our situation.

  2. Proportional allocation. For this approach, the sample is allocated to the cells based on the proportion of the veteran population that each cell represents. Thus, the cells with larger veteran populations would receive the larger share of the sample. The proportional allocation would be the most efficient allocation for the national level estimates because the probabilities of selection are the same for all veterans irrespective of the cell. We propose this allocation because we have reliable proportion estimates based on total population proportions.

  3. Compromise allocation. As the name implies, the compromise allocation is aimed at striking a balance between producing reliable cell estimates (a) and reliable national level estimates (b). Since we already have reliable national level estimates and reliable estimates for each of the cells, we decided not to use the “square root” compromise allocation to allocate the sample across the twenty-five stratified sample cells.

To obtain the cell proportions we divided the population number for each cell by the total population. The resultant proportions were multiplied by the total sample size to obtain the sample size for each cell as shown in Exhibit 15 below.

Exhibit 15: Proportional Sample Size Allocation by MSN and Period of Service

| MSN | After Vietnam to Present Gulf War | Vietnam War | Peace Time between Korea and Vietnam War | After WWII and Including Korea | World War II | MSN Totals |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 806 | 2,459 | 649 | 779 | 904 | 5,597 |
| 2 | 1,027 | 2,486 | 555 | 719 | 748 | 5,535 |
| 3 | 641 | 1,519 | 315 | 376 | 389 | 3,240 |
| 4 | 768 | 2,402 | 609 | 710 | 770 | 5,259 |
| 5 | 742 | 2,064 | 463 | 561 | 604 | 4,434 |
| Totals | 3,984 | 10,930 | 2,591 | 3,145 | 3,415 | 24,065 |
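The proportional allocation can be reproduced with a few lines of Python (the MSN 1 row is shown; cell populations come from Exhibit 10). Because of rounding at the margins, a figure may differ by one survey from the published exhibit:

```python
# Proportional allocation: each cell's share of the 24,065 surveys equals
# its share of the total veteran population.
total_population = 24_793_336
total_sample = 24_065

msn1_cells = {
    "After Vietnam": 830_419,
    "Vietnam War": 2_533_962,
    "Peacetime Korea-Vietnam": 668_477,
    "After WWII incl. Korea": 802_188,
    "World War II": 931_348,
}

allocation = {cell: round(pop / total_population * total_sample)
              for cell, pop in msn1_cells.items()}
print(allocation)
```

The allocation is self-weighting: every veteran has approximately the same overall probability of selection regardless of cell.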

The sample that we have selected is a stratified sample with systematic sampling of veterans from within strata. The strata were defined on the basis of period of service and Memorial Service Network. The first level of stratification was by period of service and then each group was further stratified by Memorial Service Network. Thus, the sample has 25 strata (or cells).

Our plan is to sample veterans within strata at a rate that makes the overall selection probability approximately constant within a single stratum. It is also advisable to select an additional 20 percent reserve sample (4,813) to be used in the event that the yield assumptions do not hold, for a total sample size of 28,878. Stratified sample plus reserve sample numbers per stratum are shown in Exhibit 16 below.

Exhibit 16: Proportional Sample Size Allocation Plus 20% Reserve by MSN and Period of Service

| MSN | After Vietnam to Present Gulf War | Vietnam War | Peace Time between Korea and Vietnam War | After WWII and Including Korea | World War II | MSN Totals |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 967 | 2,951 | 779 | 935 | 1,085 | 6,717 |
| 2 | 1,232 | 2,982 | 666 | 863 | 898 | 6,641 |
| 3 | 769 | 1,823 | 378 | 451 | 467 | 3,888 |
| 4 | 922 | 2,882 | 731 | 852 | 924 | 6,311 |
| 5 | 890 | 2,477 | 556 | 673 | 725 | 5,321 |
| Totals | 4,780 | 13,115 | 3,110 | 3,774 | 4,099 | 28,878 |
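Adding the 20 percent reserve is a direct scaling of each proportional cell allocation. The sketch below applies it to the MSN 1 row of Exhibit 15:

```python
# Scale each cell allocation by 1.20 to include the 20 percent reserve.
msn1_allocation = {
    "After Vietnam": 806,
    "Vietnam War": 2_459,
    "Peacetime Korea-Vietnam": 649,
    "After WWII incl. Korea": 779,
    "World War II": 904,
}

with_reserve = {cell: round(n * 1.20) for cell, n in msn1_allocation.items()}
print(with_reserve)   # reproduces the MSN 1 row of Exhibit 16
```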

Successful execution of the veteran survey will require not only an effective sample design but also careful management of the entire sampling process, from creating the sampling frame to completing data collection. Before each sampling step, project staff will identify the goals, design the process, and prepare detailed specifications for carrying out the procedures. At each stage, quality control procedures are to be designed and carried out to guarantee survey data integrity.

To ensure that the sample remains unbiased during the data collection process, we intend to partition the total sample population into 25 collection batches so that each collection batch will represent a random sample. The small size and independence of sample batches will give us precise control over the sample. During data collection, we will monitor sampling and progress toward our targets. When the completed cases from the released batches reach a cell target, we will move on to the next stratum of the sample.
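The batching scheme can be sketched as follows. The IDs are hypothetical stand-ins for sampled records (the actual frame comes from BIRLS); the point is that shuffling before splitting makes every batch a random subsample:

```python
import random

# Partition a sample into independent random collection batches.
random.seed(2007)                         # fixed seed for a reproducible sketch
sample_ids = list(range(1, 964))          # e.g., 963 sampled records in one cell
random.shuffle(sample_ids)

n_batches = 25
batches = [sample_ids[i::n_batches] for i in range(n_batches)]

# Because the IDs were shuffled first, every batch is itself a random
# subsample, so collection can stop at a cell's target without bias.
print(len(batches), min(len(b) for b in batches), max(len(b) for b in batches))
# 25 38 39
```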

2. Describe the procedures for the collection of information, including: Statistical methodology for stratification and sample selection; the estimation procedure; the degree of accuracy needed for the purpose in the proposed justification; any unusual problems requiring specialized sampling procedures; and any use of periodic (less frequent than annual) data collection cycles to reduce burden.

As stated in the previous section, we are using a 95% confidence interval for categorical variables.

The previous section presents the stratification and allocations of this sample. The sample sizes are designed to obtain point estimates for proportions with a 95 percent confidence interval under the worst-case scenario.

Since there is no “master database” of all U.S. veterans with reliable contact information that includes individuals from all Service periods regardless of their level of past or current interaction with VA, we anticipate the need to sample records from several different sources. These sources include the following databases, each of which provides partial coverage of veterans within certain key strata:

  • VBA Administrative records to examine the cost of funerals for veterans, and the relationship of the costs to the burial and plot allowances

  • BIRLS database to obtain the sample veterans as described in response to the preceding question

  • AMAS and BOSS to obtain a sample of veterans that selected private and national cemeteries for burial

To collect qualitative data and to conduct the conjoint analysis, the contractor, working closely with the project Contracting Officer’s Technical Representative (COTR) and other VA officials, will arrange and conduct two focus groups within each of the five Memorial Service Networks (one veteran next of kin group and one funeral directors group).

The selection of focus group participants will not be random, nor is it meant to be. Focus groups are constructed to elicit in-depth discussions, impressions, and opinions from a group of individuals who are from a specific target group but do not necessarily represent that group.

3. Describe methods used to maximize the response rate and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.

Caliber/ICF, in collaboration with VA, will develop a survey that will be mailed to 24,065 veterans (note: the survey will also be made available to mail recipients via the World Wide Web). This is the sample size without the “reserve sample”. The reserve sample will only be used if required.

Caliber/ICF will distribute the mailed surveys using a modified version of the Dillman method in order to maximize the response rate. The distribution will be conducted as follows:

  1. Mailing 1 – A pre-notification letter.

  2. Mailing 2 – A survey package (i.e., cover letter, survey, business reply envelope, and Web survey instructions).

  3. Mailing 3 – A reminder postcard.

  4. Mailing 4 – A second wave survey package mailed to non-respondents following the reminder postcard (i.e., second wave cover letter, survey, business reply envelope, and Web survey instructions).

  5. Mailing 5 – A final reminder postcard mailed after the second wave survey package.

Data collection (i.e., the receipt and logging of surveys for analysis) will cease two weeks after the final reminder postcard.

The contractor will closely monitor the number of surveys completed and returned via postal mail, and completed via the World Wide Web, and keep VA informed. VA, with the contractor’s input, will decide midway through the data collection period whether or not the “reserve sample” should be “released” and made part of the survey, in order to reach or exceed the sample size needed and given in the previous exhibits. If the decision is made to release the reserve, an additional 4,813 surveys will be mailed to those in the reserve sample.

The paper surveys will be scanned using Optical Mark Recognition (OMR) technology. The data from Web respondents will be merged with the data from the paper surveys to create one consolidated data file.

This data file will be cleaned in several stages. The first stage will examine data that are erroneous and treat them as “missing.” The second stage will look for patterns in responses (e.g., those where the respondent always picks the first choice) and identify the surveys that need to be corrected. The nature of the correction will depend on the pattern discovered. The third stage will look for logic errors on the part of respondents (e.g., mismatches between dates of military service and statements of total years of military service). After cleaning, the data will be ready for analysis.
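The three cleaning stages can be sketched on a toy record. The field names, valid ranges, and flagging rules below are hypothetical illustrations of each stage, not the project's actual edit specifications:

```python
def clean(record):
    """Apply the three cleaning stages to one survey record (a dict)."""
    # Stage 1: erroneous values are treated as missing.
    if record.get("age") is not None and not (18 <= record["age"] <= 110):
        record["age"] = None
    # Stage 2: flag straight-line response patterns for correction.
    items = [record.get(f"q{i}") for i in range(1, 6)]
    record["flag_pattern"] = len(set(items)) == 1
    # Stage 3: flag logic errors, e.g., service dates that contradict the
    # stated total years of military service.
    start, end = record.get("service_start"), record.get("service_end")
    if start and end and record.get("years_served") is not None:
        record["flag_logic"] = abs((end - start) - record["years_served"]) > 1
    else:
        record["flag_logic"] = False
    return record

rec = clean({"age": 150, "q1": 1, "q2": 1, "q3": 1, "q4": 1, "q5": 1,
             "service_start": 1966, "service_end": 1970, "years_served": 12})
print(rec["age"], rec["flag_pattern"], rec["flag_logic"])   # None True True
```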

To augment the results of the survey, Caliber/ICF will conduct a number of focus groups (and/or interviews in some cases) with next of kin and funeral directors. Focus group sessions will last from one to two hours with each group consisting of eight to ten participants, a facilitator, and a note taker/transcriber.

As is true for qualitative research in general, the data from these focus groups are not intended to represent the attitudes of the entire population under study, or to answer “statistical” questions. Rather, the focus group data will be used to provide an in-depth understanding of the major themes (e.g., attitudes toward cremation) collectively expressed by participants, as well as to illustrate trends that emerge from quantitative analyses.

The focus group interactions will be tape recorded and transcribed into electronic form to facilitate qualitative analysis.

Non-response bias refers to the error expected in estimating a population characteristic based on a sample of survey data that under-represents certain types of respondents. Stated more technically, non-response bias is the difference between a survey estimate and the actual population value. Non-response bias associated with an estimate consists of two components – the amount of non-response and the difference in the estimate between the respondents and non-respondents. While high response rates are always desirable in surveys, they do not guarantee low response bias in cases where the respondents and non-respondents are very different. Still, low response rates will further magnify the effects of the difference between respondents and non-respondents that contributes to the bias. Given the increasing use of survey data to inform assessments and performance indicators, it is crucial that we know who completes surveys.
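The two components described above can be illustrated with a toy calculation. All numbers are hypothetical; the point is that bias equals the non-response share times the respondent/non-respondent difference:

```python
# Toy illustration of the two components of non-response bias.
resp_rate = 0.40          # share of the sample that responds
mean_resp = 0.62          # hypothetical estimate among respondents
mean_nonresp = 0.50       # hypothetical (unobserved) value among non-respondents

# The true population value is the response-weighted mixture of both groups.
true_mean = resp_rate * mean_resp + (1 - resp_rate) * mean_nonresp

# Bias of the respondent-only estimate:
# (non-response share) x (respondent minus non-respondent difference).
bias = (1 - resp_rate) * (mean_resp - mean_nonresp)
print(round(true_mean, 3), round(bias, 3))   # 0.548 0.072
```

Note that if respondents and non-respondents were identical (difference of zero), even a 40% response rate would produce no bias; conversely, a large difference produces bias even at high response rates.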

Two types of non-response can affect the interpretation and generalizability of survey data: item non-response and unit non-response. Item non-response occurs when one or more survey items are left blank in an otherwise completed, returned questionnaire. Unit non-response is non-participation by an individual that was intended to be included in the survey sample. Unit non-response – the failure to return a questionnaire – is what is generally recognized as survey non-response bias.

Non-response follow-up (NRFU) analyses can help identify potential sources of bias and can help reassure data users, as well as the agency collecting and releasing the data, of the quality of the data collected. One approach is to conduct a follow-up survey by telephone of a sample of non-respondents to assess differential responses to key survey items. Another approach is to conduct record linkage – using demographic variables from the mailing address file to analyze whether non-respondents differ demographically from respondents. Caliber/ICF will use these proven methods to examine non-response bias in the VA Burial Survey if warranted at the conclusion of the data collection period.

Because it is not always possible to measure the actual bias due to unit non-response, we will employ strategies for reducing non-response bias by maximizing response rates across all types of respondents. In the face of a long-standing trend of declining response rates in survey research (Steeh, 1981; Smith, 1995; Bradburn, 1992; De Leeuw & Heer, 2002; Curtin & Presser, 2005), these strategies include:

  1. Use of notification letters, duplicate survey mailings, reminder letters and postcards.

  2. Use of novelty in correspondence such as reminder postcards designed in eye-catching colors.

  3. Use of an extended survey field-period to afford opportunities to respond for subgroups having a propensity to respond late (e.g., males, young, full-time employed).

  4. Use of well-designed questionnaires and the promise of confidentiality.

  5. Providing a contact name and telephone number for inquiries.

Applying these strategies to the administration of the VA Burial Benefits Survey will be crucial for maximizing response rates across all respondent types. This survey faces several special challenges. First, the questions are of a personal and sensitive nature. Second, a survey about burial intentions will be linked for some to social taboos about death and will encounter the reluctance that many people have in considering these issues. Finally, the survey is likely to have differing degrees of salience – an important factor in inducing survey completion – for respondents depending on age and history of interaction with the VA-supplied services and benefits. Despite these challenges, Caliber/ICF remains confident in our ability to obtain valid and reliable data from which to answer the nine research questions.

4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions of 10 or more individuals.

Prior to any pretests, the survey instrument was reviewed twice at formal meetings with VA stakeholders. Based on those meetings, questions were dropped, others were added, the wording of questions changed significantly, and the order of questions changed as well. The cover letter was put through the same vetting process.

A pretest was conducted with nine veterans. Based on that pretest, the survey instrument was further refined into its present form.

5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.

  • Dr. Christopher Spera, Caliber, an ICF International Company, 703-934-3446

  • Dr. Bradford Booth, Caliber, an ICF International Company, 703-934-3164

  • Mr. Boris Rachev, Caliber, an ICF International Company, 703-934-3618

  • Mr. John Kunz, Caliber, an ICF International Company, 703-934-3627

  • Dr. Ronald Szoc, Caliber, an ICF International Company, 202-362-2586



Appendix A:

Copy of 38 CFR 1.15

Reference the GAO/OMB PART review



Appendix B: Justification for Survey Question about Religious Affiliation.



Appendix C: The Analysis Plan as revised for OMB submission



Appendix D: The cover letters and the survey instrument



Appendix E: The focus group protocols




Appendix F: Description of the Conjoint Analysis Task



As stated earlier, a conjoint analysis task will be conducted with nine or fewer volunteers recruited from the focus groups. In this appendix we provide a more complete description of why we intend to use conjoint analysis, and our rationale and approach for this task.

Conjoint analysis is a statistical technique (and a methodology) for measuring the value and significance of various tradeoffs that respondents “calculate” when presented with multiple choices.

A simple example of a conjoint analysis scenario is that of purchasing an automobile. If rock-bottom price were the only factor involved in deciding on which automobile to purchase, all respondents would be driving KIAs (at the time of this writing) because KIAs are significantly cheaper than any other brand. On the other hand, if safety were the only factor in deciding on the car to purchase, then all consumers might be buying full-size HUMMERs. The reason that the automobile marketplace can support so many different car models, and variations of those models, is that the automobile manufacturers try to appeal to as many factors for as many people as possible.

Conjoint analysis is one technique that permits the measurement of the relative desirability of a product’s attributes when faced with multiple choices.

In the current project, we will use conjoint analysis to measure the optimal blend of various objects and attributes to identify additional symbolic expressions of remembrance and gratitude as well as variations of current symbolic expressions. The objects and attributes are listed in the two tables below:



Exhibit F-1: Current and Potential Symbolic Expressions and Their Attributes (Memorial Medal)

| Level | Metal | Packaging | Inscription |
| --- | --- | --- | --- |
| Level 1 | Gold | None | None |
| Level 2 | Silver | Case | Name |
| Level 3 | Bronze | Framed | Dates |
| Level 4 | | | President’s name |
| Level 5 | | | War period |



Exhibit F-2: Current and Potential Symbolic Expressions and Their Attributes (Additional Flags)

| Level | Size | To Whom |
| --- | --- | --- |
| Level 1 | Same as original | One extra to family |
| Level 2 | Smaller | One for each family member |
| Level 3 | Larger | One to each family member and friend |

We will conduct a full profile conjoint analysis task for the memorial medal and for the additional flag symbolic expressions. This is a task that each participant performs individually. The steps include:

  1. A participant first rates the variations within an attribute. For example, the participant would rate each type of metal for the memorial medal relative to the other metals. This establishes the internal ranking within an attribute.

  2. Then the participant will rate the differences between levels within an attribute.

  3. Then, participants are presented with pairs of attributes/levels, and are asked to rate which of the pairs they are more likely to be satisfied with, or to want.

  4. This will be done for each of the two symbolic expressions.

  5. Finally, participants are presented with choices that contain all the attributes and levels and asked to rate the extent to which they would be most satisfied with a particular combination. (Note: the difference between this step and step 3 is that in step 3, participants are asked to choose one of a pair of alternatives. In step 5, participants assign a number to each possible choice).

  6. All data files resulting from the conjoint analysis procedure will be submitted to VA in SAS.
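As an illustration of how part-worth utilities can be recovered from full-profile ratings, the sketch below dummy-codes two of the medal attributes from Exhibit F-1 and fits a rating model by least squares. The ratings and part-worth values are invented for illustration only; the actual analysis will use participants' responses and may use different estimation software:

```python
import itertools
import numpy as np

# Two attributes from Exhibit F-1; the part-worth utilities are invented.
metals = ["Gold", "Silver", "Bronze"]
packagings = ["None", "Case", "Framed"]
true_pw = {"Gold": 3.0, "Silver": 2.0, "Bronze": 1.0,
           "None": 0.0, "Case": 1.5, "Framed": 2.5}

# Full-profile design: every metal x packaging combination is rated.
profiles = list(itertools.product(metals, packagings))
ratings = np.array([true_pw[m] + true_pw[p] for m, p in profiles])

# Dummy-code the design matrix, dropping one reference level per attribute.
X = np.array([[m == "Silver", m == "Bronze", p == "Case", p == "Framed", 1.0]
              for m, p in profiles], dtype=float)
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)

# Each coefficient is a part-worth relative to the attribute's reference level.
labels = ["Silver vs Gold", "Bronze vs Gold", "Case vs None", "Framed vs None"]
print(dict(zip(labels, coefs[:4].round(2))))
```

Because the invented ratings are exactly additive, least squares recovers the relative part-worths exactly; real ratings would include noise, and the fitted coefficients would then estimate the tradeoffs respondents make.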


File Type: application/msword
File Title: SUPPORTING STATEMENT
Author: Ronald Szoc, PhD
Last Modified By: vacowilea
File Modified: 2007-07-10
File Created: 2007-07-10
