Draft Longitudinal Survey Supporting Statement B 8-14-19


Post Separation TAP Assessment Survey

OMB: 2900-0864


Office of Management and Budget (OMB)

Clearance Request


for the


Post-Separation Transition Assistance Program (PSTAP)

Assessment


Supporting Statement B:


Collections of Information Employing

Statistical Methods



  1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate the expected response rates for the collection as a whole. If the collection has been conducted previously, include the actual response rate achieved during the last collection.





This study is a longitudinal survey of Veterans who agreed to participate by opting in through the cross-sectional survey (OMB Clearance 2900-0864). The cross-sectional survey is a census of three cohorts of Veterans who separated from service approximately six (6) months, one (1) year, and three (3) years prior to fielding. Veterans who respond “yes” to being contacted for future surveys in the cross-sectional survey will be the participants for the longitudinal survey.



For years two and three of the longitudinal survey, the six (6) month cohort from the previous year’s cross-sectional survey will also be added. The cohorts are outlined graphically below.



Cohort 1: Veterans who separated 5-6 months prior to fielding the 2019 cross-sectional survey in the previous year and agreed to participate in the longitudinal survey.



Cohort 2: Veterans who separated 11-12 months prior to fielding the 2019 cross-sectional survey in the previous year and agreed to participate in the longitudinal survey.



Cohort 3: Veterans who separated 35-36 months prior to fielding the 2019 cross-sectional survey in the previous year and agreed to participate in the longitudinal survey.



Cohort 4: Veterans who separated 5-6 months prior to fielding the 2020 cross-sectional survey in the previous year and agreed to participate in the longitudinal survey.



Cohort 5: Veterans who separated 5-6 months prior to fielding the 2021 cross-sectional survey in the previous year and agreed to participate in the longitudinal survey.



Figure A below provides a visual representation of the cohorts. While the cross-sectional study surveys three separate cohorts each year, only the 6-month cohort is invited into the longitudinal study in the following year.

Figure A. Cross-Sectional and Longitudinal Cohorts by Survey Year



Annually, there are roughly 198,784 total separations from the Armed Forces. Each cohort in the longitudinal study covers a two-month window, meaning roughly 33,130 separated servicemembers are in each cohort. Responses for Cohorts 1-3 of the cross-sectional study are being collected using postcards and paper surveys. For Cohorts 1-3, it is estimated that 15 percent of each cohort will respond to the PSTAP cross-sectional survey and agree to be contacted for future surveys. This percentage is expected to be larger, at 20 percent, for Cohorts 4 and 5, since email and other electronic methods will also be used to compile the longitudinal sample. It is estimated that each of Cohorts 1-3 will include roughly 4,970 Veterans, while Cohorts 4 and 5 will each include roughly 6,626 Veterans, for the initial longitudinal survey.
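The cohort-size arithmetic above can be sketched in a few lines. The inputs are the figures quoted in the text; the rounding convention is our assumption:

```python
# Sketch of the cohort-size arithmetic; input figures come from the text above.
ANNUAL_SEPARATIONS = 198_784                          # total separations per year

# Each cohort covers a two-month separation window.
cohort_size = ANNUAL_SEPARATIONS // 12 * 2            # roughly 33,130

# Opt-in rates: 15% via postcard/paper (Cohorts 1-3), 20% once email and
# other electronic methods are added (Cohorts 4-5). Rounding to the nearest
# whole Veteran is our assumption.
opt_in_mail = int(cohort_size * 0.15 + 0.5)           # roughly 4,970
opt_in_electronic = int(cohort_size * 0.20 + 0.5)     # roughly 6,626
```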



The estimated response rate for the longitudinal survey is higher than for the cross-sectional survey, since the sample has already responded to the cross-sectional survey and agreed to be contacted for future surveys. A 20% response rate is estimated for the initial Web invitation and a 10% response rate for the planned paper survey follow-up to Web nonrespondents, for overall estimates of (4,970 × 0.2) + (3,976 × 0.1) ≈ 1,392 responses in each of Cohorts 1-3, and (6,626 × 0.2) + (5,301 × 0.1) ≈ 1,855 responses in each of Cohorts 4-5.
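These estimates follow from a two-stage calculation in which the paper follow-up goes only to Web nonrespondents (this sequencing is implied by the 3,976 and 5,301 figures):

```python
def expected_responses(sample: float, web_rate: float = 0.20,
                       paper_rate: float = 0.10) -> float:
    """Web invitation first; paper follow-up only to web nonrespondents."""
    web = sample * web_rate
    paper = (sample - web) * paper_rate
    return web + paper

per_cohort_1_3 = expected_responses(4_970)   # about 1,392 responses
per_cohort_4_5 = expected_responses(6_626)   # about 1,855 responses
```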



Based on this information, a power analysis was conducted to assess the statistical power of responses using Minimum Detectable Differences (MDDs). The MDD is the smallest difference in proportions for an outcome measure (e.g., employment) between the treatment group and the control group that can be detected as statistically significant. The tables below assume a 95% confidence level (alpha = .05), a one-tailed significance test, and a treatment group proportion of 70 percent, since the primary outcome measures (e.g., employment) should be in the range of 70 percent or higher. For this study, an MDD of 10 percent or less will be acceptable to draw conclusions from the survey responses.



Each table below provides MDDs for two scenarios. In the first row (Scenario 1) we assume 80 percent of cases are in the treatment group (i.e., participated in TAP) and 20 percent in the control group (did not take TAP). In the second row (Scenario 2) we assume 85 percent of cases in the treatment group and 15 percent in the control group. For both scenarios, we provide MDDs for response rates of 35 percent, 28 percent, and 20 percent. These rates correspond to a best-case scenario, expected response rate, and worst-case scenario, respectively.

Table A. Minimum Detectable Differences, 95% confidence level (alpha = .05) for one of Cohorts 1-3*

Respondent counts assume 35%, 28%, and 20% response rates from a starting sample of 4,970.

Subgroup proportions (treatment vs. control)    N = 1,740    N = 1,392    N = 994
Scenario I:  80%, 20%                               7.9%         8.8%      10.5%
Scenario II: 85%, 15%                               8.8%         9.9%      11.7%

*MDDs are for one-tailed comparisons

Table B. Minimum Detectable Differences, 95% confidence level (alpha = .05) for one of Cohorts 4-5*

Respondent counts assume 35%, 28%, and 20% response rates from a starting sample of 6,626.

Subgroup proportions (treatment vs. control)    N = 2,319    N = 1,855    N = 1,325
Scenario I:  80%, 20%                               6.8%         7.6%       9.1%
Scenario II: 85%, 15%                               7.6%         8.6%      10.2%

*MDDs are for one-tailed comparisons
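As a rough cross-check of Tables A and B, the MDD for a difference in proportions can be computed with the standard formula sketched below. The document does not state the assumed statistical power, so the 80% power used here is our assumption (as is using the treatment-group proportion's variance for both groups), and the resulting values differ modestly from the tabulated MDDs:

```python
from math import sqrt
from statistics import NormalDist

def mdd(n_total, treat_share, p, alpha=0.05, power=0.80):
    """One-tailed minimum detectable difference in proportions.

    Simplifications: the treatment-group proportion p supplies the variance
    for both groups, and the 80% power level is an assumption not stated in
    the supporting statement.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha)   # one-tailed critical value
    z_beta = NormalDist().inv_cdf(power)
    n_treat = n_total * treat_share
    n_control = n_total * (1 - treat_share)
    se = sqrt(p * (1 - p) * (1 / n_treat + 1 / n_control))
    return (z_alpha + z_beta) * se

# Scenario I (80/20 split), N = 1,392 respondents, treatment proportion 70%:
# roughly 0.076 under these assumptions (Table A reports 8.8%).
print(round(mdd(1_392, 0.80, 0.70), 3))
```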

As Tables A and B show, the assumed response rates will result in MDDs of 10 percent or less under nearly all scenarios. Based on the power analysis, the number of respondents per cohort needs to be roughly 1,080 to ensure statistically valid data analysis. This threshold will be met even after accounting for attrition, as shown in Table C. In cases where the Contractor does not believe that the threshold will be met, additional individuals will be added to the sample as discussed in the following section.



The PSTAP Longitudinal Assessment will be fielded no more than once in a 12-month period. For the purposes of this information collection request (ICR), the three-year average annual burden calculation is shown in Table C below. Attrition is a concern in all longitudinal studies. It can often be mitigated by techniques that boost participation, such as incentives, personal interviews, and telephone reminders, although these carry an additional monetary burden. Therefore, the second and third waves of data collection (i.e., Year 2 and Year 3) are estimated using data from another longitudinal VA survey of Veterans.

Table C. Average Annual Hourly Burden Calculation

          Cohorts 1-3              Cohort 4                 Cohort 5
Year      Retention*  Responses    Retention*  Responses    Retention*  Responses    Minutes    Hourly Burden
Year 1       n/a        4,100          --          --           --          --         18.5         1,264
Year 2       88%        3,608         n/a        1,855          --          --         18.5         1,684
Year 3       81%        3,321         88%        1,632         n/a        1,855        18.5         2,099

Average Annual Burden: 1,683

*Retention rates have been calculated based on a previous longitudinal study conducted for the VA’s Vocational Rehabilitation and Employment program.

Based on expected response rates and retention rates for the study, the average annual burden for the PSTAP Longitudinal Assessment is 1,683 hours. The retention rates for this study were developed based on a review of the retention rates of the Vocational Rehabilitation and Employment (VR&E) longitudinal study currently being conducted by VA.
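The burden figures in Table C follow directly from the wave response counts and the 18.5-minute completion time; a sketch of the arithmetic:

```python
MINUTES_PER_RESPONSE = 18.5

# Total responses per wave, summed across cohorts as in Table C.
responses = {
    "Year 1": 4_100,                    # Cohorts 1-3
    "Year 2": 3_608 + 1_855,            # Cohorts 1-3 (88% retained) + Cohort 4
    "Year 3": 3_321 + 1_632 + 1_855,    # Cohorts 1-3 + Cohort 4 + Cohort 5
}

burden_hours = {year: n * MINUTES_PER_RESPONSE / 60
                for year, n in responses.items()}
average_annual = sum(burden_hours.values()) / len(burden_hours)  # about 1,683
```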





  2. Describe the procedures for the collection of information including:

    • Statistical methodology for stratification and sample selection;

    • Estimation procedure;

    • Degree of accuracy needed for the purpose described in the justification;

    • Unusual problems requiring specialized sampling procedures; and

    • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.



As discussed in the previous question (Item 1), the longitudinal study will solicit responses from Veterans who indicated on the cross-sectional survey (OMB Clearance 2900-0864) that they would be willing to be contacted for future surveys. The cross-sectional survey is a census of three cohorts of Veterans who separated from service approximately six (6) months, one (1) year, and three (3) years prior to fielding. As a result, the Contractor will not utilize sampling or stratification procedures.



Due to the longitudinal study’s dependence on responses to the cross-sectional survey to generate a set of potential respondents, it is possible that lower than expected response rates to the cross-sectional survey could result in a longitudinal sample that is not large enough to meet the necessary sample size determined by the power analysis. To ensure an adequate longitudinal sample size, the Contractor will monitor the responses to the cross-sectional survey and, if response rates are not large enough, follow a contingency plan to recruit more potential respondents for the longitudinal study. The contingency plan will consist of two strategies. First, the Contractor will re-contact cross-sectional survey respondents who opted out of the longitudinal study to ask them if they would reconsider their decision. This re-contact will consist of an email that reiterates the importance of the study to Veterans. Second, the Contractor will attempt to re-contact non-respondents to the cross-sectional survey using the post-separation email address. This additional survey mode should generate additional responses beyond those generated by the postcard and mail survey, particularly among Veterans who may have recently moved and do not have a stable home address.



Post-stratification weights shall be used, drawing upon the population file to provide control totals; more detail is provided in the following item. For estimation of frequencies, means, cross tabulations, and other statistics, the Contractor will utilize the post-stratification weights. The Contractor will estimate weighted statistics representative of the population and will include the weighted standard errors associated with each estimate. The Contractor will also produce subgroup analyses. For analyses comparing subgroups, differences shall undergo significance testing at the 95% confidence level.



The PSTAP Assessment will not be conducted more frequently than once in a 12-month period. At the end of the instrument, Veterans are requested to opt into future waves of data collection and are informed in writing that they shall not be contacted more than once in a 12-month period.





  3. Describe methods to maximize response rates and to deal with issues of nonresponse. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data to be generalized to the universe studied.



Nonresponse is mitigated for the PSTAP Longitudinal Assessment in four primary ways: (1) offering multiple modes; (2) making multiple contact attempts; (3) minimizing the instrument length; and (4) optimizing the visual layout to increase the ease of completing a paper questionnaire.



Survey response rates in general have been dropping over the last few decades. One way to boost response rates is to offer multiple modes. While some modes, such as personal interviews, come at significant cost, a combination of web, paper, and other electronic methods is both cost-effective and preserves respondent anonymity. Web and other electronic methods are offered first to reduce the number of paper surveys that must be mailed to Veterans who do not respond to the first attempt. Utilizing these methods for the longitudinal survey also reduces respondent burden and allows follow-up reminders, further reducing the need for paper surveys.



The final instrument is the culmination of cognitive interviews as well as close coordination between VA, the Interagency Partners, and the Contractor. As with any survey development effort, there is often a desire to include more questions than is feasible without jeopardizing response rates. At all junctures, there was close coordination to ensure that both the concepts being measured and the number of questions were kept to a minimum to decrease respondent burden.



Visual layout can reduce the effort required by a respondent to encode their responses to a question and mark the right category. Techniques such as grids and alternating shading can decrease this burden.



Despite this multi-pronged strategy, achieving an 80% response rate is unlikely, and has been difficult to achieve for even the largest federal surveys. To assess and mitigate any potential bias caused by nonresponse, the Contractor will conduct a nonresponse-bias analysis (NRBA) and produce nonresponse-adjusted post-stratification weights. The NRBA will draw on demographic information available from the population file (e.g., age, military service branch, grade / rank, etc.) to use as auxiliary variables. It will, at minimum, include the following:



  • Comparison of response rates for different subgroups;

  • Comparisons of the distributions of auxiliary variables for respondents and the full population; and

  • Use of a classification tree algorithm to identify subgroups with low response rates.
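As an illustration of the classification-tree step above, the sketch below fits a shallow decision tree to simulated response indicators. The auxiliary variables (age group, service branch) and all data are synthetic stand-ins, not the actual population file:

```python
# Synthetic illustration: flag subgroups with low response propensity.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2_000
age_group = rng.integers(0, 4, n)     # hypothetical coding, e.g. 0 = youngest
branch = rng.integers(0, 5, n)        # hypothetical service-branch codes

# Simulate a lower response rate in the youngest age group.
p_respond = np.where(age_group == 0, 0.10, 0.30)
responded = rng.random(n) < p_respond

X = np.column_stack([age_group, branch])
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=100, random_state=0)
tree.fit(X, responded)

# Leaves with a low predicted response propensity identify subgroups that
# may need nonresponse weighting adjustments.
print(export_text(tree, feature_names=["age_group", "branch"]))
```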



Any variables for which the distribution differs significantly (typically defined as p<0.05 for a chi-square test or t-test, as appropriate) between respondents and the full population, or for which response rates vary significantly by subgroup, will be considered for use in post-stratification weighting. If many potential weighting variables are identified, priority will be given to variables that are more highly correlated with the primary outcomes of interest. The Contractor will post-stratify the survey weights to the population file control totals for the selected variables, collapsing small cells as necessary. After weighting, the distributions of all auxiliary variables will again be compared to the corresponding population distributions to verify that the potential for bias has been reduced (i.e., fewer significant differences between the weighted distributions and the population distributions).
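The post-stratification step itself reduces to assigning each respondent in cell c the weight N_c / n_c, so that weighted respondent counts reproduce the population control totals. A minimal sketch with hypothetical cells:

```python
from collections import Counter

# Hypothetical control totals from the population file (cells sum to one
# cohort of roughly 33,130) and hypothetical respondent cell labels.
population_totals = {"enlisted": 28_000, "officer": 5_130}
respondents = ["enlisted"] * 900 + ["officer"] * 300

cell_counts = Counter(respondents)
weights = {cell: population_totals[cell] / n_resp
           for cell, n_resp in cell_counts.items()}

# Weighted respondent total reproduces the population control total.
weighted_total = sum(weights[cell] for cell in respondents)
```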





  4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.



The PSTAP Longitudinal Assessment was pretested using interviews and a web survey questionnaire. Referrals for the pretesting subjects were obtained through the Contractor’s internal recruiting system. In addition, VA provided test subjects to allow for additional testing focused on question wording.



Interviews were conducted with four (4) members of the public consistent with OMB regulations that state testing shall not exceed nine (9) members of the public without applying for a generic clearance. These interviews were conducted from July 1, 2019 through August 15, 2019. In addition, program experts reviewed the survey and provided additional input.



The survey pretests were conducted online. Each test subject was sent a link to the online version of the survey, which included all instructions, questions, and a list of reflection questions. The reflection questions asked test respondents for feedback on the time required to complete the survey, on whether the survey’s design made the questions easy to understand, and on specific questions.



Because this assessment is a follow-on to the previously approved cross-sectional survey, many questions are similar or identical to those in the approved survey. Using feedback from the previous survey, the longitudinal survey was edited to reduce respondent confusion and to limit unnecessary text carried over from the cross-sectional survey. One piece of feedback received from testing was that respondents found it difficult to identify their exact income and household earnings. That question was changed to allow respondents to select earnings ranges instead of exact numbers. This change will not affect the Contractor’s ability to conduct analysis.





  5. Provide the name and telephone numbers of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.



The PSTAP Longitudinal Assessment is the culmination of significant federal planning and interagency coordination. In October 2018, a contract was awarded to Economic Systems Inc. and its subcontractor, Westat, to develop the instrument, conduct pretesting, and prepare for submission to OMB. Key personnel involved in the final stages of the design of the survey instrument and statistical planning of the study include:



Veterans Benefits Administration (202-530-9053)

Nathan Williamson, Deputy Director

Kenyonna Power, Contracting Officer’s Representative

William Brinley, Lead Program Analyst



Interagency Performance Management Working Group (PMWG)

Lynne Kelley, Ph.D., PMWG Chair (703-614-8676)

More than 40 representatives from the 7 federal agencies in this group.



Economic Systems Inc. 703-333-2197

Jacob Denne, Project Manager

Ali Sayer, Vice President



Westat 301-212-2174

Jeffrey Taylor, Research Manager

Elizabeth Petraglia, Senior Statistician
