
Office of Management and Budget (OMB) Clearance Request

for the

Post-Separation Transition Assistance Program (PSTAP) Assessment

Supporting Statement B:
Collections of Information Employing Statistical Methods


  1. Describe (including a numerical estimate) the potential respondent universe and any sampling or other respondent selection methods to be used. Data on the number of entities (e.g., establishments, State and local government units, households, or persons) in the universe covered by the collection and in the corresponding sample are to be provided in tabular form for the universe as a whole and for each of the strata in the proposed sample. Indicate the expected response rates for the collection as a whole. If the collection has been conducted previously, include the actual response rate achieved during the last collection.





This study consists of an annual Cross-Sectional Survey and an annual Longitudinal Survey of Veterans. The surveys are currently approved by OMB under Clearance 2900-0864. The Cross-Sectional Survey is a census of three cohorts of Veterans who separated from service approximately six (6) months, one (1) year, and three (3) years earlier. Veterans who respond “yes” on the Cross-Sectional Survey to being contacted for future surveys become the participants in the Longitudinal Survey.



As this is an ongoing study, several Longitudinal Survey cohorts are already being surveyed. Each year, an additional cohort is added to the Longitudinal Survey from the previous year’s Cross-Sectional Survey 6-month cohort.



Figure A below provides a visual representation of the cohorts. While the cross-sectional study surveys three separate cohorts each year, only the 6-month cohort is invited into the longitudinal study in the following year.




Figure A. Cross-Sectional and Longitudinal Cohorts by Survey Year






Annually, there are roughly 200,000 total separations from the Armed Forces. Each cohort in the study covers a two-month separation window, so each cohort contains roughly 33,130 separated servicemembers.



Given our experience collecting data from 2019 through 2021, we estimate that approximately 15 percent of each cohort, or about 5,000 Veterans, will respond to the PSTAP Cross-Sectional Survey, and that approximately half of respondents (7.5 percent of the cohort, or about 2,500 Veterans) will agree to be contacted for future surveys.



The estimated response rate for the Longitudinal Survey is higher than for the Cross-Sectional Survey because the sample has already responded to the Cross-Sectional Survey and agreed to receive invitations to future surveys. If 2,500 Veterans agree to participate in the follow-up survey, then we expect that 1,250 (an assumed response rate of 50 percent from the starting sample of Veterans who agreed to additional contact) will respond to the Longitudinal Survey.
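These expected sample sizes follow from simple arithmetic. The short sketch below (Python) reproduces the cascade from cohort size to expected Longitudinal Survey respondents; the rates are the approximations cited above.

```python
# Expected respondent cascade using the approximate figures cited above.
cohort_size = 33_130   # separated servicemembers per two-month cohort
cs_rate = 0.15         # Cross-Sectional response rate observed 2019-2021
agree_rate = 0.50      # share of respondents agreeing to future contact
long_rate = 0.50       # assumed Longitudinal Survey response rate

cs_respondents = cohort_size * cs_rate        # ~4,970 (cited as ~5,000)
agree = cs_respondents * agree_rate           # ~2,485 (cited as ~2,500)
long_respondents = agree * long_rate          # ~1,242 (cited as ~1,250)

print(round(cs_respondents), round(agree), round(long_respondents))
```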



Based on this information, a power analysis was conducted to assess the statistical power of responses using Minimum Detectable Differences (MDDs). The MDD is the smallest difference in proportions for an outcome measure (e.g., employment) between the treatment group and the control group that can be detected as statistically significant. The tables assume a 95% confidence level (alpha = .05), a one-tailed significance test, and a treatment group proportion of 70 percent, since the primary outcome measures (e.g., employment) should be in the range of 70 percent or higher. For this study, an MDD of 10 percent or less is considered acceptable for drawing conclusions from the survey responses.



We performed power calculations for two types of likely comparisons of interest. Table A assumes within-cohort comparisons between Veterans who participated in TAP and those who did not. Table B presents power calculations for between-cohort comparisons limited to the subset of Veterans who took TAP in each cohort; this analysis was added because such comparisons are now of analytic interest. Each table provides MDDs for two scenarios. In the first row (Scenario I), we assume 80 percent of cases are in the treatment group (i.e., participated in TAP) and 20 percent are in the control group (did not take TAP). In the second row (Scenario II), we assume 85 percent of cases in the treatment group and 15 percent in the control group. For both scenarios, we provide MDDs for response rates of 35 percent, 28 percent, and 20 percent, corresponding to a best-case, expected, and worst-case scenario, respectively.

Table A. Minimum Detectable Differences, 95% confidence level (alpha = .05) for within-cohort comparisons*

Respondent counts (N) assume 35%, 28%, and 20% response rates from a starting sample of 4,970.

Subgroup proportions (treatment vs. control)    N = 1,740    N = 1,392    N = 994
Scenario I: 80%, 20%                            7.9%         8.8%         10.5%
Scenario II: 85%, 15%                           8.8%         9.9%         11.7%

*MDDs are for one-tailed comparisons.

Table B. Minimum Detectable Differences, 95% confidence level (alpha = .05) for between-cohort comparisons*

Respondent counts (N, TAP participants per cohort) assume 35%, 28%, and 20% response rates from a starting sample of 4,970 per cohort.

Subgroup proportions (treatment vs. control)    35% response rate    28% response rate    20% response rate
Scenario I: 80%, 20%                            N = 1,392: 5.0%      N = 1,113: 5.6%      N = 795: 6.6%
Scenario II: 85%, 15%                           N = 1,479: 4.8%      N = 1,183: 5.4%      N = 845: 6.4%

*MDDs are for one-tailed comparisons.
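For reference, MDDs of the kind shown in Tables A and B can be approximated with the standard normal-approximation formula for comparing two proportions. This statement does not specify the assumed statistical power level or the software used, so the sketch below, which assumes 80 percent power, is illustrative only and will not reproduce the tabled values exactly.

```python
from scipy.stats import norm

def mdd(n_total, treat_share, p_treat=0.70, alpha=0.05, power=0.80):
    """Approximate minimum detectable difference for a one-tailed,
    two-proportion comparison (normal approximation, using the
    treatment proportion for both variance terms)."""
    n_treat = n_total * treat_share
    n_ctrl = n_total * (1 - treat_share)
    z = norm.ppf(1 - alpha) + norm.ppf(power)  # one-tailed critical values
    se = (p_treat * (1 - p_treat) * (1 / n_treat + 1 / n_ctrl)) ** 0.5
    return z * se

# Scenario I (80% treatment / 20% control) at the Table A sample sizes.
for n in (1_740, 1_392, 994):
    print(f"N = {n:,}: MDD ~ {mdd(n, 0.80):.1%}")
```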

As Tables A and B show, the assumed response rates yield MDDs of 10 percent or less under nearly all scenarios. Based on the power analysis, roughly 1,080 respondents per cohort are needed to ensure statistically valid within-cohort analysis, and roughly 450 for between-cohort analyses of TAP participants. These thresholds will be met even after accounting for attrition, as shown in Table C. If the Contractor anticipates that a threshold will not be met, additional individuals will be added to the sample as discussed in the following section.



The PSTAP Assessment is fielded no more than once in a 12-month period. For the purposes of this information collection request (ICR) renewal, the three-year average annual burden calculation is shown in Table C below. Attrition is a concern in all longitudinal studies. It can often be mitigated by techniques such as incentives, personal interviews, and telephone reminders, but these carry an additional monetary burden. Therefore, the second and third waves of data collection (i.e., Year 2 and Year 3) are estimated using data from past years of the PSTAP Assessment.

Table C. Average Annual Hourly Burden Calculation

Cross-Sectional Survey

          Cohorts 1-3              Cohorts 4-6              Cohorts 7-9
          Retention*  Responses    Retention*  Responses    Retention*  Responses    Minutes    Hourly Burden
Year 1    n/a         15,250       --          --           --          --           18.5       4,702
Year 2    --          --           n/a         15,000       --          --           18.5       4,625
Year 3    --          --           --          --           n/a         14,750       18.5       4,548

Average Annual CS Burden: 4,625
Total CS Respondents: 45,000

Longitudinal Survey

          Cohorts 1-5              Cohort 6                 Cohort 7
          Retention*  Responses    Retention*  Responses    Retention*  Responses    Minutes    Hourly Burden
Year 1    n/a         3,800        --          --           --          --           18.5       1,172
Year 2    88%         3,344        n/a         950          --          --           18.5       1,324
Year 3    81%         3,078        88%         836          n/a         925          18.5       1,492

Average Annual Long Burden: 1,329
Total Long Respondents: 12,933

Total Respondents: 57,933
Total Annual Respondents: 19,311
Total Annual Burden: 5,954

*Retention rates are based on previous years of the PSTAP Assessment.

Based on expected response and retention rates for the study, the average annual burden for the PSTAP Assessment is 5,954 hours. The retention rates were developed from a review of retention in the Vocational Rehabilitation and Employment (VR&E) longitudinal study currently being conducted by VA, as well as previous years of the PSTAP Assessment.
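Each burden figure in Table C is the number of responses multiplied by the 18.5-minute completion time and converted to hours. The short sketch below (Python) reproduces the Table C totals.

```python
# Reproducing the Table C burden arithmetic: responses x 18.5 minutes / 60.
MINUTES_PER_RESPONSE = 18.5

cs_by_year = [15_250, 15_000, 14_750]                   # Cross-Sectional, Years 1-3
long_by_year = [3_800, 3_344 + 950, 3_078 + 836 + 925]  # Longitudinal, Years 1-3

cs_hours = [n * MINUTES_PER_RESPONSE / 60 for n in cs_by_year]      # 4,702; 4,625; 4,548
long_hours = [n * MINUTES_PER_RESPONSE / 60 for n in long_by_year]  # 1,172; 1,324; 1,492

print(round(sum(cs_hours) / 3))                       # 4,625 average annual CS burden
print(round(sum(long_hours) / 3))                     # 1,329 average annual Long burden
print(round((sum(cs_hours) + sum(long_hours)) / 3))   # 5,954 total annual burden
```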





  2. Describe the procedures for the collection of information including:

    • Statistical methodology for stratification and sample selection;

    • Estimation procedure;

    • Degree of accuracy needed for the purpose described in the justification;

    • Unusual problems requiring specialized sampling procedures; and

    • Any use of periodic (less frequent than annual) data collection cycles to reduce burden.



As discussed in the previous question (Item 1), the study solicits responses from Veterans through two surveys. The Cross-Sectional Survey is a census of three cohorts of Veterans who separated from service approximately six (6) months, one (1) year, and three (3) years earlier. As a result, the Contractor does not use sampling or stratification procedures to identify participants. Participants in the Longitudinal Survey are the subset of Cross-Sectional Survey respondents who indicated on that survey that they would be willing to be contacted for future surveys.



Because the longitudinal study depends on responses to the Cross-Sectional Survey to generate its pool of potential respondents, lower than expected Cross-Sectional response rates could result in a longitudinal sample too small to meet the sample size required by the power analysis. As shown above, current response rates allow the Contractor to conduct a thorough analysis of Longitudinal Survey responses and draw accurate conclusions. To ensure an adequate longitudinal sample size, the Contractor will monitor responses to the Cross-Sectional Survey and, if response rates are too low, follow a contingency plan to recruit more potential respondents for the longitudinal study. The contingency plan consists of two strategies. First, the Contractor will re-contact Cross-Sectional Survey respondents who opted out of the longitudinal study to ask whether they would reconsider; this re-contact will consist of an email reiterating the importance of the study to Veterans. Second, the Contractor will attempt to re-contact non-respondents to the Cross-Sectional Survey using their post-separation email addresses. This additional survey mode should generate responses beyond those generated by the postcard and mail survey, particularly among Veterans who have recently moved and do not have a stable home address.



Post-stratification weights shall be used, drawing upon the population file to provide control totals; more detail is provided in the following item. For estimation of frequencies, means, cross tabulations, and other statistics, the Contractor will utilize the post-stratification weights. The Contractor will estimate weighted statistics representative of the population and will include the weighted standard errors associated with each estimate. The Contractor will also produce subgroup analyses. For analyses comparing subgroups, differences shall undergo significance testing at the 95% confidence level.
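As an illustration of this weighting step, the sketch below constructs simple post-stratification weights from hypothetical cell counts. The actual stratification variables and software are not specified in this statement, so the variable names ("branch", "pay_grade_group") and counts are placeholders.

```python
import pandas as pd

# Hypothetical respondent counts and population control totals by cell;
# the stratification variables below are placeholders, not the study's actual cells.
respondents = pd.DataFrame({
    "branch":          ["Army", "Army", "Navy", "Navy"],
    "pay_grade_group": ["Enlisted", "Officer", "Enlisted", "Officer"],
    "n_resp":          [900, 150, 600, 100],
})
population = pd.DataFrame({
    "branch":          ["Army", "Army", "Navy", "Navy"],
    "pay_grade_group": ["Enlisted", "Officer", "Enlisted", "Officer"],
    "n_pop":           [7_200, 900, 4_800, 700],
})

# Post-stratification weight = population control total / respondent count, per cell.
cells = respondents.merge(population, on=["branch", "pay_grade_group"])
cells["weight"] = cells["n_pop"] / cells["n_resp"]

# Weighted estimates would then apply these weights to each respondent's record.
print(cells)
```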



The PSTAP Assessment will not be conducted more frequently than once in a 12-month period. At the end of the instrument, Veterans are asked to opt into future waves of data collection and are informed in writing that they will not be contacted more than once in a 12-month period.





  3. Describe methods to maximize response rates and to deal with issues of nonresponse. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data to be generalized to the universe studied.



Nonresponse is mitigated for the PSTAP Assessment in four primary ways: (1) offering multiple survey modes; (2) making multiple contact attempts; (3) minimizing instrument length; and (4) making the paper questionnaire easier to complete by optimizing its visual layout.



Survey response rates in general have been declining over the last few decades. One way to boost response rates is to offer multiple modes. While some modes, such as personal interviews, come at significant cost, a combination of web, paper, and other electronic methods is cost-effective and preserves respondent anonymity. Web and other electronic methods are offered first to reduce the number of paper surveys that must be mailed to Veterans who do not respond to the first attempt. Using these methods for the Longitudinal Survey also reduces respondent burden and allows for follow-up reminders, further reducing the need for paper surveys.



In year 1 of Cross-Sectional Survey deployment, the PSTAP Assessment received lower than expected response rates because of the limited communication methods used. In year 2, the Contractor added email to the contact methods, which increased response rates by roughly 10 percentage points. Email contact continues to be used for the PSTAP Assessment and has sustained these higher response rates.



The final instrument is the culmination of cognitive interviews and close coordination among VA, the Interagency Partners, and the Contractor. As with any survey development effort, there is often a desire to include more questions than is feasible without jeopardizing response rates. At every juncture, coordination ensured that both the concepts being measured and the number of questions were kept to the minimum needed, decreasing respondent burden.



Visual layout can reduce the effort required for a respondent to interpret a question and mark the right category. Techniques such as grids and alternate-row shading decrease this burden.



Despite this multi-pronged strategy, achieving an 80% response rate is unlikely; it has been difficult even for the largest federal surveys. To assess and mitigate potential bias caused by nonresponse, the Contractor will conduct a nonresponse bias analysis (NRBA) and produce nonresponse-adjusted post-stratification weights. The NRBA will draw on demographic information available from the population file (e.g., age, military service branch, grade/rank) as auxiliary variables. It will, at minimum, include the following:



  • Comparisons of response rates across subgroups;

  • Comparisons of the distributions of auxiliary variables for respondents and the full population; and

  • Use of a classification tree algorithm to identify subgroups with low response rates.



Any variable whose distribution differs significantly (typically defined as p < 0.05 for a chi-square test or t-test, as appropriate) between respondents and the full population, or whose subgroup response rates vary significantly, will be considered for use in post-stratification weighting. If many potential weighting variables are identified, priority will be given to those most highly correlated with the primary outcomes of interest. The Contractor will post-stratify the survey weights to the population-file control totals for the selected variables, collapsing small cells as necessary. After weighting, the distributions of all auxiliary variables will again be compared to the corresponding population distributions to verify that the potential for bias has been reduced (i.e., fewer significant differences between the weighted distributions and the population distributions).
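A minimal sketch of the chi-square screening step described above, assuming respondent and population counts for one categorical auxiliary variable are available from the population file (the counts below are hypothetical):

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical counts for one auxiliary variable (e.g., service branch).
population_counts = np.array([12_000, 8_000, 6_000, 4_000])  # full population file
respondent_counts = np.array([1_450, 1_030, 680, 440])       # survey respondents

# Expected respondent counts if respondents mirrored the population distribution.
expected = population_counts / population_counts.sum() * respondent_counts.sum()

stat, p_value = chisquare(f_obs=respondent_counts, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")

# Variables with p < 0.05 would be candidates for post-stratification weighting.
```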





  4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections of information to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions from 10 or more respondents. A proposed test or set of tests may be submitted for approval separately or in combination with the main collection of information.



Both the PSTAP Cross-Sectional and Longitudinal Surveys were pretested using interviews and a web survey questionnaire. Referrals for the pretesting subjects were obtained through the Contractor’s internal recruiting system. In addition, VA provided test subjects to allow further testing with a focus on question wording.



Interviews were conducted with four (4) members of the public, consistent with OMB regulations stating that testing shall not exceed nine (9) members of the public without applying for a generic clearance. These interviews were conducted from July 1, 2019 through August 15, 2019. In addition, program experts reviewed the survey and provided additional input.



The survey pretests were conducted online. Each test subject was sent a link to the online version of the survey, which included all instructions and questions along with a list of reflection questions. The reflection questions asked test respondents how long the survey took to complete and whether the survey design made the questions easy to understand, and invited feedback on specific questions. Both surveys were approved by OMB in previous years.



For this renewal request, the changes made to both surveys were minimal and did not include major question changes; most simply update survey and program language to reflect the evolving nature of TAP. All changes were reviewed by VA personnel. Changes to the surveys can be found in the survey crosswalk Excel files included in this package, along with final copies of each survey.





  5. Provide the name and telephone numbers of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who actually collect and/or analyze the information for the agency.



The PSTAP Assessment is the culmination of significant federal planning and interagency coordination. In October 2018, a contract was awarded to Economic Systems Inc. and its subcontractor, Westat, to develop and maintain the survey instruments and administer the surveys. Key personnel involved in the final stages of the survey instrument design and the statistical planning of the study include:



Veterans Benefits Administration (202-530-9053)

Meredith Bedenbaugh-Thomas, Assistant Director Transition

Kenyonna Power, Contracting Officer’s Representative

William Brinley, Lead Program Analyst



Lynne Kelley, Ph.D., PMWG Chair (703-614-8676)

More than 40 representatives from the 7 federal agencies in this group.



Economic Systems Inc. (703-333-2197)

Jacob Denne, Project Manager

Ali Sayer, Vice President



Westat (301-212-2174)

Jeffrey Taylor, Research Manager

Elizabeth Petraglia, Senior Statistician
