

Development of the

Deployment Risk and Resilience Inventory (DRRI)

Follow-up Study, VA Form 10-21087

OMB 2900-0730



B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS


1. Provide a numerical estimate of the potential respondent universe and describe any sampling or other respondent selection method to be used. Data on the number of entities (e.g., households or persons) in the universe and the corresponding sample are to be provided in tabular format for the universe as a whole and for each strata. Indicate expected response rates. If this has been conducted previously include actual response rates achieved.



Initial Data Collection (completed in 2012)



                       Male        Female    Total
Respondent universe    1,340,526   165,683   1,506,209
Corresponding sample   670         837       1,507


The respondent universe for both waves of the initial data collection included all individuals who were deployed to Iraq or Afghanistan since 2001. For Wave I, there were 1,200 potential participants, of whom 202 could not be reached (mailing materials were returned to the research team as undeliverable). Of the 998 remaining potential participants whom we believe were likely to have received the mailing (i.e., mailings were not returned as undeliverable), 85 declined participation by returning an opt-out letter. We received completed surveys from 463 Veterans (59% female, 41% male), yielding a response rate of 46.4%. Of the 463 participants, 71% were Veterans of OEF and 29% were Veterans of OIF. With respect to component, 53% were deployed from Active Duty, 26% from the National Guard, and 20% from the Reserves. The mean age of participants at the time of survey completion was 35.


The number of potential participants for Wave II of the initial data collection was 3,097. Of these, 398 could not be reached (mailing materials were returned to the research team as undeliverable), leaving 2,699 potential participants eligible to receive the survey. Of those whom we believe were likely to have received the survey (i.e., mailings were not returned as undeliverable), 77 declined participation by returning an opt-out letter. We received completed surveys from 1,044 Veterans (54% female, 46% male), yielding a response rate of 38.7%. Of the 1,044 participants, 34% were Veterans of OEF and 66% were Veterans of OIF. With respect to component, 57% were deployed from Active Duty, 26% from the National Guard, and 18% from the Reserves. The mean age of participants at the time of survey completion was 35.




Follow-up Data Collection Proposed for this Revision



                       Male   Female   Total
Respondent universe    480    564      1,044
Corresponding sample   347    407      754


The respondent universe for the follow-up data collection includes participants from the second wave (Wave II) of the initial data collection. All participants from the initial sample who agreed to be re-contacted (approximately 85% of the sample) will be invited to participate in both follow-up data collections. Of the eligible sample, a response rate of approximately 85% (range: 80-90%) is anticipated. We anticipate a high response rate because potential participants have a pre-existing investment in the research, and prior research has shown that response rates are higher for follow-up data collections. For example, among Veterans who completed an initial survey for a recent HSR&D study of OEF/OIF Veterans that applied the same re-contact protocol (i.e., Dr. Susan Eisen's project, IAC 06-259), 86% participated in the follow-up survey.
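For illustration, the arithmetic behind these projections can be reproduced as follows. This is a minimal sketch in Python; the rates are the assumptions stated above, and the resulting figures (710, 754, 799) correspond to the expected range cited in the power discussion below.

    # Projected follow-up sample size, using the figures stated above:
    # 1,044 Wave II completers, ~85% re-contact consent, 80-90% response.
    wave2_completers = 1044
    recontact_rate = 0.85                          # agreed to be re-contacted
    eligible = wave2_completers * recontact_rate   # ~887 invited

    for response_rate in (0.80, 0.85, 0.90):
        print(round(eligible * response_rate))     # prints 710, 754, 799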


  2. Describe the procedures for the collection of information, including:

  • Statistical methodology for stratification and sample selection

  • Estimation procedure

  • Degree of accuracy needed

  • Unusual problems requiring specialized sampling procedures

  • Any use of less frequent than annual data collection to reduce burden


Statistical methodology for stratification and sample selection


The initial project involved administering DRRI scales to two stratified national random samples of 463 (Wave I) and 1,044 (Wave II) OEF/OIF Veterans. In the original data collection, women and National Guard/Reservist personnel were oversampled relative to their representation in the population to enhance dispersion in deployment experiences and thus provide sufficient individual differences for the psychometric analyses (Nunnally & Bernstein, 1994). Relative to women's proportion in the population (11% based on Defense Manpower Data Center [DMDC] figures; C. Park, personal communication, May 2, 2006), women were oversampled to yield a 50% female-50% male gender distribution. Relative to the representation of Regular Active Duty and National Guard/Reservist personnel in the population (70% and 30%, respectively, based on DMDC figures; C. Park, personal communication, May 2, 2006), the full sample consists of approximately 50% Regular Active Duty and 50% National Guard/Reservists. The follow-up data collections involve re-contacting the Wave II sample.


Estimation procedure and Degree of accuracy needed

The sample sizes for the initial data collection waves were purposefully selected to provide sufficient power for each form of data analysis proposed. According to Nunnally and Bernstein (1994), item analyses should proceed using a 10-to-1 respondents-to-items ratio (per construct). This ratio is considered sufficient to achieve stable estimates of item characteristics, especially item-total correlations and internal consistency reliability coefficients. With item sets for DRRI scales ranging from approximately 20 to 35 items, the minimum sample size needed for these analyses is 350 (maximum number of items, 35, x 10 = 350 respondents). For both sets of classical test theory (CTT) analyses (initial psychometric analyses in Wave I and confirmation of psychometric properties in Wave II), data from more than 350 participants were available: 463 participants in the first wave of data collection and 1,044 in the second wave, which is more than sufficient for the planned analyses. The sample sizes were also sufficient to conduct item response theory (IRT) analyses on both waves of data.


The sample size in the second wave of data collection also facilitated the exploration of secondary research questions about military subgroups (e.g., women, Active Duty versus National Guard/Reservist personnel) and other research questions of interest to the research team.


For the follow-up data collections, with an anticipated sample size of approximately 754 (expected range: 710-799) across the time points, we should have more than adequate power to detect associations among deployment stress exposure, mental health symptomatology, post-deployment functioning, and VA service use in both regression- and structural equation modeling (SEM)-based analyses. The regression analyses most prone to Type II errors (i.e., low power) are those examining moderator effects of functioning (T2) on VA service use (T3), including gender differences. Assuming a total of 13 independent variables to represent all possible main effects and interactions, and setting the probability of a Type I error (alpha) at .05, power exceeds .99 to detect a moderate effect size (f² = .15, based on Cohen et al., 2003, p. 95) and exceeds .70 to detect a small effect (f² = .02).
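These figures can be reproduced with a standard noncentral-F power calculation. The sketch below is a minimal Python illustration using scipy, assuming Cohen's formulation of the noncentrality parameter, lambda = f²(u + v + 1); the function name is ours, introduced for illustration.

    # Power of a multiple-regression F test with u tested predictors and
    # n cases, using Cohen's noncentrality lambda = f^2 * (u + v + 1),
    # where v = n - u - 1 is the denominator df.
    from scipy import stats

    def regression_power(f2, u, n, alpha=0.05):
        v = n - u - 1                          # denominator degrees of freedom
        lam = f2 * (u + v + 1)                 # noncentrality parameter
        f_crit = stats.f.ppf(1 - alpha, u, v)  # critical F at the given alpha
        return 1 - stats.ncf.cdf(f_crit, u, v, lam)

    print(regression_power(0.15, 13, 754))  # moderate effect: power > .99
    print(regression_power(0.02, 13, 754))  # small effect: power > .70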


The anticipated sample size should also provide sufficient power for SEM models. To confirm this, power calculations were carried out using Preacher and Coffman's (2006) software to compute statistical power for testing a covariance structure model using RMSEA, the minimum sample size required to achieve acceptable power, and the power for testing the difference between nested models (MacCallum, Browne, & Cai, 2006). This procedure bases the effect size of SEM analyses on the difference in fit of two models: hypothesized RMSEA values for the two models (e.g., the fully and partially mediated models) and the degrees of freedom for each model are used to estimate the sample size needed to achieve a given level of power (e.g., 80%). As recommended by MacCallum and colleagues (2006), we selected RMSEA values in the middle range (.03-.10) with a moderate difference between models (.01-.02). For the model with the largest number of parameters, a sample size of 281 participants is needed to obtain 80% power at an alpha level of .05. Given this requirement, having at least 281 participants in the T2 and T3 samples would provide at least 99% power to compare the fully and partially mediated models. Given our interest in examining gender differences and the possibility that models might be evaluated separately for women and men, we also computed power separately by gender. With the anticipated gender-specific sample sizes, we will have greater than 99% power to test gender differences under multi-group SEM.
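For illustration, the nested-model comparison can be sketched as follows (Python with scipy). Per MacCallum, Browne, and Cai (2006), the chi-square difference statistic is distributed as noncentral chi-square; the degrees of freedom and RMSEA values below are hypothetical stand-ins, since the exact model df are not reported here.

    # Power for comparing nested SEMs via RMSEA (MacCallum, Browne, & Cai,
    # 2006): the difference test has df = df_a - df_b and noncentrality
    # (n - 1) * (df_a * rmsea_a**2 - df_b * rmsea_b**2), where model A is
    # the more constrained (e.g., fully mediated) model.
    from scipy import stats

    def nested_sem_power(n, df_a, rmsea_a, df_b, rmsea_b, alpha=0.05):
        df_diff = df_a - df_b
        lam = (n - 1) * (df_a * rmsea_a**2 - df_b * rmsea_b**2)
        crit = stats.chi2.ppf(1 - alpha, df_diff)
        return 1 - stats.ncx2.cdf(crit, df_diff, lam)

    # Hypothetical middle-range RMSEA values differing by .02; df values
    # are illustrative, not taken from the study models.
    print(nested_sem_power(n=754, df_a=50, rmsea_a=0.08, df_b=48, rmsea_b=0.06))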


In sum, power is more than sufficient in this study to detect any clinically important associations that may exist in the data. Even if response rates fall short of anticipated levels, we should still have sufficient power to investigate the hypotheses under study, as well as a number of exploratory aims.


Unusual problems requiring specialized sampling procedures

There are no unusual problems anticipated for the current study. Procedures used to enhance response rates are described in the next section.


Any use of less frequent than annual data collection to reduce burden

The data collection for the initial OMB-approved project occurred one time only (with two versions of the survey instrument administered to separate samples of participants). The proposed follow-up data collections involve re-assessing the initial Wave II sample at two time points, which will be separated by more than one year, thereby reducing burden.


3. Describe methods to maximize response rate and to deal with issues of non-response. The accuracy and reliability of information collected must be shown to be adequate for intended uses. For collections based on sampling, a special justification must be provided for any collection that will not yield “reliable” data that can be generalized to the universe studied.


As with any study using a survey technique, there are several potential limitations, most of which center on the ability to achieve acceptable response rates. Several steps taken for the original OMB-approved data collection will also be applied to the proposed follow-up data collections to maximize response rates. Perhaps the greatest constraint is the amount of time that one can reasonably expect a respondent to contribute to a study. The estimated time burden did not exceed one hour for either version of the survey administered in the initial data collection, nor does it exceed one hour for the proposed follow-up data collections, which involve administering shorter surveys than were administered in either of the initial data collections. Therefore, it is not anticipated that the length of the survey will be a problem for either of the follow-up data collections. The budget for the follow-up data collections also includes a $25 token of appreciation to offer all potential participants, which should further enhance response rates.


In addition to sensitivity to survey length, the application of a widely accepted multi-stage mailing procedure that was used with success in prior research should further enhance response rates. A modification of Dillman's (2007) well-regarded mail survey procedure was applied to both waves of the initial data collection and will also be applied to the proposed follow-up data collections. Specifically, for each wave of data collection, a letter will be mailed as an invitation to participate in the study. The letter will explain the purpose of the study, provide details on procedures used to enhance the confidentiality of all responses provided, emphasize the voluntary nature of participation, state an estimated time to complete the survey instrument, provide a mechanism to withdraw prior to receiving the questionnaire, emphasize that the interest is in group data and not a particular person's individual standing, provide information on risks and benefits, and conform to standards for the protection of human subjects. A postcard that can be returned to indicate that an individual does not want to be contacted again will also be included in this mailing. Approximately two weeks later, all potential participants will receive the assessment package with a cover letter that reiterates the points included in the introductory letter. A cover page detailing all elements of consent will be appended to the beginning of the questionnaire. A brief demographic sheet will also be included to obtain data on background and military characteristics for the purpose of describing the sample and making group comparisons. Consistent with Dillman's (2007) recommendations for repeated contacts with targeted respondents, a reminder postcard will be mailed two weeks later, followed by a second mailing of the assessment package to non-respondents two weeks after that and a final reminder postcard two weeks later. Consistent with evidence that response rates are better when incentives are used, the first mailing of the survey will also include a token of our appreciation in the amount of $25.
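For clarity, the contact schedule can be summarized as follows. This is a minimal sketch in Python; the start date and the uniform two-week spacing are illustrative assumptions consistent with the procedure above.

    # Illustrative timeline for the Dillman-style multi-stage mailing
    # described above, with contacts spaced two weeks apart (start date
    # hypothetical).
    from datetime import date, timedelta

    contacts = [
        "invitation letter with opt-out postcard",
        "assessment package, cover letter, and $25 token of appreciation",
        "reminder postcard",
        "second assessment package (non-respondents only)",
        "final reminder postcard",
    ]
    start = date(2013, 1, 7)  # hypothetical first mailing date
    for i, step in enumerate(contacts):
        print(start + timedelta(weeks=2 * i), "-", step)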


The sampling frame for both waves of the initial data collection consisted of a study panel secured from DMDC. Once the sampling frame was secured, names and Social Security numbers were submitted for an Internal Revenue Service (IRS) address search, through a Department of Veterans Affairs Environmental Epidemiology Service (EES) interagency agreement with the IRS. This method was highly effective in obtaining valid addresses for the development of the DRRI and enabled the investigators to reach more participants than needed for the study. The follow-up data collections will involve inviting participants from the second sample (Wave II) of the initial data collection.


The application of strategies to reduce nonresponse, along with the use of a stratified sampling plan in the original data collection to assure adequate numbers of persons with and without health complaints within important demographic groupings (i.e., 50% women and 50% men; 50% Regular Active Duty personnel and 50% National Guard/Reservist personnel), should provide a sample with sufficient dispersion, for the planned psychometric analyses, on the deployment risk and resilience factors that are the focus of this investigation.


For a psychometric project of this nature, representativeness of the sample with respect to the population is of less concern than achieving a sample that has broad dispersion (i.e., a wide range of scores) on the attributes that are the focus of the psychometric inquiry and ample representation of the kinds of persons for whom the instrument is intended (Anastasi, 1982; Nunnally, 1978). However, it is still possible that respondents will differ from non-respondents on key variables, which would introduce nonresponse bias. As noted by Kessler, Little, and Groves (1995), a primary approach to dealing with nonresponse bias is to implement data collection strategies that reduce the nonresponse rate. Accordingly, we have incorporated a number of techniques that have been shown to increase participation (Bose, 2001; Dillman, 2007; Fink, 2003; Groves, 1989; Mangione, 1998). These strategies include initiating advance contact to notify sampled individuals about the survey, making repeated contacts with potential participants, offering an incentive for participation, and informing potential participants about procedures that have been implemented to enhance anonymity and confidentiality.


As noted by Kessler et al. (1995) and Bose (2001), even with these efforts, it is still possible that respondents will differ from non-respondents on key variables, which would introduce nonresponse bias. Thus, we will take the additional step of using demographic information about the sample that is available from the Defense Manpower Data Center to examine potential differences between respondents and non-respondents on gender, race/ethnicity, age, and Active Duty versus National Guard/Reservist status. Although we do not expect substantial differences between respondents and non-respondents, any significant differences obtained will be noted in all study reports.



4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as an effective means of refining collections to minimize burden and improve utility. Tests must be approved if they call for answers to identical questions of 10 or more individuals.


We will apply a well-established survey methodology using scales that have been tested and validated in prior work; therefore, additional testing will not be undertaken.


5. Provide the name and telephone number of individuals consulted on statistical aspects of the design and the name of the agency unit, contractor(s), grantee(s), or other person(s) who will actually collect and/or analyze the information for the agency.


Dr. Lynda King, a research psychologist at the Women's Health Sciences Division of the National Center for PTSD, and Dr. Daniel King, a research psychologist at the Behavioral Science Division of the National Center for PTSD, were consulted on all statistical aspects of the design. Their work telephone number is (857) 364-4938.


Dr. Dawne Vogt, a research psychologist at the Women's Health Sciences Division of the National Center for PTSD, is the lead principal investigator on this project, and is responsible for directing collection and analysis of the data. Her telephone number is (857) 364-5976.


Dr. Brian Smith, also a research psychologist at the Women’s Health Sciences Division of the National Center for PTSD, will be working directly with Dr. Vogt to manage data collection and analysis, as well as reporting of the results. Dr. Smith’s telephone number is (857) 364-6196.

All data will be collected by the research team at the Women's Health Sciences Division of the National Center for PTSD in the VA Boston Healthcare System.


