
National Institute for Occupational Safety and Health (NIOSH)

Measuring Well-Being for Total Worker Health®






Supporting Statement – Section B







Submitted: March 7, 2018




















Program Official/Project Officer

Chia-Chia Chang

395 E St SW, 9200

Washington, DC 20201

202-245-0656

[email protected]


SUPPORTING STATEMENT B

Measuring Well-Being for Total Worker Health


B. Collection of Information Employing Statistical Methods


This section provides an overview of the respondent universe and study population for the data collection, and describes the procedures for identifying the study population and collecting the data. There are no unusual problems requiring specialized sampling procedures.


1. Respondent Universe and Respondent Selection


The sample for this survey will be drawn from the existing GfK Knowledge Networks Internet panel (also known as KnowledgePanel®). A random sample will be drawn from all those in the panel who are English speaking, employed, and at least 18 years old.

The completion rate for KnowledgePanel is approximately 65%, with some variation depending on survey length, topic, and other fielding characteristics. In contrast, non-probability, opt-in online panels typically achieve survey completion rates in the 2% to 16% range. GfK will therefore draw a sample of about 1,580 panel members in order to obtain 1,000 completed surveys (1,000 / 0.65 ≈ 1,539, plus a small buffer). More detail on sample size is provided in the following paragraph. Because GfK maintains de-identified demographic data for every panel member, RAND will run appropriate statistical tests to determine whether there are significant differences between responders and non-responders.
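As a rough illustration of this sampling arithmetic (a minimal sketch; the 65% completion rate and the targets are the figures quoted above, while the buffer factor is an assumption added for illustration):

```python
import math

# Figures quoted in the text above
target_completes = 1000      # desired number of completed surveys
completion_rate = 0.65       # typical KnowledgePanel completion rate

# Minimum invitations needed if the completion rate holds exactly
minimum_invites = math.ceil(target_completes / completion_rate)   # 1539

# An assumed ~3% safety buffer brings the draw close to the
# approximately 1,580 panel members described in the text
buffered_invites = math.ceil(minimum_invites * 1.03)

print(minimum_invites, buffered_invites)   # 1539, 1586
```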


Sample size requirements for psychometric analyses are typically calculated based on a subject-to-item ratio. Statistical research on sample size suggests that subject-to-item ratios for factor analyses can range from as small as 2:1 to as large as 100:1. One review of this topic found that 75% of studies had a ratio of 10-20:1; our ratio is 12:1, which is considered reasonable.1 Our sample size was also influenced by budget, so we settled on a sample size that is powered to achieve our psychometric goals. Considering the subgroups that we are examining in the validity tests, we have the needed power to conduct these analyses. For example, it is likely that greater than 10% of the sample will be temporary/contract workers, which is considered large enough to conduct a well-powered ANOVA. We anticipate that at least 30% of workers in the sample will have blue-collar jobs, which is considered large enough to conduct a well-powered t-test (see Supporting Statement A, Section 2, Purpose and Use of Information, Hypothesis 3, in reference to these predictions).
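As one way to sanity-check the blue-collar subgroup claim, the following is a hedged sketch, not the study's formal power analysis: the 300/700 split follows from 1,000 completes and the "at least 30% blue-collar" expectation above, while the standardized effect size of 0.3 and alpha of 0.05 are illustrative assumptions.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed split of 1,000 completes: ~300 blue-collar vs. ~700 other workers
n_blue_collar = 300
n_other = 700

# Illustrative assumptions: standardized effect size d = 0.3, alpha = 0.05
analysis = TTestIndPower()
achieved_power = analysis.solve_power(
    effect_size=0.3,
    nobs1=n_blue_collar,
    ratio=n_other / n_blue_collar,
    alpha=0.05,
)

print(f"Approximate power for the two-group t-test: {achieved_power:.2f}")
```

Under these illustrative assumptions the achieved power is well above 0.9, consistent with the statement that the subgroup comparison is well powered.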


The GfK panel was originally formed in 1999 through random-digit dialing (RDD) and address-based sampling (ABS).2 The sampling frame of residential addresses covers approximately 97% of U.S. households. The sample also includes households that:


  • Have unlisted telephone numbers

  • Do not have landline telephones

  • Are cell phone only

  • Do not have current Internet access

  • Do not have devices to access the Internet


Samples are drawn from among active members using a probability proportional to size (PPS) weighted sampling approach. Individuals may join the panel only after being randomly selected; no one is allowed to “opt in” to the panel. Panel members may complete a maximum of one survey per week; most complete about two surveys per month.
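For readers unfamiliar with PPS selection, the following is a minimal sketch of systematic probability-proportional-to-size sampling; the size measures and sample size are made up for illustration, and GfK's production implementation is not described in this document.

```python
import random

def systematic_pps_sample(sizes, n):
    """Systematic PPS selection: units with larger size measures have
    proportionally higher selection probabilities.
    `sizes` maps unit IDs to their size measures."""
    total = sum(sizes.values())
    step = total / n                        # sampling interval
    start = random.uniform(0, step)         # random start within the first interval
    selection_points = iter(start + i * step for i in range(n))

    selected, cumulative = [], 0.0
    point = next(selection_points)
    for unit, size in sizes.items():
        cumulative += size
        while point is not None and point <= cumulative:
            selected.append(unit)
            point = next(selection_points, None)
    return selected

# Illustrative size measures (e.g., counts per weighting cell)
sizes = {"A": 120, "B": 45, "C": 300, "D": 80, "E": 200}
print(systematic_pps_sample(sizes, n=2))
```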


2. Procedures for the Collection of Information


GfK will field the 20-minute survey to approximately 1,580 panel members from KnowledgePanel. Data collection for the online survey will be conducted by GfK using its standard procedures. Panel members who meet the specified requirements for the study (i.e., English speaking, employed, and at least 18 years old) are randomly invited to complete the survey. When a participant is assigned to a study, they are notified by email that a survey is available to them. Each notification contains a single-use, password-protected link to the survey. Respondents need only the link; the panel does not require them to complete demographic or other background information each time they log in, which reduces respondent burden. Respondents complete their demographic profile only once, when they join the panel. Respondents are also able to complete the survey at their convenience and may refuse any survey they wish.

After three days, automatic email reminders are sent to nonresponding panel members in the sample. If email reminders do not produce a sufficient response, an automated telephone call can be initiated; telephone calls only follow reminder emails and are placed three to four days after the reminder email. As soon as the survey strata reach quota, the survey is marked as complete. The survey stays open as long as needed to reach the desired sample size; most surveys reach their intended sample size within several days.
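The contact cadence described above can be summarized with a small sketch; the dates and the four-day phone offset are illustrative, and the actual timing is managed by GfK's fielding system.

```python
from datetime import date, timedelta

def follow_up_schedule(invite_date: date) -> dict:
    """Sketch of the cadence described above: an email reminder three days
    after the invitation, then an optional automated phone call three to
    four days after the reminder (four days assumed here)."""
    reminder_email = invite_date + timedelta(days=3)
    phone_follow_up = reminder_email + timedelta(days=4)
    return {
        "invitation_email": invite_date,
        "reminder_email": reminder_email,
        "automated_phone_call": phone_follow_up,
    }

print(follow_up_schedule(date(2018, 4, 2)))
```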


3. Methods to Maximize Response Rates and Deal with Nonresponse


Despite the KnowledgePanel features designed to maximize response rates and minimize nonresponse, we do anticipate some nonresponse. The KnowledgePanel assigns a unique ID to each participant, which can be used to track, in real time, who has responded to the survey. For those who do not initially respond, we will send follow-up emails, followed by a phone call, at regular time points. In reporting our results, we will calculate nonresponse rates according to the standards promulgated by the American Association for Public Opinion Research (AAPOR). Under this standard, the response rate will be calculated as the ratio of the number of completed cases to the number of eligible cases. RAND will request available demographic data on nonresponders and will then run appropriate statistical tests to determine whether there are significant differences between responders and non-responders.
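A minimal sketch of the two analyses mentioned above (the response-rate ratio and a responder vs. non-responder comparison); the counts, the demographic variable, and the choice of a chi-square test are illustrative assumptions, not prescribed by the protocol.

```python
from scipy.stats import chi2_contingency

# Response rate as the ratio of completed to eligible cases (illustrative counts)
completed_cases = 1000
eligible_cases = 1580
response_rate = completed_cases / eligible_cases
print(f"Response rate: {response_rate:.1%}")   # ~63.3%

# Illustrative responder vs. non-responder comparison on one demographic
# variable (e.g., age group), using counts from the panel's profile data.
#                 18-34  35-54  55+
contingency = [
    [310, 420, 270],   # responders
    [200, 230, 150],   # non-responders
]
chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"Chi-square = {chi2:.2f}, p = {p_value:.3f}")
```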





4. Tests of Procedures or Methods


RAND does not plan to pilot test the survey. Pretesting was completed prior to OMB submission (cognitive testing with a convenience sample of nine individuals). If the survey needs to be changed after fielding begins, we will submit a request for nonmaterial changes to OMB. While no further cognitive testing is planned, we will use the generic clearance mechanism for similar procedures in the future if needed.


5. Statistical and Data Collection Consultants


The survey, sampling approach, and data collection procedures were designed by the RAND Corporation under the leadership of:



Vivian Towe, Ph.D.

1200 South Hayes Street

Arlington, VA 22202

(703) 413-1100 x5178

[email protected]


Ramya Chari, Ph.D.

1200 South Hayes Street

Arlington, VA 22202

(703) 413-1100 x5216

[email protected]


The contact for additional survey questions at GfK is:


Wendy Mansfield, Ph.D.

Senior Vice President | Government & Academic

GfK, Washington, DC 20016, USA

[email protected]

(202) 686-0933

1 Costello, A.B., & Osborne, J.W. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research & Evaluation, 10(7), 1-9.

2 Knowledge Networks. (n.d.). KnowledgePanel design summary. Available at: http://www.knowledgenetworks.com/knpanel/docs/KnowledgePanel%28R%29-Design-Summary-Description.pdf. Accessed October 18, 2017.
